Load Balancing
Load balancing is a mechanism that distributes workloads across several (backend) systems, typically servers or applications, with the goal of making the offered service more performant and scalable.
Load Balancing as a Service (LBaaS), described here, offers IP-based load balancing as an alternative to DNS-based load balancing. This documentation covers how to create, configure, and manage a load balancer in the CERN cloud.
The upstream documentation is available for additional details.
Service Load Balancing vs Service Availability
Load balancing can help to increase service availability, but it is not by itself a mechanism for building highly available applications. For instance, the LBaaS offering described here relies on a single load balancer instance (a virtual machine) through which traffic is routed. The unavailability of this instance will render the backend service inaccessible.
Access to Load Balancers
All shared projects have access to the service, but with a default quota of 0.
Personal projects have no load balancing quota.
Quota for shared projects can be requested through the standard quota update request form (in the LoadBalancer section).
Concepts
- Load Balancer: a load balancer instance occupies a neutron network port and has an IP address assigned from a subnet.
- Listener: load balancers can listen for requests on multiple ports. Each one of those ports is specified by a listener.
- Pool: a pool holds a list of members that serve content through the load balancer. Pools are attached to listeners.
- Member: a load balancer backend, member of a pool.
- Health monitor: the health monitor keeps track of healthy members in a pool.
+---------------+
| |
| Load Balancer |
| 137.138.6.18 |
| |
+-------+-------+
|
+-------------+--------------+
| |
| |
+------v-------+ +--------v-------+
| | | |
| Listener | | Listener |
| Port 80/HTTP | | Port 443/HTTPS |
| | | |
+------+-------+ +--------+-------+
| |
| |
| |
+-----------------+ +---v----+ +---v----+ +-----------------+
| | | | | | | |
| Health Monitor +-------+ Pool 1 | | Pool 2 +------+ Health Monitor |
| | | | | | | |
+-----------------+ +---+----+ +---+----+ +-----------------+
| |
| |
| |
+---------------------------v----------+ +---------v----------------------------+
| | | |
| +--------+ +--------+ +--------+ | | +--------+ +--------+ +--------+ |
| |Member 1| |Member 2| --- |Member N| | | |Member 1| |Member 2| --- |Member N| |
| +--------+ +--------+ +--------+ | | +--------+ +--------+ +--------+ |
| | | |
+--------------------------------------+ +--------------------------------------+
In the examples below we use:
- mylb as the load balancer name
- mylistener as the listener name
- mypool as mylistener's default pool
- myhealthmonitor as mypool's health monitor
- 137.138.53.95 and 188.185.80.141 as the IPs of the backends (members)
Basic HTTP Load Balancer
The network id should always be CERN_NETWORK.
In this case we're setting up a basic HTTP load balancer on port 80.
After creating a load balancer instance, we can test network reachability by sending ICMP ping requests. In the worst case, the load balancer instance will start responding about 60 seconds after creation.
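The load balancer itself is created first; as in the TCP example later on this page:

```shell
# Create the load balancer on the CERN network; the virtual IP is
# assigned automatically from the subnet.
openstack loadbalancer create --name mylb --vip-network-id CERN_NETWORK
```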
Next we create a loadbalancer listener for protocol HTTP and port 80.
openstack loadbalancer listener create --name mylistener \
--protocol HTTP \
--protocol-port 80 mylb
Next we create a pool to host the members, specifying the load balancing algorithm as ROUND_ROBIN. Supported options for load balancing algorithm are ROUND_ROBIN, SOURCE_IP, SOURCE_IP_PORT and LEAST_CONNECTIONS.
openstack loadbalancer pool create --name mypool \
--lb-algorithm ROUND_ROBIN \
--listener mylistener \
--protocol HTTP
Next we add the loadbalancer members. The port is the port the backend is listening on, which may be different from the port of the listener above.
openstack loadbalancer member create --name server-1 --address 137.138.53.95 --protocol-port 80 mypool
openstack loadbalancer member create --name server-2 --address 188.185.80.141 --protocol-port 80 mypool
Next, we create an HTTP type health monitor; our backend servers have been configured with a health check at the URL path /healthcheck. Supported types for health monitors are HTTP, HTTPS, and TCP. In the case of a TCP type health monitor, the TCP service port of the backend servers is periodically probed. Please note that the health monitor is an optional resource.
openstack loadbalancer healthmonitor create --name http-monitor \
--delay 7 \
--timeout 5 \
--max-retries 3 \
--url-path /healthcheck \
--expected-codes 200,201 \
--type HTTP mypool
A TCP type health monitor can be created with the following command.
openstack loadbalancer healthmonitor create --name tcp-monitor \
--delay 7 \
--max-retries 3 \
--timeout 5 \
--type TCP mypool
Finally, we can verify our load balancer by sending requests to its virtual IP.
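A minimal sketch, assuming the load balancer was assigned the virtual IP 137.138.6.18 shown in the diagram above and the members serve HTTP on port 80:

```shell
# Successive requests to the VIP should alternate between the two
# members under the ROUND_ROBIN algorithm.
curl http://137.138.6.18/
curl http://137.138.6.18/
```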
TCP Load Balancer
This is generally suitable when load balancing a non-HTTP, TCP-based service. The following example creates a load balancer for SSH connections. Note that the load balancer listens for SSH connections on port 5555 while the backend servers use port 22 (you can use other ports as appropriate).
openstack loadbalancer create --name lb --vip-network-id CERN_NETWORK
openstack loadbalancer listener create --name tcp-listener --protocol TCP --protocol-port 5555 lb
openstack loadbalancer pool create --name tcp-pool --lb-algorithm ROUND_ROBIN --listener tcp-listener --protocol TCP --session-persistence type=SOURCE_IP
openstack loadbalancer member create --name server-1 --address 137.138.53.95 --protocol-port 22 tcp-pool
openstack loadbalancer member create --name server-2 --address 188.185.80.141 --protocol-port 22 tcp-pool
# 137.138.6.16 is the load balancer's IP address
$ ssh root@137.138.6.16 -p 5555
Last login: Mon May 25 10:55:56 2020 from lbaas-69e19c65-6d30-48f6-a3e3-04ffe7442a54.cern.ch
[root@delete-me ~]# hostname -i
188.185.80.141
UDP Load Balancer
UDP load balancing is currently experimental.
TLS termination
Please follow the upstream cookbook.
Preserve the client-ip for SSL Passthrough / TCP LoadBalancers
For use cases where you want to do TLS termination on the backend, or for non-HTTP applications, you can use the TCP protocol for both the listener and the pool. In this mode, the load balancer can't insert headers (for HTTP applications) to indicate the client's IP address, so to the backend servers all traffic will appear to originate from the load balancer.
To preserve the client IP, the PROXY protocol can be used, and many applications support it. You can create a TCP listener and a pool with protocol PROXY. Note that your backend application must support the PROXY protocol.
There is also support for the PROXYV2 protocol, which uses binary headers.
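For example, reusing the names from the TCP example above (a sketch; the listener tcp-listener is assumed to exist already):

```shell
# The listener still speaks plain TCP towards clients; the PROXY pool
# protocol makes the load balancer prepend a PROXY protocol header
# carrying the original client IP towards the members.
openstack loadbalancer pool create --name proxy-pool \
    --lb-algorithm ROUND_ROBIN \
    --listener tcp-listener \
    --protocol PROXY
```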
Kubernetes Service Type LoadBalancer
Check the corresponding kubernetes service documentation.
For troubleshooting Kubernetes Service Type LoadBalancer, there is also information in the kubernetes troubleshooting documentation.
Automatic population of members with puppet servers
At the moment there is no service automatically adding or removing nodes when you add them to a hostgroup. However, we prepared a python script that can be run in aiadm (or similar machines with access to the puppet database) in the openstack project of the loadbalancer.
You have to manually set up the loadbalancer, listener, healthmonitor and pool. After that, give the pool a description in the following format:
hostgroup=HG;port=PORT
with HG being the hostgroup that should be added to that pool and PORT being the port used for the members.
Example:
$ openstack loadbalancer pool show 44da3db2-dea9-402e-98ca-9ee24b9d2d99 -c name -c description
+-------------+----------------------------------------------------------+
| Field | Value |
+-------------+----------------------------------------------------------+
| description | hostgroup=cloud_lbaas/controller/frontend/sdn3;port=9876 |
| name | port-9876-pool |
+-------------+----------------------------------------------------------+
In this example, the pool is populated with the hosts of hostgroup cloud_lbaas/controller/frontend/sdn3 using member port 9876.
Additional parameters can be used to specify whether you want to register IPv4 and/or IPv6 addresses, as well as whether you want to include all hosts belonging to the hostgroup and its subgroups.
By default the script will not apply the changes but only report them to you. With --apply it will also update the loadbalancer, first adding the new members and then deleting the old ones.
Miscellaneous
Layer 7 Load Balancing
A Layer 7 load balancer can be used to make load balancing decisions based on the URI, host, HTTP headers, and other data in the application message. Please have a look at the L7 load balancing guide to find various use-cases of layer 7 load balancers with examples.
Load balancer Statistics
We are currently working on providing monitoring dashboards via monit-grafana.cern.ch. Please follow up with us in case you require regular updates.
Setting Load balancer Session Limit
The concurrent session limit for a load balancer can be set with the following command. The default value is -1 (unlimited).
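The limit is a listener property; a sketch, assuming the listener mylistener from the HTTP example above:

```shell
# Cap the listener at 10000 concurrent sessions; -1 (the default)
# means unlimited.
openstack loadbalancer listener set --connection-limit 10000 mylistener
```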
Setting Member Weights
The weight of a member determines the portion of requests or connections it services compared to the other members in the pool - the load is proportional to the member weight relative to the sum of all weights. The weight value can range between 0 and 256, defaulting to 1.
For further information take a look at the HAProxy documentation and search for the weight keyword.
Setting weight for a new member:
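A sketch, reusing the pool and backend address from the HTTP example above:

```shell
# A member with weight 2 receives twice as many requests as a
# weight-1 member in the same pool.
openstack loadbalancer member create --name server-1 \
    --address 137.138.53.95 \
    --protocol-port 80 \
    --weight 2 mypool
```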
Updating weight for an existing member:
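A sketch, assuming the pool mypool and member server-1 from the HTTP example above:

```shell
# Change the weight of an existing member in place.
openstack loadbalancer member set --weight 2 mypool server-1
```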
Loadbalancer pool and member names can be found by executing the following commands:
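For instance:

```shell
# List all pools in the project, then the members of a given pool.
openstack loadbalancer pool list
openstack loadbalancer member list mypool
```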
Enabling/Disabling Members
During some maintenance activities you might want to prevent some members from serving requests. This feature can help in upgrading services with zero downtime. The following commands can be used to enable/disable loadbalancer members:
openstack loadbalancer member set --enable <pool-name> <member-name>
openstack loadbalancer member set --disable <pool-name> <member-name>
Session Persistence
Session persistence is a feature of the load balancing service. It attempts to force connections or requests in the same session to be processed by the same member as long as it is active. The OpenStack LBaaS service supports three types of persistence:
- SOURCE_IP: with this persistence mode, all connections originating from the same source IP address will be handled by the same member of the pool. The following command can be used to create a pool with session persistence of type SOURCE_IP:
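A sketch, reusing mylistener from the HTTP example above (the same option is shown in the TCP example on this page):

```shell
openstack loadbalancer pool create --name mypool \
    --lb-algorithm ROUND_ROBIN \
    --listener mylistener \
    --protocol HTTP \
    --session-persistence type=SOURCE_IP
```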
- HTTP_COOKIE: with this persistence mode, the load balancer creates a cookie on the first request from a client. Subsequent requests containing the same cookie value will be handled by the same member of the pool. The following command can be used to create a pool with session persistence of type HTTP_COOKIE:
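A sketch, reusing mylistener from the HTTP example above:

```shell
openstack loadbalancer pool create --name mypool \
    --lb-algorithm ROUND_ROBIN \
    --listener mylistener \
    --protocol HTTP \
    --session-persistence type=HTTP_COOKIE
```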
- APP_COOKIE: with this persistence mode, the load balancer relies on a cookie established by the backend application. All requests carrying the same cookie value will be handled by the same member of the pool. The following command can be used to create a pool with session persistence of type APP_COOKIE:
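A sketch, reusing mylistener from the HTTP example above; APP_COOKIE additionally needs the name of the application's cookie, and JSESSIONID here is just an illustrative example:

```shell
openstack loadbalancer pool create --name mypool \
    --lb-algorithm ROUND_ROBIN \
    --listener mylistener \
    --protocol HTTP \
    --session-persistence type=APP_COOKIE,cookie_name=JSESSIONID
```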
Backup members
Multiple members can be marked as backups; load balancing will be performed among the backup servers only when all normal members are unavailable. For this feature to work, a health monitor resource must be created for the load balancer.
A member can be marked/unmarked as a backup using the following commands respectively:
openstack loadbalancer member set --enable-backup <pool-name> <member-name>
openstack loadbalancer member set --disable-backup <pool-name> <member-name>
Setting domain name for load balancer
A domain name can be set for a load balancer by adding tags. The following command can be used to set a domain name:
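Following the same tag pattern used for multiple aliases below:

```shell
# Adds the DNS alias my-domain for the load balancer mylb.
openstack loadbalancer set --tag "landb-alias=my-domain" mylb
```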
Multiple DNS aliases can be specified as multiple tags, as shown below:
openstack loadbalancer set --tag "landb-alias=my-domain-one" --tag "landb-alias=my-domain-two" --tag "landb-alias=my-domain-three" mylb
For instance, if you want to remove my-domain-two, remove the tag containing that domain name:
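Using the same unset pattern as below:

```shell
openstack loadbalancer unset --tag "landb-alias=my-domain-two" mylb
```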
If you want to remove all DNS aliases, simply remove all landb-alias tags:
openstack loadbalancer unset --tag "landb-alias=my-domain-one" --tag "landb-alias=my-domain-three" mylb
Please note that, in the worst case, the domain name will only become available after about 15 minutes, once the DNS servers have been updated.
Adding load balancer to LanDB sets
To add the load balancer to LanDB sets, you can add the tag landb-set=YOUR-SET-NAME to the load balancer.
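Following the same tag mechanism as for the DNS aliases above:

```shell
# YOUR-SET-NAME is the name of your LanDB set.
openstack loadbalancer set --tag "landb-set=YOUR-SET-NAME" mylb
```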
Note that you will need to configure the LanDB set to allow the load balancer project and our user to have access; see the documentation for the Properties. The UUID of the project that needs to be added is cc059d57-6e98-4688-a3be-aae2b451868b, in addition to the project id of your own project.
As an example, the description of your LanDB set should contain something like this:
The egroup set as the Set's "Responsible" needs to include the "openstack-landb-set-access" egroup as a member. Note that it is your egroup that needs to include "openstack-landb-set-access": if you set the "openstack-landb-set-access" egroup directly as the Set responsible, you lose access to the LanDB Set.
Other annotations for a load balancer
We support several of the CERN-specific properties listed in Properties.
Since Octavia does not currently support properties on the load balancer, we use tags for this purpose.
Supported are landb-alias, landb-set, landb-mainuser and landb-ipv6ready with their corresponding values (e.g. the tag landb-ipv6ready=true).
Deleting a load balancer
Load balancer resources should be deleted in the following order:
- Members:
$ openstack loadbalancer member delete <pool-id> <member-id>
- Health monitor:
$ openstack loadbalancer healthmonitor delete <healthmonitor-id>
- Pool:
$ openstack loadbalancer pool delete <pool-id>
- Listener:
$ openstack loadbalancer listener delete <listener-id>
- Loadbalancer:
$ openstack loadbalancer delete <loadbalancer-id>
Alternatively, a load balancer can now be deleted together with all of its resources with:
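The cascade option removes the load balancer and all of its listeners, pools, members and health monitors in one go:

```shell
openstack loadbalancer delete --cascade <loadbalancer-id>
```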
Getting a keytab for a load balancer
Note
Please run the following procedure on lxplus/aiadm. Your load balancer must exist in DNS first.
- Get the computer name from LDAP for your load balancer:
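This is the same query used in the full script at the end of this section; the computer name is the sAMAccountName without the trailing $:

```shell
$ lb="lbaas-fa7e7f60-805c-49f2-afa2-b7aa180a2a11"
$ ldapsearch -x -H "ldap://xldap.cern.ch:389" -b "DC=cern,DC=ch" "cn=$lb" sAMAccountName
```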
- Pick a domain controller:
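Any of the domain controllers listed in the Kerberos SRV records will do:

```shell
$ dig +short -t SRV _kerberos._tcp.cern.ch
```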
- Generate the keytab:
$ lb="lbaas-fa7e7f60-805c-49f2-afa2-b7aa180a2a11"
$ computer_name="48A9A0-52OA4H23E9M8"  # omit the last $ character
$ cerndc="cerndc56.cern.ch"
$ msktutil update -s host --computer-name ${computer_name} --hostname ${lb}.cern.ch --keytab ~/${lb}.keytab --dont-expire-password --dont-update-dnshostname --base OU=Computers --verbose --server ${cerndc}
$ msktutil update -s host --computer-name ${computer_name} --hostname ${lb}.cern.ch --keytab ~/${lb}.keytab --dont-expire-password --dont-update-dnshostname --base OU=Computers --verbose --server ${cerndc} --dont-change-password
$ klist -k ~/${lb}.keytab
All commands in a script:
#!/bin/bash
lb="$1"
if [[ $(host "$lb") = *"not found"* ]] ; then
echo "${lb} does not exist in DNS, exit."
exit 1
fi
computer_name=$(ldapsearch -x -H "ldap://xldap.cern.ch:389" -b "DC=cern,DC=ch" "cn=$lb" sAMAccountName | grep ^sAMAccountName | sed 's/\$//g' | awk '{print $2}')
# Pick the first domain controller from the Kerberos SRV records;
# the target hostname is the fourth field of the +short output.
cerndc=$(dig +short -t SRV _kerberos._tcp.cern.ch | awk 'NR==1{print $4}' | sed 's/\.$//')
msktutil update -s host --computer-name ${computer_name} --hostname ${lb}.cern.ch --keytab ~/${lb}.keytab --dont-expire-password --dont-update-dnshostname --base OU=Computers --verbose --server ${cerndc}
msktutil update -s host --computer-name ${computer_name} --hostname ${lb}.cern.ch --keytab ~/${lb}.keytab --dont-expire-password --dont-update-dnshostname --base OU=Computers --verbose --server ${cerndc} --dont-change-password
klist -k ~/${lb}.keytab