Load Balancing

Load Balancing as a Service (LBaaS) offers load balancing based on virtual IPs, as an alternative to DNS-based load balancing.

Availability

All shared projects have access to the service but have a default quota of 0.

Personal projects have no load balancing quota.

Quota can be requested through the standard form in the Network section.

When a quota request is approved, users will notice a new region named sdn1 in Horizon.

The sdn1 region is used only for creating load balancers. Please switch to the cern region to access the standard Horizon functionality.

Similarly, when using the command line, users have to set the environment variable OS_REGION_NAME=sdn1 to manage load balancers. OS_REGION_NAME should be set to cern for accessing previously created VMs, Magnum clusters, etc.
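As a sketch, you can also scope the region to a single command instead of exporting it, so the rest of your shell session keeps targeting the cern region:

```shell
# Manage load balancers in the sdn1 region for this one command only;
# the shell's default region is left untouched.
OS_REGION_NAME=sdn1 openstack loadbalancer list

# Subsequent commands still target the cern region, e.g.:
OS_REGION_NAME=cern openstack server list
```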

Concepts

  • Load Balancer: a load balancer instance occupies a neutron network port and has an IP address assigned from a subnet.
  • Listener: a load balancer can listen for requests on multiple ports; each of those ports is specified by a listener.
  • Pool: a pool holds the list of members that serve content through the load balancer. Pools are attached to listeners.
  • Member: a load balancer backend; each member belongs to a pool.
  • Health monitor: the health monitor keeps track of the healthy members in a pool.
                                    +---------------+
                                    |               |
                                    | Load Balancer |
                                    | 137.138.6.18  |
                                    |               |
                                    +-------+-------+
                                            |
                              +-------------+--------------+
                              |                            |
                              |                            |
                       +------v-------+           +--------v-------+
                       |              |           |                |
                       |   Listener   |           |    Listener    |
                       | Port 80/HTTP |           | Port 443/HTTPS |
                       |              |           |                |
                       +------+-------+           +--------+-------+
                              |                            |
                              |                            |
                              |                            |
+-----------------+       +---v----+                   +---v----+      +-----------------+
|                 |       |        |                   |        |      |                 |
|  Health Monitor +-------+ Pool 1 |                   | Pool 2 +------+  Health Monitor |
|                 |       |        |                   |        |      |                 |
+-----------------+       +---+----+                   +---+----+      +-----------------+
                              |                            |
                              |                            |
                              |                            |
  +---------------------------v----------+       +---------v----------------------------+
  |                                      |       |                                      |
  | +--------+ +--------+     +--------+ |       | +--------+ +--------+     +--------+ |
  | |Member 1| |Member 2| --- |Member N| |       | |Member 1| |Member 2| --- |Member N| |
  | +--------+ +--------+     +--------+ |       | +--------+ +--------+     +--------+ |
  |                                      |       |                                      |
  +--------------------------------------+       +--------------------------------------+

In the example below we use:

  • mylb as the load balancer name
  • mylistener as the listener name
  • mypool as mylistener's default pool
  • myhealthmonitor as mypool's health monitor
  • 137.138.53.95 and 188.185.80.141 as the IPs of the backends (members)

Basic HTTP Load Balancer

The network ID should always be public.

In this case we're setting up a basic HTTP load balancer on port 80. We have to set the region name to sdn1 for now; you might want to unset this environment variable once you're done creating load balancer resources.

export OS_REGION_NAME=sdn1

openstack loadbalancer create --name mylb --vip-network-id public

After creating the load balancer instance, we test network reachability by sending ICMP ping requests. In the worst case, the instance may take up to 60 seconds after creation before it starts responding.

ping <loadbalancer-virtual-ip>

Next we create a loadbalancer listener for protocol HTTP and port 80.

openstack loadbalancer listener create --name mylistener \
                                       --protocol HTTP \
                                       --protocol-port 80 mylb

Next we create a pool to host the members, specifying the load balancing algorithm as ROUND_ROBIN. Supported options for load balancing algorithm are ROUND_ROBIN, SOURCE_IP and LEAST_CONNECTIONS.

openstack loadbalancer pool create --name mypool  \
                                   --lb-algorithm ROUND_ROBIN \
                                   --listener mylistener \
                                   --protocol HTTP

Next we add the loadbalancer members. The port is the port the backend is listening on, which may be different from the port of the listener above.

openstack loadbalancer member create --name server-1 --address 137.138.53.95 --protocol-port 80 mypool
openstack loadbalancer member create --name server-2 --address 188.185.80.141 --protocol-port 80 mypool

Next, we create an HTTP type health monitor; our backend servers have been configured with a health check at the URL path /healthcheck. Supported health monitor types are HTTP, HTTPS, and TCP. A TCP type health monitor periodically probes the TCP service port of the backend servers. Please note that the health monitor is an optional resource.

openstack loadbalancer healthmonitor create --name http-monitor \
                                            --delay 7 \
                                            --timeout 5 \
                                            --max-retries 3 \
                                            --url-path /healthcheck \
                                            --expected-codes 200,201 \
                                            --type HTTP mypool

A TCP type health monitor can be created with the following command.

openstack loadbalancer healthmonitor create --name tcp-monitor \
                                            --delay 7 \
                                            --max-retries 3 \
                                            --timeout 5 \
                                            --type TCP mypool

Finally, we can verify our load balancer by sending requests to the virtual IP.

curl http://<loadbalancer-virtual-IP>

Some response sent by backend
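With the ROUND_ROBIN algorithm, repeated requests should be spread across both members. As a quick sketch of a check (assuming each backend returns something identifying itself, such as its hostname):

```shell
# Send a few requests; with ROUND_ROBIN the responses should alternate
# between server-1 and server-2.
for i in 1 2 3 4; do
  curl -s http://<loadbalancer-virtual-IP>
done
```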

TCP Load Balancer

This is generally suitable when load balancing a non-HTTP, TCP-based service. The following example creates a load balancer for SSH connections. Note that the load balancer listens for SSH connections on port 5555 while the backend servers use port 22 (you can use other ports as appropriate).

openstack loadbalancer create --name lb --vip-network-id public
openstack loadbalancer listener create --name tcp-listener --protocol TCP --protocol-port 5555 lb
openstack loadbalancer pool create --name tcp-pool --lb-algorithm ROUND_ROBIN --listener tcp-listener --protocol TCP --session-persistence type=SOURCE_IP
openstack loadbalancer member create --name server-1 --address 137.138.53.95 --protocol-port 22 tcp-pool
openstack loadbalancer member create --name server-2 --address 188.185.80.141 --protocol-port 22 tcp-pool

#137.138.6.16 is load balancer's IP address
$ ssh root@137.138.6.16 -p 5555
Last login: Mon May 25 10:55:56 2020 from lbaas-69e19c65-6d30-48f6-a3e3-04ffe7442a54.cern.ch
[root@delete-me ~]# hostname -i 
188.185.80.141

TLS termination

This section describes the steps to create a load balancer to serve TLS terminated traffic.

First of all we have to generate a test certificate.

openssl req -newkey rsa:2048 -x509 -sha256 -days 365 -nodes \
            -out tls.crt -keyout tls.key \
            -subj "/CN=test-lb.cern.ch/emailAddress=hamza.zafar@cern.ch"

Next we have to store the certificate in Barbican (OpenStack's secret store).

openstack secret store --payload-content-type='text/plain' \
                       --name='certificate' \
                       --payload="$(cat tls.crt)"
+---------------+--------------------------------------------------------------------------------+
| Field         | Value                                                                          |
+---------------+--------------------------------------------------------------------------------+
| Secret href   | https://openstack.cern.ch:9311/v1/secrets/7454acfb-e086-4073-b9dc-ff075081513c |
| Name          | certificate                                                                    |
| Created       | None                                                                           |
| Status        | None                                                                           |
| Content types | {u'default': u'text/plain'}                                                    |
| Algorithm     | aes                                                                            |
| Bit length    | 256                                                                            |
| Secret type   | opaque                                                                         |
| Mode          | cbc                                                                            |
| Expiration    | None                                                                           |
+---------------+--------------------------------------------------------------------------------+

Next we have to store the private key in Barbican

openstack secret store --payload-content-type='text/plain' \
                       --name='private_key' \
                       --payload="$(cat tls.key)"
+---------------+--------------------------------------------------------------------------------+
| Field         | Value                                                                          |
+---------------+--------------------------------------------------------------------------------+
| Secret href   | https://openstack.cern.ch:9311/v1/secrets/098e21a1-200f-40f3-9382-b361f40b4275 |
| Name          | private_key                                                                    |
| Created       | None                                                                           |
| Status        | None                                                                           |
| Content types | {u'default': u'text/plain'}                                                    |
| Algorithm     | aes                                                                            |
| Bit length    | 256                                                                            |
| Secret type   | opaque                                                                         |
| Mode          | cbc                                                                            |
| Expiration    | None                                                                           |
+---------------+--------------------------------------------------------------------------------+

Next we have to create a secret container in Barbican. The secret hrefs can be found by executing openstack secret list.

openstack secret container create --name='lb_tls_container' \
                                  --type='certificate' \
                                  --secret="certificate=https://openstack.cern.ch:9311/v1/secrets/7454acfb-e086-4073-b9dc-ff075081513c" \
                                  --secret="private_key=https://openstack.cern.ch:9311/v1/secrets/098e21a1-200f-40f3-9382-b361f40b4275"
+----------------+-----------------------------------------------------------------------------------+
| Field          | Value                                                                             |
+----------------+-----------------------------------------------------------------------------------+
| Container href | https://openstack.cern.ch:9311/v1/containers/5a2eef97-d1b0-487b-bae6-54a1432a79d8 |
| Name           | lb_tls_container                                                                  |
| Created        | None                                                                              |
| Status         | ACTIVE                                                                            |
| Type           | certificate                                                                       |
| Certificate    | https://openstack.cern.ch:9311/v1/secrets/7454acfb-e086-4073-b9dc-ff075081513c    |
| Intermediates  | None                                                                              |
| Private Key    | https://openstack.cern.ch:9311/v1/secrets/098e21a1-200f-40f3-9382-b361f40b4275    |
| PK Passphrase  | None                                                                              |
| Consumers      | None                                                                              |
+----------------+-----------------------------------------------------------------------------------+

The following commands create a TLS-terminated load balancer.

export OS_REGION_NAME=sdn1

openstack loadbalancer create --name lb --vip-network-id public

openstack loadbalancer listener create --name https_listener \
                                       --protocol TERMINATED_HTTPS \
                                       --default-tls-container-ref https://openstack.cern.ch:9311/v1/containers/5a2eef97-d1b0-487b-bae6-54a1432a79d8 \
                                       --protocol-port 443 lb


openstack loadbalancer pool create --name pool \
                                   --lb-algorithm ROUND_ROBIN \
                                   --listener https_listener \
                                   --protocol HTTP

openstack loadbalancer member create --address 188.185.80.141 --protocol-port 80 pool
openstack loadbalancer member create --address 137.138.53.95 --protocol-port 80 pool

Note: the HTTP protocol is specified for the pool because the backends (members) serve HTTP content on port 80.

The last step is to verify SSL termination. The load balancer's virtual IP can be found by executing openstack loadbalancer list.

curl -k https://<loadbalancer's virtual ip>

Some response from backend server

Non-terminated HTTPS Loadbalancer - SSL Passthrough

To create a non-terminated HTTPS load balancer, the user has to create a listener and pool of protocol type HTTPS. The load balancer will then forward raw TCP traffic from the client to the backend servers. In this mode, the load balancer can't insert headers indicating the client's IP address, so traffic will appear to the backend servers to originate from the load balancer. The backend servers should be configured to terminate the HTTPS connections themselves.
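The commands mirror the earlier examples; only the protocol changes. A minimal sketch (the resource names are illustrative, reusing the backend IPs from above):

```shell
openstack loadbalancer create --name https-lb --vip-network-id public

# HTTPS protocol on both listener and pool: traffic is passed through
# unmodified and TLS is terminated on the backends.
openstack loadbalancer listener create --name https-listener \
                                       --protocol HTTPS \
                                       --protocol-port 443 https-lb

openstack loadbalancer pool create --name https-pool \
                                   --lb-algorithm ROUND_ROBIN \
                                   --listener https-listener \
                                   --protocol HTTPS

# Members serve TLS themselves on port 443.
openstack loadbalancer member create --address 137.138.53.95 --protocol-port 443 https-pool
openstack loadbalancer member create --address 188.185.80.141 --protocol-port 443 https-pool
```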

Kubernetes Service Type Loadbalancer

This section contains details for creating a service of type LoadBalancer for Kubernetes. Kubernetes version 1.17 or greater is required.

Kubernetes cluster template kubernetes-1.17.5-1 is used in this example.

The first step is to create a pod. In this example we create an nginx pod serving content on port 80.

$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
   app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

$ kubectl create -f nginx.yaml
pod/nginx created

The next step is to create a service of type LoadBalancer for our nginx pod. For clusters <=1.18 you need extra annotations; the value for the network-id annotation can be found by running the following command:

$ echo $(export OS_REGION_NAME=sdn1 && openstack network show public | awk '$2=="id" {print $4}')
798d00f3-2af9-48a0-a7c3-a26d909a2d64

$ cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
  annotations:
    # These annotations are only required for cluster templates <=1.18
    loadbalancer.openstack.org/network-id: "798d00f3-2af9-48a0-a7c3-a26d909a2d64"
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
    loadbalancer.openstack.org/cascade-delete: "false"
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
  type: LoadBalancer

$ kubectl create -f nginx-service.yaml
service/nginxservice created

The next step is to find the public IP address assigned to the LoadBalancer service.

$ kubectl get svc --watch
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
kubernetes     ClusterIP      10.254.0.1       <none>         443/TCP        33m
nginxservice   LoadBalancer   10.254.126.138   137.138.6.16   80:30741/TCP   45s

The final step is to verify that our load balancer is able to serve content from the Kubernetes pods.

$ curl 137.138.6.16
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Miscellaneous

Load balancer Statistics

Statistics for a load balancer can be viewed by using the following command.

$ openstack loadbalancer stats show mylb --all
+---------------------------+-------+
| Field                     | Value |
+---------------------------+-------+
| HTTP_1xx_responses        | 0     |
| HTTP_2xx_responses        | 9     |
| HTTP_3xx_responses        | 13    |
| HTTP_4xx_responses        | 4     |
| HTTP_5xx_responses        | 0     |
| HTTP_other_responses      | 0     |
| active_connections        | 0     |
| bytes_in                  | 7769  |
| bytes_out                 | 5439  |
| request_errors            | 4     |
| request_rate_max_recorded | 5     |
| request_rate_per_sec      | 0     |
| request_total             | 26    |
| session_limit             | 2000  |
| session_rate_limit        | 0     |
| session_rate_max_recorded | 3     |
| session_rate_per_sec      | 0     |
| total_connections         | 16    |
+---------------------------+-------+

Setting Load balancer Session Limit

The concurrent session limit for a load balancer can be set with the following command. The default value is 2000.

openstack loadbalancer listener set --connection-limit=5000 <listener-name>

The session limit value can be verified by checking the load balancer statistics.
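For example, after updating the listener from the earlier example, the new limit should show up in the session_limit field of the statistics:

```shell
openstack loadbalancer listener set --connection-limit=5000 mylistener

# session_limit should now report 5000 instead of the default 2000.
openstack loadbalancer stats show mylb --all | grep session_limit
```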

Setting Member Weights

The weight of a member determines the portion of requests or connections it services compared to the other members of the pool. By default, all load balancer members handle an equal share of requests.

Setting weight for a new member:

openstack loadbalancer member create --address 188.185.80.141 --weight 2 --protocol-port 80 pool

Updating weight for an existing member:

openstack loadbalancer member set --weight 2 <pool-name> <member-name>

Loadbalancer pool and member names can be found by executing the following commands:

openstack loadbalancer pool list

openstack loadbalancer member list <pool-name>

Enabling/Disabling Members

During some maintenance activities you might want to stop some members from serving requests. This feature can help in upgrading services with zero downtime. The following commands can be used to enable/disable load balancer members:

openstack loadbalancer member set --enable <pool-name> <member-name>
openstack loadbalancer member set --disable <pool-name> <member-name>

Session Persistence

Session persistence is a feature of the load balancing service. It attempts to force connections or requests in the same session to be processed by the same member as long as that member is active. The OpenStack LBaaS service supports three types of persistence:

  • SOURCE_IP:

    With this persistence mode, all connections originating from the same source IP address will be handled by the same member of the pool. The following command creates a pool with session persistence of type SOURCE_IP:

    openstack loadbalancer pool create --name <pool-name> --lb-algorithm ROUND_ROBIN --listener <listener-id> --protocol HTTP --session-persistence type=SOURCE_IP

  • HTTP_COOKIE:

    With this persistence mode, the load balancer creates a cookie on the first request from a client. Subsequent requests carrying the same cookie value will be handled by the same member of the pool. The following command creates a pool with session persistence of type HTTP_COOKIE:

    openstack loadbalancer pool create --name <pool-name> --lb-algorithm ROUND_ROBIN --listener <listener-id> --protocol HTTP --session-persistence type=HTTP_COOKIE

  • APP_COOKIE:

    With this persistence mode, the load balancer relies on a cookie established by the backend application. All requests carrying the same cookie value will be handled by the same member of the pool. The following command creates a pool with session persistence of type APP_COOKIE:

    openstack loadbalancer pool create --name <pool-name> --lb-algorithm ROUND_ROBIN --listener <listener-id> --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=<cookie-name>

Backup members

Multiple members can be marked as backups; load balancing is performed among the backup servers only when all normal members are unavailable. For this feature to work, a health monitor resource must be created for the load balancer.

A member can be marked/unmarked as a backup by using the following commands respectively:

openstack loadbalancer member set --enable-backup <pool-name> <member-name>
openstack loadbalancer member set --disable-backup <pool-name> <member-name>

Setting domain name for load balancer

A domain name can be set for a load balancer by updating its description field. The following command sets the domain name:

openstack loadbalancer set --description my-domain-name mylb

ping my-domain-name.cern.ch

Multiple DNS aliases can be specified in the description field as shown below:

openstack loadbalancer set --description "my-domain-one,my-domain-two,my-domain-three" mylb

If you want to remove my-domain-two, update the description field as shown below:

openstack loadbalancer set --description "my-domain-one,my-domain-three" mylb

If you want to remove all DNS aliases, simply set the description field to an empty string:

openstack loadbalancer set --description "" mylb

Please note that, in the worst case, the domain name will only become available after 15 minutes.

Setting security group rules for a load balancer

In this section, we will explore the feature of setting security group rules for load balancers. Please have a look at the upstream documentation for OpenStack Neutron Security Groups to get acquainted with concepts and common CLI commands for security groups.

Security group rules are applied to the neutron port of the load balancer. The vip_port_id field in the output of the openstack loadbalancer show command displays the neutron port ID of the load balancer.

$ export OS_REGION_NAME=sdn1

$ openstack loadbalancer show lb -c vip_port_id
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| vip_port_id | bfbf7d43-d954-4f47-abbf-80e4126b498a |
+-------------+--------------------------------------+

$ openstack port show bfbf7d43-d954-4f47-abbf-80e4126b498a -c security_group_ids
+--------------------+--------------------------------------+
| Field              | Value                                |
+--------------------+--------------------------------------+
| security_group_ids | 550af890-4d1b-4873-bfca-c0eb4d33542b |
+--------------------+--------------------------------------+

$ openstack security group show 550af890-4d1b-4873-bfca-c0eb4d33542b -c name -c id
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 550af890-4d1b-4873-bfca-c0eb4d33542b |
| name  | default                              |
+-------+--------------------------------------+

$ openstack security group rule list 550af890-4d1b-4873-bfca-c0eb4d33542b --long --sort-column Direction
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group                |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+
| 8d0de881-e2d3-4d0a-849a-9c33ed63637c | any         | IPv4      | 0.0.0.0/0 | 0:65535    | egress    | None                                 |
| 24226828-9107-492c-b4bc-be844d8f54a7 | any         | IPv6      | ::/0      | 0:65535    | egress    | None                                 |
| 89b47e80-1f8b-49ea-8fa1-4066a3d7f128 | any         | IPv4      | 0.0.0.0/0 | 0:65535    | ingress   | 550af890-4d1b-4873-bfca-c0eb4d33542b |
| ba851ad5-fa70-4464-85e0-2cc003a6212f | any         | IPv6      | ::/0      | 0:65535    | ingress   | 550af890-4d1b-4873-bfca-c0eb4d33542b |
| 2b16b3ba-576c-4649-8009-501a779f1640 | any         | IPv4      | 0.0.0.0/0 | 0:65535    | ingress   | None                                 |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+--------------------------------------+

The output above shows that the neutron port of the load balancer uses the default security group, whose rules allow network traffic of all kinds. The load balancer is listening for TCP connections on port 5555, and the following output shows that we can open connections to it on that port.

$ nc -z -v 137.138.6.94 5555
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 137.138.6.94:5555.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

We don't recommend changing the rules of the default security group. Instead, we will create a new security group and use it to apply firewall rules to our load balancer.

$ openstack security group create test-sec-group

# Egress rules are automatically created upon security group creation
$ openstack security group rule list test-sec-group --long
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+
| 7de25c18-db07-4f3a-b549-1e8b8ff1a58f | any         | IPv4      | 0.0.0.0/0 | 0:65535    | egress    | None                  |
| ebb5f9cf-ebd6-4467-aef5-09f379a08d07 | any         | IPv6      | ::/0      | 0:65535    | egress    | None                  |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+

# Remove the default security group from the load balancer's neutron port
$ openstack port set --no-security-group  bfbf7d43-d954-4f47-abbf-80e4126b498a

# Assign this new security group to the loadbalancer's neutron port
$ openstack port set --security-group test-sec-group bfbf7d43-d954-4f47-abbf-80e4126b498a

The new security group doesn't contain any ingress rules, so TCP connections to port 5555 will be dropped. Let's add a rule to allow TCP connections on port 5555.

$ nc -z -v 137.138.6.94 5555
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.

$ openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 5555 test-sec-group

$ nc -z -v 137.138.6.94 5555
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 137.138.6.94:5555.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

We can also add a rule to allow ICMP ping requests.

$ ping -c 1 137.138.6.94
PING 137.138.6.94 (137.138.6.94) 56(84) bytes of data.

--- 137.138.6.94 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

# allow ICMP ingress traffic 
$ openstack security group rule create --ingress --remote-ip 0.0.0.0/0 --protocol icmp test-sec-group

$ ping -c 1 137.138.6.94
PING 137.138.6.94 (137.138.6.94) 56(84) bytes of data.
64 bytes from 137.138.6.94: icmp_seq=1 ttl=62 time=0.559 ms

The firewall rules allowing ICMP ping requests and TCP connections on port 5555 can be seen below:

$ openstack security group rule list test-sec-group --long
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+
| ID                                   | IP Protocol | Ethertype | IP Range  | Port Range | Direction | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+
| 7de25c18-db07-4f3a-b549-1e8b8ff1a58f | any         | IPv4      | 0.0.0.0/0 | 0:65535    | egress    | None                  |
| ebb5f9cf-ebd6-4467-aef5-09f379a08d07 | any         | IPv6      | ::/0      | 0:65535    | egress    | None                  |
| 5e3cf2e6-0eba-4a4c-973b-2da4c3893939 | tcp         | IPv4      | 0.0.0.0/0 | 5555:5555  | ingress   | None                  |
| 31a1b938-e315-4b31-8702-8eb80f881c61 | icmp        | IPv4      | 0.0.0.0/0 |            | ingress   | None                  |
+--------------------------------------+-------------+-----------+-----------+------------+-----------+-----------------------+

Known Issues

  • Remove 0.0.0.0/0 Egress Rule: We have noticed that in some cases security group rules are not applied instantly to the load balancer. A workaround is to remove the 0.0.0.0/0 egress rule and create a new egress rule with an IP range that covers your load balancer's backend members. If your load balancer members are spread across multiple subnets, you can create one /32 egress rule per load balancer member.
# remove 0.0.0.0/0 egress rule
$ openstack security group rule delete 7de25c18-db07-4f3a-b549-1e8b8ff1a58f

# add egress rule, 188.185.86.0/24 is the subnet for load balancer's members
$ openstack security group rule create --egress --remote-ip 188.185.86.0/24 custom-sec-group

Another reason to remove the 0.0.0.0/0 egress rule is to prevent your backend members from taking part in source IP address spoofing attacks. For more details, refer to BCP38.

  • The load balancer won't have connectivity if you move its port back to the default security group. Stick with the new security group and create rules to control the network traffic.

Deleting a load balancer

Load balancer resources should be deleted in the following order:

  • Members: $ openstack loadbalancer member delete <pool-id> <member-id>
  • Health monitor: $ openstack loadbalancer healthmonitor delete <healthmonitor-id>
  • Pool: $ openstack loadbalancer pool delete <pool-id>
  • Listener: $ openstack loadbalancer listener delete <listener-id>
  • Loadbalancer: $ openstack loadbalancer delete <loadbalancer-id>

You can find the IDs of load balancer resources by using the following commands:

# Filter pool by loadbalancer ID
$ openstack loadbalancer pool list -c id --loadbalancer <loadbalancer-id>
+--------------------------------------+
| id                                   |
+--------------------------------------+
| f88d4fd3-fcca-4d5d-8b0b-a668005c8544 |
+--------------------------------------+

$ openstack loadbalancer pool show f88d4fd3-fcca-4d5d-8b0b-a668005c8544 -c members -c healthmonitor_id -c loadbalancers -c listeners
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| healthmonitor_id | 92ae362d-4e48-400d-86d5-a07b8a1f8baf |
| listeners        | 1b9d7f51-9fb1-4c92-8c42-14628bc44475 |
| loadbalancers    | 4266ee4d-8e79-424b-8bab-a0201936f3cd |
| members          | f60b1335-d235-4999-92d3-6a13b0eaa338 |
|                  | a5aae9f5-297d-4485-8ae3-ba0b6df2ecdb |
|                  | 3f09d9d7-189e-4fb5-a986-0d4cd7d14182 |
+------------------+--------------------------------------+