Load Balancing
Warning
This documentation is deprecated; please check here for its new home.
Kubernetes services can be exposed externally by an Ingress resource or by a Kubernetes service of type LoadBalancer. An Ingress controller is an in-cluster resource, whereas a Kubernetes service of type LoadBalancer exposes services through an external cloud load balancer. Ingress covers a wide range of use cases, as it exposes services via HTTP and HTTPS routes; however, there are cases where users want to expose services that rely on protocols other than HTTP/HTTPS, or to expose services on arbitrary ports. In these cases a Kubernetes service of type LoadBalancer can be used.
Service Type Load Balancers
When creating a Kubernetes Service, users have the option to expose their service to external networks by setting the type field to LoadBalancer. This provisions an external cloud load balancer for the Kubernetes Service.
The Kubernetes Service Type LoadBalancer section contains details on creating a service of type LoadBalancer in CERN's cloud.
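As a minimal sketch, a Service of type LoadBalancer could look like the following; the name, selector and ports are placeholders for illustration only:
$ cat loadbalancer-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app                   # must match the labels of your backend pods
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
$ kubectl create -f loadbalancer-service.yaml
Once the external load balancer has been provisioned, its address appears in the EXTERNAL-IP column of kubectl get service.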
Ingress Controller
By default the Ingress Controller is Traefik, which is usually all you need.
In some cases you might need Nginx; examples of this are TCP-based forwarding or SSL passthrough. To enable Nginx instead of Traefik, pass the following labels on cluster creation:
$ openstack coe cluster create <name> \
--cluster-template <cluster-template> \
--merge-labels \
--labels ingress_controller=nginx
Cluster Setup
After cluster creation you need to select the nodes where the ingress controller will run. Choose more than one to cover for failures.
Do this by labelling those nodes with role=ingress:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
mycluster-7xdvoefvuz2l-minion-0 Ready <none> 1d v1.11.2
mycluster-7xdvoefvuz2l-minion-1 Ready <none> 1d v1.11.2
mycluster-7xdvoefvuz2l-minion-2 Ready <none> 1d v1.11.2
$ kubectl label node mycluster-7xdvoefvuz2l-minion-0 role=ingress
$ kubectl label node mycluster-7xdvoefvuz2l-minion-1 role=ingress
You should see one instance of the ingress controller on each of the labeled nodes (it is defined as a DaemonSet).
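To verify, list the controller pods together with the nodes they run on. The namespace and pod names depend on the controller deployed in your cluster, so the grep pattern below is only indicative:
$ kubectl -n kube-system get pods -o wide | grep -iE 'traefik|ingress'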
Simple HTTP Ingress
An Ingress exposes a given Service outside the cluster.
We will use a simple Deployment and Service for our examples, based on an nginx backend.
$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-loadbalance
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-loadbalance
  template:
    metadata:
      labels:
        app: nginx-loadbalance
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.5
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 80
  - name: https
    protocol: TCP
    port: 8443
    targetPort: 443
  selector:
    app: nginx-loadbalance
$ kubectl create -f nginx-deployment.yaml
Our service exposes ports 8080 (plain http) and 8443 (https).
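As an optional sanity check before configuring the Ingress, you can port-forward to the Service and query it locally:
$ kubectl port-forward service/nginx-service 8080:8080 &
$ curl http://localhost:8080/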
Below we define our Ingress resource, exposing the plain HTTP port of the Service. Check here for more details on the Kubernetes Ingress resource.
$ cat nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.entryPoints: "http"
spec:
  rules:
  - host: myclusterdns.cern.ch
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
$ kubectl create -f nginx-ingress.yaml
We defined the Ingress based on the host, so we need to add the corresponding DNS aliases in LanDB - in this case myclusterdns. This must be done for all the nodes we've marked with role=ingress above.
$ openstack server set --property landb-alias=myclusterdns--load-1- mycluster-7xdvoefvuz2l-minion-0
$ openstack server set --property landb-alias=myclusterdns--load-2- mycluster-7xdvoefvuz2l-minion-1
You need to wait for the DNS refresh to happen (~15min). After that the nginx default page should now be reachable at myclusterdns.cern.ch.
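Once the aliases are active, a quick check with curl should return the default nginx welcome page:
$ curl http://myclusterdns.cern.ch/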
Ingress HTTP with SSL Termination
A common setup is to enable and terminate TLS at the Ingress.
For this purpose we will start by creating a test certificate - for a production service you will rely on a real certificate.
$ openssl req -newkey rsa:2048 -x509 -sha256 -days 365 -nodes -out tls.crt -keyout tls.key -subj "/CN=myclusterdns.cern.ch/emailAddress=your.email@cern.ch"
Then store the certificate as a secret in the cluster.
$ kubectl create secret tls mycluster-tls-cert --key=tls.key --cert=tls.crt
Optionally you can also configure an HTTP to HTTPS redirection.
The Ingress resource is only slightly different from the one above - note the new annotations specifying the http and https entry points and the redirect to https, plus the tls entry in the spec:
$ cat nginx-ingress-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  tls:
  - hosts:
    - myclusterdns.cern.ch
    secretName: mycluster-tls-cert
  rules:
  - host: myclusterdns.cern.ch
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8080
$ kubectl create -f nginx-ingress-tls.yaml
If you created the non-TLS Ingress resource earlier, delete it before creating the TLS one.
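For example, assuming you created it from the nginx-ingress.yaml manifest above:
$ kubectl delete -f nginx-ingress.yaml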
Check here for other annotations that might be of interest. These allow you to configure SSL redirection, sticky sessions, custom headers, among other things.
Ingress HTTP Passing TLS Certificate
In some cases, even when doing SSL termination at the Ingress, you will want to pass the user certificate information to your backend.
To achieve this you need a small addition to the SSL termination example above, in the form of an annotation for Traefik (the rest of the Ingress resource remains the same):
...
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: https
    traefik.ingress.kubernetes.io/pass-client-tls-cert: |
      pem: true
      infos:
        notafter: true
        notbefore: true
        sans: true
        subject:
          country: true
          province: true
          locality: true
          organization: true
          commonname: true
          serialnumber: true
...
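To exercise this from the client side, present a client certificate during the TLS handshake. A sketch with curl, assuming your user certificate and key are available as usercert.pem and userkey.pem (placeholder file names); --insecure is only needed here because of the self-signed test certificate:
$ curl --cert usercert.pem --key userkey.pem --insecure https://myclusterdns.cern.ch/
Traefik then forwards the certificate information to the backend as request headers, which your application can parse.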
Ingress SSL Passthrough
This is only available with the nginx ingress controller; check above on how to configure it instead of Traefik.
Check here for other nginx annotations that might be of interest.
Here's an example Ingress - note the nginx-specific annotations in this case:
$ cat nginx-ingress-ssl-passthrough.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-ssl-passthrough
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - myclusterdns.cern.ch
  rules:
  - host: myclusterdns.cern.ch
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 8443
$ kubectl create -f nginx-ingress-ssl-passthrough.yaml
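With passthrough the TLS connection is terminated by the backend pods rather than by the ingress controller, so the certificate presented on port 443 should be the backend's own (this assumes your backend actually serves TLS on the target port). You can check which certificate is served with openssl:
$ openssl s_client -connect myclusterdns.cern.ch:443 -servername myclusterdns.cern.ch </dev/null 2>/dev/null | openssl x509 -noout -subject -dates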
Expose TCP ports with ingress
Traefik and nginx-ingress are configured to expose only ports 80 and 443. It is possible to expose arbitrary TCP ports with nginx-ingress. In the following example we will expose port 9000.
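The port mapping references a Service in your cluster as <namespace>/<service-name>:<service-port>. As a hedged sketch, a matching backend Service for port 9000 could look like the one below; example-service and the app selector are placeholders (the second example further down uses default/httpd instead):
$ cat example-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service    # referenced as default/example-service:9000 in the values below
  namespace: default
spec:
  selector:
    app: example-app       # placeholder, must match the labels of your backend pods
  ports:
  - name: tcp-9000
    protocol: TCP
    port: 9000
    targetPort: 9000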
Update nginx-ingress installed in magnum tiller
In your Magnum cluster you may already have nginx-ingress running, deployed via --labels ingress_controller=nginx.
You can modify the existing ingress controller to expose additional TCP ports.
First, you need to configure your helm client.
kubernetes < 1.22.3-3
$ # create a helm_home in a working directory and export HELM_HOME:
$ mkdir -p ${HOME}/ws/helm_home
$ export HELM_HOME="${HOME}/ws/helm_home"
$ export HELM_TLS_ENABLE="true"
$ export TILLER_NAMESPACE="magnum-tiller"
$ kubectl -n magnum-tiller get secret helm-client-secret -o jsonpath='{.data.ca\.pem}' | base64 --decode > "${HELM_HOME}/ca.pem"
$ kubectl -n magnum-tiller get secret helm-client-secret -o jsonpath='{.data.key\.pem}' | base64 --decode > "${HELM_HOME}/key.pem"
$ kubectl -n magnum-tiller get secret helm-client-secret -o jsonpath='{.data.cert\.pem}' | base64 --decode > "${HELM_HOME}/cert.pem"
$
$ helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
metrics-server 1 Wed Aug 28 10:24:11 2019 DEPLOYED metrics-server-2.1.0 0.3.1 kube-system
nginx-ingress 1 Wed Aug 28 10:25:41 2019 DEPLOYED nginx-ingress-1.4.0 0.23.0 kube-system
$ helm init -c --stable-repo-url=https://charts.helm.sh/stable
Once your client is configured, you can retrieve the current values of nginx-ingress and upgrade the release, passing your values, namespace and chart version.
$ helm get values nginx-ingress > values.yaml
$ vim values.yaml # add the TCP ports you want in the tcp section
$ tail -5 values.yaml
tcp:
  9000: "default/example-service:9000"
  #<ingress-port>: "<namespace>/<service-name>:<service-port>"
udp: {}
$ helm upgrade nginx-ingress stable/nginx-ingress --version=v1.4.0 --namespace=kube-system -f values.yaml --recreate-pods
Note: In this example v1.4.0 is installed in the cluster. You should use the appropriate version for your cluster.
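After the upgrade, the extra port should show up on the ingress controller Service; a quick way to check (the exact Service name may differ in your cluster):
$ kubectl -n kube-system get svc | grep nginx-ingress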
kubernetes >= 1.22.3-3
Add the chart repository for cern-magnum.
$ helm repo add releases https://registry.cern.ch/chartrepo/releases
Find the cern-magnum release.
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
cern-magnum kube-system 1 2022-03-03 10:48:47.893725488 +0000 UTC deployed cern-magnum-0.10.2
Get the current values, and write the additional values needed to expose arbitrary TCP or UDP ports into a separate file.
$ helm -n kube-system get values cern-magnum -o yaml > cern-magnum.yaml
$ cat nginx-ingress.yaml
ingress-nginx:
  enabled: true
  tcp:
    9000: "default/httpd:9000"
    #<ingress-port>: "<namespace>/<service-name>:<service-port>"
Upgrade the release using the same chart version.
$ helm -n kube-system upgrade cern-magnum releases/cern-magnum --version v0.10.2 -f cern-magnum.yaml -f nginx-ingress.yaml
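To confirm the change, you can check that the new values were applied and that the controller Service now exposes the extra port (the Service name below is indicative and may differ in your cluster):
$ helm -n kube-system get values cern-magnum -o yaml | grep -A2 tcp
$ kubectl -n kube-system get svc | grep ingress-nginx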