Auto scaling is currently only available for Kubernetes clusters.
The Kubernetes Cluster Autoscaler observes the resource requests made by pods in the cluster, and:
- Adds nodes if pods are stuck in the Pending state due to a lack of CPU or memory.
- Removes nodes which have no pods running on them.
- Rebalances pods in the cluster to improve overall resource usage.
Autoscaling is not enabled by default on the cluster and requires one label to be specified during cluster creation:
```
$ openstack coe cluster create <name> --cluster-template <cluster-template> \
    --node-count 4 --merge-labels \
    --labels auto_scaling_enabled=true
```
The minimum and maximum node count default to 1 and --node-count respectively. Alternatively the minimum and/or maximum number of nodes can be set manually:
```
--labels min_node_count=3 --labels max_node_count=7
```
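Putting it together, a cluster that starts at 4 nodes but may scale between 3 and 7 could be created as follows (the cluster and template names are placeholders):

```
$ openstack coe cluster create mycluster --cluster-template <cluster-template> \
    --node-count 4 --merge-labels \
    --labels auto_scaling_enabled=true \
    --labels min_node_count=3 \
    --labels max_node_count=7
```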
That's it! If you check the Cluster Autoscaler (CA) pod logs, you should see something like this:
```
$ kubectl -n kube-system logs -l app=cluster-autoscaler
I0621 09:31:00.801171 1 leaderelection.go:217] attempting to acquire leader lease kube-system/cluster-autoscaler...
I0621 09:31:00.877710 1 leaderelection.go:227] successfully acquired lease kube-system/cluster-autoscaler
I0621 09:31:02.962222 1 magnum_manager_heat.go:293] For stack ID 366c1341-7af9-46e5-9c5f-86c107d6f0b1, stack name is dtomasgu-ca-xckx7eo7kk3x
I0621 09:31:03.254374 1 magnum_manager_heat.go:310] Found nested kube_minions stack: name dtomasgu-ca-xckx7eo7kk3x-kube_minions-gxpdgkj4tzzn, ID 49d3e4a3-555f-4781-91c0-18f67c6cfdb0
```
The autoscaler is not aware of the resources available on OpenStack, so max_node_count should be set such that the cluster at full size will not exceed the quota limits of the OpenStack project.
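Before choosing max_node_count, the project's compute quotas (instances, cores, RAM) can be inspected with the OpenStack CLI, for example:

```
$ openstack quota show
```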
The autoscaler will also try to evict pods from underutilised nodes. By default, nodes with less than 50% utilisation are eligible for pod eviction, provided the evicted pods fit on other nodes. To prevent a pod from being evicted, add the following annotation to it:
```yaml
annotations:
  cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```
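For context, the annotation sits under the pod's metadata. A minimal sketch (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
  annotations:
    # Tell the Cluster Autoscaler never to evict this pod during scale down
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
  - name: app
    image: nginx
```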
To prevent a specific node from being removed even if it is empty, use the following annotation on the node:
```yaml
annotations:
  cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```
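The annotation can also be set directly on a running node with kubectl (the node name is a placeholder):

```
$ kubectl annotate node <node-name> cluster-autoscaler.kubernetes.io/scale-down-disabled=true
```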
Pod priority is also considered by the autoscaler. Pods with a priority below a threshold (default -10) do not trigger a scale up, and they are ignored when the autoscaler considers nodes for removal. That is, a node running only pods below the priority threshold is considered empty.
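A pod can be marked as expendable by assigning it a PriorityClass with a value below the cutoff. A minimal sketch (the class name and value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: expendable
# Any value below the cutoff (default -10) makes pods expendable
value: -100
description: "Pods with this class do not trigger scale up and are ignored during scale down."
```

Pods then reference the class via `priorityClassName: expendable` in their spec.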
The Cluster Autoscaler is highly configurable. The responsiveness can be tuned by changing the parameters new-pod-scale-up-delay and scale-down-unneeded-time. To edit the Cluster Autoscaler deployment do:
```
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
```
and add or modify your arguments under spec.template.spec.containers.command. A list of common arguments is given below, but you can check all the available Cluster Autoscaler arguments in the upstream FAQ.
Common CA arguments:
- scan-interval: How often the cluster is re-evaluated for scale up or down.
- max-graceful-termination-sec: Maximum number of seconds the CA waits for pod termination when trying to scale down a node.
- new-pod-scale-up-delay: Pods newer than this will not be considered for scale up (default 0 seconds).
- scale-down-delay-after-add: How long after a scale up before scale-down evaluation resumes.
- scale-down-unneeded-time: How long a node should be unneeded before it is eligible for scale down.
- scale-down-utilization-threshold: Node utilisation level below which a node can be considered for scale down (default 0.5, i.e. 50%).
- expendable-pods-priority-cutoff: Pods with priority below this cutoff are expendable; they can be killed without any consideration during scale down and do not cause a scale up.
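After editing, the container command in the deployment might look like the following sketch (the binary path and values are illustrative, not defaults):

```yaml
command:
- ./cluster-autoscaler
# Re-evaluate the cluster every 30 seconds
- --scan-interval=30s
# Ignore pods younger than 30 seconds when deciding to scale up
- --new-pod-scale-up-delay=30s
# A node must be unneeded for 5 minutes before it is removed
- --scale-down-unneeded-time=5m
```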