
Kubernetes Cluster Autoscaling on AWS

Make no mistake, running a production Kubernetes cluster is not easy. And unless you use cloud resources smartly, you will spend a lot of money. You want to use only the resources that are actually needed. When you deploy a Kubernetes cluster on AWS, you define the min and max number of instances per autoscaling group. You want something to watch the cluster, scale up when resources are insufficient, and scale down when nodes are underutilized. The piece of software that does this is the Cluster Autoscaler. In this post, I will show you how to install and configure the Cluster Autoscaler on AWS.

Enter the Cluster Autoscaler

A Kubernetes cluster is infrastructure as an API, which makes automation tasks easy. There are a lot of plugins available for managing resources on Kubernetes, and it is easy to build your own. Installing a plugin is the same as installing any other piece of software. The only difference is that services like the Cluster Autoscaler need to talk to the Kubernetes API. This means that if you have RBAC enabled (you definitely should), you need to define a cluster role and a service account. My preferred way of installing applications on Kubernetes is with Helm, so the installation will be pretty straightforward.
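If you are curious what the Helm chart creates for you when RBAC is enabled, the objects look roughly like the sketch below. Treat it as an illustration only: the names are my own and the rule set is abbreviated, so use the manifests that ship with the Cluster Autoscaler project in practice:

    # Sketch only - names are illustrative and the rules are abbreviated.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cluster-autoscaler
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: cluster-autoscaler
    rules:
      - apiGroups: [""]
        resources: ["nodes", "pods", "services"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["create", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: cluster-autoscaler
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-autoscaler
    subjects:
      - kind: ServiceAccount
        name: cluster-autoscaler
        namespace: kube-system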

Before installation, you need to do a few preparation steps. First, edit your instance groups to add extra labels. If your cluster is deployed with kops, you can add the labels by editing the instance group:

⚡ kops edit ig nodes

    spec:
      cloudLabels:
        k8s.io/cluster-autoscaler/enabled: ""
        k8s.io/cluster-autoscaler/node-template/label: ""
        kubernetes.io/cluster/<CLUSTER_NAME>: owned

The Cluster Autoscaler can auto-discover instance groups based on the cluster name, which I recommend using, especially if you have multiple instance groups. With auto-discovery, you don't need to set the instance min and max size in two places, and you don't need to change the autoscaler config if you add another instance group later. You also need to add extra IAM policy rules for the nodes:

⚡ kops edit cluster

    spec:
      additionalPolicies:
        node: |
          [
            {
              "Effect": "Allow",
              "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
              ],
              "Resource": ["*"]
            }
          ]

And apply configuration:

⚡ kops update cluster --yes
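A note on the auto-discovery mentioned above: under the hood, it is a single flag on the autoscaler binary that matches autoscaling groups by tag, which the Helm chart builds for you from the cluster name. The exact tag set depends on the chart and autoscaler version, so treat this as a sketch:

    --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/<CLUSTER_NAME>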

Now you can install the Cluster Autoscaler. But first, a few configuration tips. Use the right version of the Cluster Autoscaler for the Kubernetes version you are running. In my example, I'm running Kubernetes v1.9, so I will use Cluster Autoscaler v1.1.2.

You will deploy the Cluster Autoscaler on the master nodes, in the kube-system namespace. You will also need to set the right AWS region.

There are a few strategies the Cluster Autoscaler can use to decide which instance group to scale up. If you have only one instance group, this is trivial and you don't need to care about it. With expanders, you can control how the instance group is chosen. Currently, the Cluster Autoscaler has 4 expanders:

  • random - selects an instance group at random
  • most-pods - selects the instance group that would schedule the most pods
  • least-waste - selects the instance group that would waste the least CPU/memory
  • price - selects the instance group based on price

The default is the random expander. The price expander would be really nice when you want to run spot instances, but it doesn't work on AWS at the moment. You can track this pull request to get notified once it is ready.
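For reference, the expander is just a command-line flag on the autoscaler binary, which the Helm chart's extraArgs.expander value maps onto, e.g.:

    --expander=least-waste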

One more interesting feature is balancing similar node groups. This is useful when you have instance groups in multiple AWS availability zones and you want to keep the same number of instances in each of them.
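This behavior is controlled by another autoscaler flag, exposed as extraArgs.balance-similar-node-groups in the Helm chart and off by default:

    --balance-similar-node-groups=true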

Also, if you have Prometheus running, you can monitor the Cluster Autoscaler. Prometheus metrics are exposed under /metrics by default.
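A quick way to eyeball the metrics is to port-forward to the autoscaler pod and scrape the endpoint manually. The pod name is a placeholder here, and 8085 is the autoscaler's default listen port, which your setup may override:

    ⚡ kubectl -n kube-system port-forward <autoscaler-pod-name> 8085
    ⚡ curl -s http://localhost:8085/metrics | grep cluster_autoscaler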

Install the Cluster Autoscaler with Helm, adjusting the values for your environment:

⚡ helm install --name autoscaler \
    --namespace kube-system \
    --set image.tag=v1.1.2 \
    --set autoDiscovery.clusterName=k8s.test.akomljen.com \
    --set extraArgs.balance-similar-node-groups=false \
    --set extraArgs.expander=random \
    --set rbac.create=true \
    --set rbac.pspEnabled=true \
    --set awsRegion=eu-west-1 \
    --set nodeSelector."node-role\.kubernetes\.io/master"="" \
    --set tolerations[0].effect=NoSchedule \
    --set tolerations[0].key=node-role.kubernetes.io/master \
    --set cloudProvider=aws \
    stable/cluster-autoscaler

⚡ kubectl get pods -l "app=aws-cluster-autoscaler" -n kube-system
NAME                                                 READY     STATUS    RESTARTS   AGE
autoscaler-aws-cluster-autoscaler-85f6fc55f7-tmq7w   1/1       Running   0          1h

The Cluster Autoscaler should be running on the master node. You can troubleshoot it like any other pod in the cluster if something doesn't work.
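For example, check the autoscaler logs for its scaling decisions, or inspect the status ConfigMap it keeps up to date in the kube-system namespace:

    ⚡ kubectl -n kube-system logs -l "app=aws-cluster-autoscaler" --tail=50
    ⚡ kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml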


Having the Cluster Autoscaler running is the first step toward an efficiently automated Kubernetes cluster. But if you have multiple instance groups across availability zones, combined with spot and on-demand instances, the Cluster Autoscaler alone is not enough. You might take a look at additional tools that manage your cluster efficiently, like k8s-spot-rescheduler, descheduler, etc. Stay tuned for the next one.