Running a production Kubernetes cluster is not easy. Unless you use cloud resources smartly, you will also be spending a lot of money, and you only want to pay for the resources you actually need. When you deploy a Kubernetes cluster on AWS, you define a min and max number of instances per autoscaling group. You want to watch the cluster and scale up when resources are insufficient, and scale down when nodes are underutilized. The piece of software that helps you with this is the Cluster Autoscaler, or CA. Let's see how to use and configure CA on AWS.
Enter the Cluster Autoscaler
A Kubernetes cluster is infrastructure as an API, which makes automation tasks easy. There are a lot of plugins available to manage resources on Kubernetes, and it is easy to build your own. Installing a plugin is the same as installing any other piece of software. The only difference is that services like CA need to talk to the Kubernetes API, which means that if you have RBAC enabled (you definitely should), you need to define a cluster role and a service account. My preferred way of installing applications on Kubernetes is with Helm, so the installation is pretty straightforward.
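To make the RBAC part concrete, here is a trimmed sketch of the kind of ServiceAccount, ClusterRole, and binding CA needs. This is only an illustration; the real role needs more rules, and the Helm chart used later creates the complete objects for you when rbac.create=true is set:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
rules:
  # CA watches pods and nodes to decide when to scale up or down
  - apiGroups: [""]
    resources: ["pods", "nodes", "services"]
    verbs: ["get", "list", "watch"]
  # CA evicts pods when it drains an underutilized node
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system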
Before installation, you need to do a few preparation steps. First, edit your instance groups to add extra labels. If your cluster is deployed with kops, you can add labels by editing the instance group:
⚡ kops edit ig nodes
spec:
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/node-template/label: ""
    kubernetes.io/cluster/<CLUSTER_NAME>: owned
CA can auto-discover instance groups based on the cluster name, which I recommend using, especially if you have multiple instance groups. With auto-discovery, you don't need to set the instance min and max size in two places, and you don't need to change the CA config if you add a group later. Then, add additional IAM policy rules for the nodes:
⚡ kops edit cluster
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:DescribeLaunchConfigurations",
            "autoscaling:DescribeTags",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": ["*"]
        }
      ]
NOTE: Instead of adding the above IAM policy to all nodes, a much better idea is to apply it only to the CA pods. Check this post for more info - Integrating AWS IAM and Kubernetes with kube2iam.
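For example, with kube2iam you would create a dedicated IAM role carrying the policy above and point the CA pods at it with a single annotation on the pod template. A rough sketch, assuming kube2iam is already running; the role name here is made up:

spec:
  template:
    metadata:
      annotations:
        # kube2iam intercepts the pod's calls to the EC2 metadata API and
        # assumes this role, so only CA gets the autoscaling permissions
        iam.amazonaws.com/role: cluster-autoscaler-role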
Apply the configuration:
⚡ kops update cluster --yes
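The cloudLabels you added earlier should now be visible as tags on the autoscaling group, which is what CA's auto-discovery matches on. You can double-check with the AWS CLI; kops names the group <IG_NAME>.<CLUSTER_NAME>, so for the nodes instance group:

⚡ aws autoscaling describe-tags \
    --filters Name=auto-scaling-group,Values=nodes.<CLUSTER_NAME>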
Now you can install CA. However, you need to know a few configuration tips first. Use the CA version that matches the Kubernetes version you are running. In my example, I'm running Kubernetes v1.9 and CA v1.1.2.
You should deploy CA on the master nodes, in the kube-system namespace. Also, you need to set the right AWS region.
There are a few ways for CA to decide which instance group to pick for scaling. If you have only one instance group, this is trivial and you don't need to care about it. With expanders, you can select which instance group to scale. Currently, CA has four expanders:
- random - selects an instance group at random
- most-pods - selects the instance group that can schedule the most pods
- least-waste - selects the instance group that wastes the least amount of CPU/memory resources
- price - selects the instance group based on price
The default is the random expander. The price expander would be helpful when you want to run spot instances, but it doesn't work on AWS at the moment. You can track this pull request to get notified once it is ready.
One more exciting feature is balancing similar node groups. This is useful when you have instance groups in multiple AWS availability zones and you want to keep the same number of instances in each group.
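Both of these options are plain command-line flags on the CA binary, which the Helm chart below passes through as extraArgs. For example, to prefer the least wasteful group and keep per-zone groups balanced (values are illustrative):

--expander=least-waste               # one of: random, most-pods, least-waste, price
--balance-similar-node-groups=true   # keep per-AZ instance groups at a similar size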
Also, if you have Prometheus running, CA metrics are exposed under /metrics.
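Once CA is running, a quick way to check them is to port-forward to the CA pod, assuming the default metrics port 8085:

⚡ kubectl port-forward <CA_POD_NAME> 8085 -n kube-system
⚡ curl -s localhost:8085/metrics | grep cluster_autoscaler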
Install CA with Helm and adjust the values for your environment:
⚡ helm install --name autoscaler \
--namespace kube-system \
--set image.tag=v1.1.2 \
--set autoDiscovery.clusterName=k8s.test.akomljen.com \
--set extraArgs.balance-similar-node-groups=false \
--set extraArgs.expander=random \
--set rbac.create=true \
--set rbac.pspEnabled=true \
--set awsRegion=eu-west-1 \
--set nodeSelector."node-role\.kubernetes\.io/master"="" \
--set tolerations[0].effect=NoSchedule \
--set tolerations[0].key=node-role.kubernetes.io/master \
--set cloudProvider=aws \
stable/cluster-autoscaler
⚡ kubectl get pods -l "app=aws-cluster-autoscaler" -n kube-system
NAME READY STATUS RESTARTS AGE
autoscaler-aws-cluster-autoscaler-85f6fc55f7-tmq7w 1/1 Running 0 1h
The CA should be running on one of the master nodes. If something doesn't work, you can troubleshoot it like any other pod in the cluster.
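Checking the logs and the status ConfigMap that CA writes in the kube-system namespace usually tells you what is going on:

⚡ kubectl logs -l "app=aws-cluster-autoscaler" -n kube-system --tail=20
⚡ kubectl describe configmap cluster-autoscaler-status -n kube-system

To see a scale up in action, create a deployment that requests more resources than the current nodes can fit; the image and numbers below are arbitrary:

⚡ kubectl run scale-test --image=nginx --replicas=20 --requests='cpu=500m'

After a minute or two, new nodes should join the cluster. Once you delete the deployment, CA scales the group back down after the scale-down delay, which is 10 minutes by default.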
Summary
Having the CA is the first step in automating a Kubernetes cluster to work efficiently. However, if you have multiple instance groups across availability zones, combined with spot and on-demand instances, CA alone is not enough. You might want to take a look at additional tools that can manage your cluster more efficiently. Check my post about Kubernetes add-ons for more efficient computing to learn more. Stay tuned for the next one.