kubernetes, aws, security, iam

Integrating AWS IAM and Kubernetes with kube2iam


Containers deployed on top of Kubernetes sometimes require access to AWS services. You have a few options to configure this. The most common are providing AWS access credentials to a particular pod or extending the existing worker nodes' IAM role with additional access rules; pods in the AWS environment, by default, have the same access rules as the underlying nodes. However, both solutions are poor practice, and there are projects that resolve this issue more elegantly. The two most popular are kube2iam and KIAM. They are pretty similar, but let's focus on kube2iam in this post.

The Problem and a Solution

I usually jump straight to installation and configuration, but first you should understand AWS IAM and the problem it poses in environments like Kubernetes, where containers share the underlying nodes. Here are a few sentences from the official kube2iam readme.

Traditionally in AWS, service level isolation is done using IAM roles. IAM roles are attributed through instance profiles and are accessible by services through the transparent usage by the aws-sdk of the ec2 metadata API. When using the aws-sdk, a call is made to the EC2 metadata API which provides temporary credentials that are then used to make calls to the AWS service.

The problem with this approach is that, because containers share the underlying nodes, you cannot use IAM roles to isolate a particular container's access to a given AWS service.

The solution is to redirect the traffic that is going to the ec2 metadata API for docker containers to a container running on each instance, make a call to the AWS API to retrieve temporary credentials and return these to the caller. Other calls will be proxied to the EC2 metadata API. This container will need to run with host networking enabled so that it can call the EC2 metadata API itself.
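
For reference, the redirect boils down to an iptables DNAT rule along the lines of the one below, taken from the kube2iam readme (docker0 and port 8181 are the defaults there; when iptables support is enabled, as we do below, kube2iam adds the rule for you):

$ sudo iptables \
    --append PREROUTING \
    --protocol tcp \
    --destination 169.254.169.254 \
    --dport 80 \
    --in-interface docker0 \
    --jump DNAT \
    --table nat \
    --to-destination $(curl -s 169.254.169.254/latest/meta-data/local-ipv4):8181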

Installation and Configuration

The tools you need to follow this guide are Helm for installation and the AWS CLI for interacting with AWS. First, gather some info about your cluster so you can configure the kube2iam pods. For EKS-based clusters, use eni+ as the interface name; you can find the interface names for other CNI providers here. Also, to get the Amazon Resource Name (ARN) from the instance profiles, you can use this command:

$ aws iam list-instance-profiles | jq -r '.InstanceProfiles[].Roles[].Arn'

With output like arn:aws:iam::1234567890:role/test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6, the first part is the base role ARN (arn:aws:iam::1234567890:role/) and the second part is the node instance role name (test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6). You will need both below.
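
If you prefer to script it, the ARN splits cleanly with shell parameter expansion (assuming the first instance profile returned belongs to your worker nodes):

$ ROLE_ARN=$(aws iam list-instance-profiles | jq -r '.InstanceProfiles[].Roles[].Arn' | head -n 1)
$ BASE_ROLE_ARN="${ROLE_ARN%/*}/"   # arn:aws:iam::1234567890:role/
$ NODE_ROLE_NAME="${ROLE_ARN##*/}"  # test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6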

Here is the finalized config and installation command:

$ cat > values-kube2iam.yaml <<EOF
extraArgs:
  base-role-arn: arn:aws:iam::1234567890:role/
  default-role: kube2iam-default

host:
  iptables: true
  interface: "eni+"

rbac:
  create: true
EOF

$ helm install --name iam \
    --namespace kube-system \
    -f values-kube2iam.yaml \
    stable/kube2iam
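
Kube2iam runs as a DaemonSet, so right after the install you should see one kube2iam pod per worker node (the exact pod names depend on the Helm release name):

$ kubectl --namespace kube-system get pods | grep kube2iam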

NOTE: The iptables rules prevent containers from having direct access to the EC2 metadata API. Please read this part carefully to understand what is happening in the background.

Kube2iam works by intercepting traffic from the containers to the EC2 metadata API, calling the AWS Security Token Service (STS) to obtain temporary credentials for the role configured on the pod, and returning these temporary credentials to the caller, which then uses them for its AWS API calls.
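
Under the hood, the credential request is roughly equivalent to the following call made with the node's instance profile credentials (the role ARN here is a hypothetical example using the k8s- naming convention introduced below):

$ aws sts assume-role \
    --role-arn arn:aws:iam::1234567890:role/k8s-example \
    --role-session-name kube2iam-example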

For that to work, you have to create a policy that allows the worker node role to call sts:AssumeRole on the pod roles:

$ cat > kube2iam-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sts:AssumeRole"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::1234567890:role/k8s-*"
      ]
    }
  ]
}
EOF

$ aws iam put-role-policy \
    --role-name test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6 \
    --policy-name kube2iam \
    --policy-document file://kube2iam-policy.json

NOTE: When you create the roles that the pods can assume, they need to start with k8s-, and that is why I put a wildcard in the above policy.
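
You can confirm that the inline policy is attached to the node role with:

$ aws iam get-role-policy \
    --role-name test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6 \
    --policy-name kube2iam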

If everything works as expected, a curl from a new pod to the metadata API should return kube2iam:

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
kube2iam

Real World Examples

Let's see how to use kube2iam to give Cert Manager pods access to Route53 to manage records. The DNS cluster issuer needs access to Route53 for DNS record validation.

First, you need to define the trust policy of the role to allow kube2iam (via the worker node IAM Instance Profile Role) to assume the pod role:

$ cat > node-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567890:role/test-worker-nodes-NodeInstanceRole-1W9NK0A56SMQ6"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

$ aws iam create-role \
    --role-name k8s-cert-manager \
    --assume-role-policy-document \
    file://node-trust-policy.json
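
You can double-check the trust policy on the new role with:

$ aws iam get-role \
    --role-name k8s-cert-manager \
    --query 'Role.AssumeRolePolicyDocument'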

Then define a Route53 policy and attach it to the role created above:

$ cat > route53-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ListHostedZonesByName",
      "Resource": "*"
    }
  ]
}
EOF

$ aws iam put-role-policy \
    --role-name k8s-cert-manager \
    --policy-name route53 \
    --policy-document file://route53-policy.json

If you want other services to use kube2iam, reuse the existing node trust policy file to define a new role. For example, if you want to deploy Cluster Autoscaler:

$ aws iam create-role \
    --role-name k8s-cluster-autoscaler \
    --assume-role-policy-document \
    file://node-trust-policy.json

Then define a new policy and attach it to the k8s-cluster-autoscaler role, as sketched below.
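
The exact set of actions depends on the Cluster Autoscaler version, so treat this as a starting point and check the project's documentation, but a minimal policy typically looks something like this:

$ cat > cluster-autoscaler-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
EOF

$ aws iam put-role-policy \
    --role-name k8s-cluster-autoscaler \
    --policy-name cluster-autoscaler \
    --policy-document file://cluster-autoscaler-policy.json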

The last step is to configure the pods to use a particular role by adding the annotation iam.amazonaws.com/role: k8s-cert-manager or iam.amazonaws.com/role: k8s-cluster-autoscaler, as defined in those examples.
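
The annotation goes on the pod itself, not on the owning Deployment, so in a manifest it belongs under the pod template metadata; many Helm charts expose a podAnnotations value for exactly this. A minimal sketch with a hypothetical app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical workload, shown only to illustrate the annotation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        iam.amazonaws.com/role: k8s-cert-manager   # role created earlier in this post
    spec:
      containers:
        - name: app
          image: example/app:latest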

Another useful feature of kube2iam is namespace restrictions; I'm sure you will figure it out after reading this post, but here is a rough sketch.
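
Kube2iam has to be started with the namespace-restrictions argument (with the Helm chart, an extra entry under extraArgs), and each namespace then declares which roles its pods may assume via an annotation. Assuming Cert Manager runs in a cert-manager namespace, that would look roughly like this:

$ kubectl annotate namespace cert-manager \
    'iam.amazonaws.com/allowed-roles=["k8s-cert-manager"]'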

Summary

When I was working with Kubernetes and AWS IAM roles for the first time, I spent more time than planned figuring it out. Maybe it was my lack of AWS IAM knowledge, but I hope this guide will help you get started more easily. I also recommend trying KIAM before deciding which solution works best for you.