This article was originally posted on Faire's technical blog, The Craft.
Major cloud providers like Amazon are betting on custom-built ARM processors. Amazon built the first version of its Graviton processor in 2018. Two years later, it introduced a new version, Graviton2, with significant improvements and up to 40% better price/performance over comparable x86-based instances. Those are big numbers.
You have probably also heard about Apple's M1 ARM-based SoC and how good it is. Soon, the entire Mac lineup will likely be powered by ARM. There is a really interesting post on why the M1 is superior to and different from traditional CPUs, which I highly recommend reading if you want to learn more.
At Faire, we are exploring Amazon's Graviton2-based instances to run Java/Kotlin apps on the Kubernetes platform for performance gains and lower prices. Running Kubernetes on ARM instances and building ARM containers sounds like a significant change. However, it is not that complicated, because Kubernetes and Docker are built with multiple architectures in mind. Let's see how it works.
Kubernetes ARM Nodes on AWS
Our Kubernetes cluster runs on AWS EKS. The setup may differ slightly for other cloud providers or standalone installations, but the overall approach should be similar. It doesn't matter which platform your master nodes run on, as you will not run any of your apps there.
Before you can utilize Amazon ARM instances for Kubernetes worker nodes, there are a few preparation steps. The first is to add another node group of ARM instances. There is nothing special about it, except that you need to choose an ARM-based instance type, for example, M6g (the g stands for Graviton2). It is pretty simple and officially supported in EKS, so please check the docs.
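If you manage node groups with eksctl, it looks something like this (the cluster name, node group name, and sizes below are hypothetical; adjust them to your setup):
$ eksctl create nodegroup \
--cluster my-cluster \
--name arm-nodes \
--node-type m6g.large \
--nodes 2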
In a nutshell, ARM support means using an ARM-based instance type and an ARM OS, with all dependencies like Docker, the kubelet daemon, etc., built for ARM. When adding a new Kubernetes node group, I suggest that you taint and label those nodes to prevent non-compatible containers from being scheduled on them.
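With kubectl, that looks roughly like this (the node name and taint key are hypothetical; eksctl can also apply taints and labels when it creates the node group):
$ kubectl taint nodes ip-10-0-1-23.ec2.internal arch=arm64:NoSchedule
$ kubectl label nodes ip-10-0-1-23.ec2.internal node-type=arm
Kubernetes also labels every node with kubernetes.io/arch out of the box (arm64 on Graviton2 nodes), which is handy for nodeSelector rules, as shown in the deployment example near the end of this post.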
We use the following services on each worker node, and in practice, this means that each one of these containers needs to be compatible with the ARM architecture (a quick compatibility check is shown right after the list):
- AWS node (AWS VPC native CNI plugin)
- kube-proxy
- kube2iam (IAM authentication)
- Datadog agent
- Linkerd proxy
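A quick way to check whether an image is published for arm64 is to inspect its manifest (more on manifests in the next section); for example, with the openjdk image used later in this post:
$ docker manifest inspect openjdk:15 | jq .manifests[].platform.architecture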
Only the kube2iam image wasn't available for the ARM platform at the time of writing this post, and we had to build it ourselves. The process is straightforward, so let's get into it.
Docker Multi-Architecture Images
There is an architecture label on each docker image. You can check the architecture of an image with the docker image inspect command, for example:
$ docker pull openjdk:15
$ docker image inspect openjdk:15 --format='{{.Architecture}}'
amd64
Docker pulled the amd64 image because, in this case, it's running on an amd64 machine. If you ran the same command on an ARM platform, you would get arm64 as the architecture label. So how is this possible when the image has the same tag?
One way to achieve this is by creating a docker manifest list for functionally identical images built for different architectures. First, you create two images with separate tags, e.g., openjdk:15-amd64 and openjdk:15-arm64, and then create a manifest list combining those two images as openjdk:15 and push it to the registry. You will see the whole process in the Java app example below.
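You can see manifest lists in action by pulling a specific platform from the existing openjdk manifest list (note that this overwrites the locally cached openjdk:15 image with the arm64 variant):
$ docker pull --platform linux/arm64 openjdk:15
$ docker image inspect openjdk:15 --format='{{.Architecture}}'
arm64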
Docker Buildx Plugin
Buildx is a docker plugin that extends the docker build command with the full support of BuildKit, the builder library bundled into the docker daemon. One of BuildKit's many interesting features is that it is designed to build multi-platform images without relying on the underlying architecture and operating system.
If you are using Docker Desktop on Mac, you can enable experimental features in Preferences to use buildx. Make sure you are running Docker Desktop version 3.0.0 or later; older versions had experimental features in Edge builds only.
To verify that buildx is available on the system, run:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
default * docker
default default running linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
The output above shows all the platforms the builder supports.
On Linux nodes, the easiest way to build multi-architecture images is by utilizing QEMU, a well-known machine emulator and virtualizer. The setup is pretty straightforward as well, and here is a short version of it.
First, you have to install the latest buildx plugin:
$ wget https://github.com/docker/buildx/releases/download/v0.5.1/buildx-v0.5.1.linux-amd64
$ mkdir -p ~/.docker/cli-plugins
$ mv buildx-v0.5.1.linux-amd64 ~/.docker/cli-plugins/docker-buildx
$ chmod a+x ~/.docker/cli-plugins/docker-buildx
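On docker versions older than 20.10, the buildx plugin was gated behind an experimental CLI flag, so if the plugin is not picked up, you may also need:
$ export DOCKER_CLI_EXPERIMENTAL=enabled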
The docker buildx command should work now. The next step is to install a cross-platform emulator collection, distributed as a docker image:
$ docker run --privileged --rm tonistiigi/binfmt --install all
The above command should install and print out all supported emulators. Now let's start the new multi-platform builder instance:
$ docker buildx create --name multiplatform
$ docker buildx use multiplatform
Running docker buildx inspect --bootstrap should show all supported platforms. For more details on this setup and how it works under the hood, please check the buildx docs.
Once buildx is ready, you can try building an ARM kube2iam image with it. Kube2iam is written in Go, which means the code needs to be cross-compiled for a different platform to produce a valid binary. This is true for many other programming languages as well. Let's try it:
$ git clone https://github.com/jtblin/kube2iam.git
$ cd kube2iam
$ docker buildx build \
--platform linux/arm64 \
--load \
-t fairewholesale/kube2iam:0.10.11-arm64 .
The above buildx build for the arm64 platform automatically produces ARM binaries because the kube2iam Dockerfile contains a build stage, and the docker engine automatically pulls the ARM version of the golang:1.14.0 builder image. Note the --load flag: with a docker-container builder like the one created above, the result stays in the build cache unless you load it into the local docker engine (or push it straight to a registry with --push).
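Thanks to --load, you can confirm the result with the same inspect command from earlier:
$ docker image inspect fairewholesale/kube2iam:0.10.11-arm64 --format='{{.Architecture}}'
arm64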
Java Apps on ARM
The whole point of this blog post was to see how to run Java apps. That turns out to be the easiest part once you figure out how to run ARM nodes and build docker images that can run on them.
You already saw how to build a kube2iam container for ARM, so you might be wondering how Java is different. For Java apps, the only thing you need is a JVM build capable of running on the ARM platform. You don't need to cross-compile the code as in the previous example: you can compile the code on any platform and then copy the build artifact into an ARM-based image. What you get, however, is a docker image capable of running on the ARM platform only.
Let's see this through a simple Java app and docker build process. Get the app and run it locally:
$ git clone https://github.com/Faire/javaosarch
$ cd javaosarch
$ gradle run
> Task :run
OS name : Mac OS X
OS arch : x86_64
BUILD SUCCESSFUL in 1s
2 actionable tasks: 2 executed
Assuming you also got the x86_64 platform above, now build the jar and create a docker image for the arm64 platform:
$ gradle build
BUILD SUCCESSFUL in 943ms
5 actionable tasks: 4 executed, 1 up-to-date
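The repository ships its own Dockerfile, but conceptually it needs little more than a JVM base image and the jar. If you were writing it from scratch, something like this would do (a sketch; the actual file in the repo may differ, and the jar path assumes Gradle's default build/libs output):
$ cat > Dockerfile <<'EOF'
FROM openjdk:15
COPY build/libs/javaosarch-1.0.jar javaosarch-1.0.jar
EOF
With the Dockerfile in place, build the arm64 image: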
$ docker buildx build \
--platform linux/arm64 \
--load \
-t fairewholesale/javaosarch:1.0-arm64 .
$ docker run \
--platform linux/arm64 \
fairewholesale/javaosarch:1.0-arm64 \
java -cp javaosarch-1.0.jar javaosarch.JavaOsArch
OS name : Linux
OS arch : aarch64
From this example, you can see that the platform on which you compile the Java code does not matter. The only thing that matters is that you build the container for ARM.
Now, let's build the amd64 version of the above image and create a manifest to combine both images under the same tag:
$ docker buildx build \
--platform linux/amd64 \
--load \
-t fairewholesale/javaosarch:1.0-amd64 .
$ docker push fairewholesale/javaosarch:1.0-arm64
$ docker push fairewholesale/javaosarch:1.0-amd64
$ docker manifest create \
fairewholesale/javaosarch:1.0 \
--amend fairewholesale/javaosarch:1.0-arm64 \
--amend fairewholesale/javaosarch:1.0-amd64
$ docker manifest push fairewholesale/javaosarch:1.0
You can also inspect the manifest, which shows information about the available image architectures:
$ docker manifest inspect fairewholesale/javaosarch:1.0 | jq .manifests[].platform
{
"architecture": "amd64",
"os": "linux"
}
{
"architecture": "arm64",
"os": "linux"
}
And that should be it: fairewholesale/javaosarch:1.0 will work on both amd64 and arm64 machines.
When setting up a Kubernetes deployment, you don't need to worry about which image a particular node will pull. The docker engine will figure that out and pull the right image based on the platform it is running on.
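You do, however, need to let pods land on the tainted ARM nodes. Here is a minimal deployment sketch, assuming the arch=arm64 taint from earlier and the built-in kubernetes.io/arch node label:
$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javaosarch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javaosarch
  template:
    metadata:
      labels:
        app: javaosarch
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      tolerations:
      - key: arch
        operator: Equal
        value: arm64
        effect: NoSchedule
      containers:
      - name: javaosarch
        image: fairewholesale/javaosarch:1.0
EOF
Drop the nodeSelector and toleration, and the same manifest will schedule on amd64 nodes as well, with each node pulling the matching image from the manifest list.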
Some other important details you need to be aware of:
- If you are using natively-built libraries through JNI, you may need to get those libraries for the ARM platform.
- The official OpenJDK image starting from openjdk:15 is available for the ARM platform. Older versions are not supported.
- The final container image may have other dependencies, small binaries that you want to include, and all of those need to be compiled for ARM. In our case, those were tini and linkerd-await.
Summary
We are still experimenting with ARM builds. The hardest part is changing the CI pipeline, where we build a bunch of other things. Jenkins nodes are one of our most significant computing expenses, and the speed of our builds is critical to us, so using Graviton2 instances could be a big money saver, and maybe a time saver as well. I cannot speak to real-world performance yet, as this is still a work in progress. One thing is certain: Java is ready for production deployments on ARM, and it seems like the right time to explore it further.