Continuous integration and delivery, or CI/CD, is a crucial part of DevOps and of the cloud-native world as well. CI/CD connects all the bits. With a Kubernetes cluster, deploying a Jenkins server is easy, thanks of course to Helm. The hard part is creating a pipeline that builds, deploys, and tests your software. The focus of this post is understanding the Jenkins pipeline and what happens in the background when it runs on Kubernetes.
Deploy Jenkins on Kubernetes
Deploying Jenkins is the easy part, as I said at the beginning. Let's install Jenkins with the official Helm chart on a Kubernetes cluster:
⚡ helm install --name jenkins \
--namespace jenkins \
stable/jenkins
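Once the release is deployed, you can check the pods and grab the auto-generated admin password. A minimal sketch, assuming the chart defaults where the release is named jenkins and the password lives under the jenkins-admin-password key of the release secret:
⚡ kubectl get pods --namespace jenkins
⚡ kubectl get secret --namespace jenkins jenkins \
-o jsonpath="{.data.jenkins-admin-password}" | base64 --decode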
Of course, if you want to change some default values, you can copy this configuration file and make your changes there. Then use your custom file with the helm install command to deploy Jenkins to fit your needs; an example follows below. For this blog post, I deployed Jenkins using the default values with these plugins:
InstallPlugins:
- kubernetes:1.20.1
- workflow-job:2.35
- workflow-aggregator:2.6
- credentials-binding:1.20
- git:3.12.1
- blueocean:1.19.0
The Kubernetes plugin will be installed automatically and is ready to use.
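If you do go with a custom values file as mentioned above, the install is the same command with your file passed in. A small sketch, assuming your overrides live in a file called jenkins-values.yaml:
⚡ helm install --name jenkins \
--namespace jenkins \
-f jenkins-values.yaml \
stable/jenkins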
Creating and Understanding Pipeline
Creating a pipeline is why I wrote this blog post. You can find a lot of Jenkinsfile examples out there. However, if you copy/paste one and try to change something, you can run into issues that you don't quite understand. Creating a pipeline is not an easy job, and I learned that the hard way.
#Jenkins pipeline expert could be a job description. This stuff is complex when you add #docker, #k8s, helm and other tools in the mix! #cicd
— Alen Komljen (@alenkomljen) February 13, 2018
When the Jenkins master schedules a new build, it creates a Jenkins slave pod. Each step you define in the different stages runs in a container. For example, if you need to build some Java code and you want to use Gradle, you need to specify a gradle container to do the job. Here is the Jenkinsfile example:
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
  hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
  node(label) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    def shortGitCommit = "${gitCommit[0..10]}"
    def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
    stage('Test') {
      try {
        container('gradle') {
          sh """
            pwd
            echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
            echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
            gradle test
            """
        }
      }
      catch (exc) {
        println "Failed to test - ${currentBuild.fullDisplayName}"
        throw(exc)
      }
    }
    stage('Build') {
      container('gradle') {
        sh "gradle build"
      }
    }
    stage('Create Docker images') {
      container('docker') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
          credentialsId: 'dockerhub',
          usernameVariable: 'DOCKER_HUB_USER',
          passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
          sh """
            docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
            docker build -t namespace/my-image:${gitCommit} .
            docker push namespace/my-image:${gitCommit}
            """
        }
      }
    }
    stage('Run kubectl') {
      container('kubectl') {
        sh "kubectl get pods"
      }
    }
    stage('Run helm') {
      container('helm') {
        sh "helm list"
      }
    }
  }
}
The first thing you'll notice is that this is a scripted pipeline. The declarative pipeline is good for most cases, but unfortunately it's not quite ready for Kubernetes yet. Watch this issue if you want to track the development progress. Using a scripted pipeline is not a bad thing, but writing one is more advanced and takes more time.
Let's break this Jenkinsfile down into several pieces. The first part is a workaround for a bug in the Kubernetes plugin:
def label = "worker-${UUID.randomUUID().toString()}"
I defined a variable with a random UUID so that the pod label is different on each run. I encountered an issue where I updated the image in the pod template, but the change was not reflected in the pod. The new Kubernetes plugin version 1.2.1 fixes this issue, but I haven't had a chance to test it yet.
The next part is a pod template where you can define your container images and volumes, among other things:
podTemplate(label: label, containers: [
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
  hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
])
It is worth mentioning that command: 'cat', ttyEnabled: true keeps the container running. By default, Jenkins uses a JNLP slave agent as the executor. This agent is also part of the pod. Pods in Kubernetes can run many containers. When Jenkins launches the JNLP slave agent, all the other containers defined in the pod template start with it. They all run in the same pod and on the same host. If you inspect the slave pod while it is running, you will see all the containers in it: the JNLP slave agent plus any extra containers you defined in the pod template:
⚡ kubectl get po jenkins-slave-qvv8b-zg0pg -o jsonpath="{.status.containerStatuses[*].image}"
docker:latest gradle:4.5.1-jdk9 lachlanevenson/k8s-kubectl:v1.8.8 lachlanevenson/k8s-helm:latest jenkins/jnlp-slave:alpine
The second part of the pod template is the volumes. Volumes are defined per pod and thus mounted in every container. The /var/run/docker.sock volume is there so the Docker container can run docker commands, and the /home/gradle/.gradle volume acts as a Gradle cache on the underlying host.
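If you would rather not depend on the host filesystem for the Gradle cache, the Kubernetes plugin supports other volume types as well. A hedged sketch using a persistent volume claim instead of the host path; the gradle-cache claim is a placeholder you would need to create yourself:
volumes: [
  // Hypothetical PVC named gradle-cache used as a shared Gradle cache
  persistentVolumeClaim(mountPath: '/home/gradle/.gradle', claimName: 'gradle-cache', readOnly: false),
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]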
In the node closure, you can check out the code repository and define some variables. Some of them are not even used in the above Jenkinsfile, but they are here as examples:
node(label) {
  def myRepo = checkout scm
  def gitCommit = myRepo.GIT_COMMIT
  def gitBranch = myRepo.GIT_BRANCH
  def shortGitCommit = "${gitCommit[0..10]}"
  def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
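One small detail worth knowing: sh with returnStdout: true returns the command output including a trailing newline, so a common refinement is to trim it before using the value in tags or names:
def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true).trim()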
A full list of available SCM variables that you can use:
GIT_BRANCH
GIT_COMMIT
GIT_LOCAL_BRANCH
GIT_PREVIOUS_COMMIT
GIT_PREVIOUS_SUCCESSFUL_COMMIT
GIT_URL
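For example, you could log what is being built or use these values for tagging. A small sketch reusing the myRepo object returned by checkout scm above (note that GIT_PREVIOUS_SUCCESSFUL_COMMIT can be null on the very first build):
def gitUrl = myRepo.GIT_URL
def previousSuccessfulCommit = myRepo.GIT_PREVIOUS_SUCCESSFUL_COMMIT
echo "Building ${gitBranch} (${shortGitCommit}) from ${gitUrl}"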
So, with all these containers, where is your workspace? The workspace is also shared between the containers you defined. The kubectl describe command will give you the exact location: /home/jenkins from workspace-volume (rw). You need to check out the code repository because the worker is disposable and doesn't share a workspace with the master. If you run the pwd command in any container, you will see the same workspace directory: /home/jenkins/workspace/<JOB_NAME>_<BRANCH_NAME>-VWH7HI3TT3DZNELHV2FSMYHSLYUK2FXGM432ZR7UPED5ZWXZ6DTA.
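For example, a quick way to confirm the workspace mount on a running slave pod (the pod name below is from my run and will differ on yours):
⚡ kubectl describe pod jenkins-slave-qvv8b-zg0pg | grep -A 1 workspace-volume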
Important: The Jenkins JNLP slave agent runs as the Jenkins user. The Jenkins user's UID/GID is 10000, which means the workspace owner is that same user. If you use the root user in the other containers to do the work, you will not have any problems. However, in the above example I used the official gradle container image, which runs as the gradle user. The issue is that this user has a UID/GID of 1000, which means that gradle commands will probably fail because of permission issues. To fix it, you would need to update the gradle container to use 10000 as the UID/GID for the gradle user, or use a custom JNLP slave agent image. You can also define a non-default JNLP image in the pod template:
containerTemplate(name: 'jnlp', image: 'customnamespace/jnlp-slave:latest', args: '${computer.jnlpmac} ${computer.name}')
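To illustrate where that line goes, here is a minimal sketch of a pod template with the custom JNLP container added; the customnamespace/jnlp-slave:latest image is just a placeholder for your own build:
podTemplate(label: label, containers: [
  // A container named 'jnlp' overrides the default jenkins/jnlp-slave agent image
  containerTemplate(name: 'jnlp', image: 'customnamespace/jnlp-slave:latest', args: '${computer.jnlpmac} ${computer.name}'),
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true)
]) {
  node(label) {
    // stages as before
  }
}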
The last part is to run the different stages as you normally would. The only difference is that you also need to specify the container name:
stage('Run kubectl') {
  container('kubectl') {
    sh "kubectl get pods"
  }
}
Everything within an sh step runs in the shared workspace; you are only specifying which container should run the particular commands. Be careful with environment variables: single quotes vs. double quotes determine how variables are resolved in Groovy. For example, when using double quotes:
${var} = Groovy variable
\$var = shell (Bash) variable
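Here is a small sketch of the difference in practice: gitBranch is a Groovy variable defined earlier in the pipeline, while HOME is only known to the shell inside the container:
container('gradle') {
  // ${gitBranch} is interpolated by Groovy before the script reaches the shell;
  // \$HOME is escaped, so the shell resolves it at runtime inside the container
  sh "echo Building ${gitBranch} in \$HOME"
}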
Also, environment variables like GIT_COMMIT or GIT_BRANCH are not available inside the containers, but you can define them like this:
container('gradle') {
  sh """
    echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
    echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
    gradle test
    """
}
In case you need to authenticate Docker to Docker Hub, create a new username and password credential with the ID dockerhub, and then use the withCredentials pipeline step to expose the username and password as environment variables:
withCredentials([[$class: 'UsernamePasswordMultiBinding',
  credentialsId: 'dockerhub',
  usernameVariable: 'DOCKER_HUB_USER',
  passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
  sh """
    docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
    docker build -t namespace/my-image:${gitCommit} .
    docker push namespace/my-image:${gitCommit}
    """
}
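If your Docker CLI is new enough to support --password-stdin, a hedged variation that keeps the password off the command line is to let the shell resolve the credential variables itself; note the single quotes, so Groovy does not interpolate anything:
sh '''
  echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USER" --password-stdin
'''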
The only thing left is to add the Jenkinsfile to your code repository and create a new multibranch pipeline job on the Jenkins master.
Summary
This article was an introduction to Kubernetes pipelines with Jenkins. You can have the Jenkins server running in your Kubernetes cluster and use all the resources of that environment, and you get disposable executors to build, test, and run your software from the pipeline. Please leave a comment if you have any questions. Stay tuned for the next one.