kubernetes, jenkins, ci/cd, pipeline

Set Up a Jenkins CI/CD Pipeline with Kubernetes

Continuous integration and delivery, or CI/CD, is the most important part of DevOps, and of cloud native as well; it connects all the bits together. With access to a Kubernetes cluster, deploying a Jenkins server is easy, of course thanks to Helm. But deploying the Jenkins server is the easy part. The hard part is creating a pipeline that builds, deploys, and tests your software. The focus of this post is understanding the Jenkins pipeline and what happens in the background when it runs on Kubernetes.

Deploy Jenkins on Kubernetes

This is the easy part, as I said at the beginning. Let's install Jenkins on the Kubernetes cluster with the official Helm chart:

⚡  helm install --name jenkins \
      --namespace jenkins \
      stable/jenkins

Of course, if you want to change some default values, you can copy this configuration file, adjust it, and pass your custom file to the helm install command to deploy a Jenkins server that fits your needs (there is an example after the plugin list below). For the purpose of this blog post, I deployed Jenkins using the default values. The plugins I used are listed here:

  InstallPlugins:
    - kubernetes:1.2
    - workflow-aggregator:2.5
    - workflow-job:2.17
    - credentials-binding:1.15
    - git:3.7.0
    - ghprb:1.40.0
    - blueocean:1.4.1

The Kubernetes plugin will be installed automatically and is ready to use.
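
If you do override some defaults, pass your custom values file to the same install command. A minimal sketch, assuming the file is called values.yaml:

⚡  helm install --name jenkins \
      --namespace jenkins \
      -f values.yaml \
      stable/jenkins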

Creating and Understanding Pipeline

This is actually why I wrote this blog post. You will find a lot of Jenkinsfile examples out there, but if you copy/paste one and try to change something, you will probably run into issues you don't quite understand. Creating a pipeline is not an easy job, and I learned that the hard way.

Using the Jenkinsfile example below, I will explain each step and give you some links for more information. When the Jenkins master schedules a new build, it creates a Jenkins slave pod. Each step you define in the different stages runs in a container. For example, if you need to build some Java code with Gradle, you need to specify a gradle container to do the job. Here is my Jenkinsfile example:

def label = "worker-${UUID.randomUUID().toString()}"

podTemplate(label: label, containers: [
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
  hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
  node(label) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    def shortGitCommit = "${gitCommit[0..10]}"
    def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
 
    stage('Test') {
      try {
        container('gradle') {
          sh """
            pwd
            echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
            echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
            gradle test
            """
        }
      }
      catch (exc) {
        println "Failed to test - ${currentBuild.fullDisplayName}"
        throw(exc)
      }
    }
    stage('Build') {
      container('gradle') {
        sh "gradle build"
      }
    }
    stage('Create Docker images') {
      container('docker') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
          credentialsId: 'dockerhub',
          usernameVariable: 'DOCKER_HUB_USER',
          passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
          sh """
            docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
            docker build -t namespace/my-image:${gitCommit} .
            docker push namespace/my-image:${gitCommit}
            """
        }
      }
    }
    stage('Run kubectl') {
      container('kubectl') {
        sh "kubectl get pods"
      }
    }
    stage('Run helm') {
      container('helm') {
        sh "helm list"
      }
    }
  }
}

The first thing you will notice is that this is a scripted pipeline. The declarative pipeline is good in most cases, but unfortunately it's not quite ready for Kubernetes yet. Watch this issue if you want to track the development progress. Using a scripted pipeline is not a bad thing, but writing one is more advanced and takes more time.

Let's break this Jenkinsfile down into several pieces. The first part is a workaround for a bug in the Kubernetes plugin:

def label = "worker-${UUID.randomUUID().toString()}"

We define a variable with a random UUID so that the pod label is different on each run. I ran into this bug when I updated the image in the pod template and the change was not reflected in the pod. Kubernetes plugin version 1.2.1 fixes the issue, but I haven't had a chance to test it yet.

The next part is a pod template where you can define your container images and volumes, among other things:

podTemplate(label: label, containers: [
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
  hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
])

It is worth mentioning that command: 'cat', ttyEnabled: true keeps the containers running. By default, Jenkins uses a JNLP slave agent as the executor, and this agent is also part of the pod. Pods in Kubernetes can run many containers: when Jenkins launches the JNLP slave agent, all the other containers defined in the pod template start alongside it, all in the same pod and on the same host. If you inspect the pod while your slave is running, you will see five containers in this case: the JNLP slave agent plus the four containers defined in the pod template:

⚡  kubectl get po jenkins-slave-qvv8b-zg0pg -o jsonpath="{.status.containerStatuses[*].image}"
docker:latest gradle:4.5.1-jdk9 lachlanevenson/k8s-kubectl:v1.8.8 lachlanevenson/k8s-helm:latest jenkins/jnlp-slave:alpine
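
You can get the same list from kubectl describe as well, for example by grepping for the image lines:

⚡  kubectl describe pod jenkins-slave-qvv8b-zg0pg | grep Image: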

The second part of the pod template is the volumes. Volumes are defined per pod and are therefore mounted in every container. The /var/run/docker.sock volume lets the docker container run docker commands against the host daemon, and the /home/gradle/.gradle volume acts as a cache on the underlying host.
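
Besides hostPathVolume, the Kubernetes plugin also provides other volume types, such as persistentVolumeClaim, secretVolume, and emptyDirVolume. A sketch, where the claim and secret names are hypothetical:

volumes: [
  persistentVolumeClaim(claimName: 'gradle-cache', mountPath: '/home/gradle/.gradle'),
  secretVolume(secretName: 'docker-config', mountPath: '/root/.docker'),
  emptyDirVolume(mountPath: '/tmp/build', memory: false)
]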

In the node closure you can check out the code repository and define some variables. Some of them are not even used in the Jenkinsfile above, but they are included here as examples:

  node(label) {
    def myRepo = checkout scm
    def gitCommit = myRepo.GIT_COMMIT
    def gitBranch = myRepo.GIT_BRANCH
    def shortGitCommit = "${gitCommit[0..10]}"
    def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)

This is the full list of available scm variables that you can use:

GIT_BRANCH
GIT_COMMIT
GIT_LOCAL_BRANCH 
GIT_PREVIOUS_COMMIT
GIT_PREVIOUS_SUCCESSFUL_COMMIT
GIT_URL
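
They are all read from the map returned by checkout scm. A quick sketch that just prints them:

def myRepo = checkout scm
echo "Commit ${myRepo.GIT_COMMIT} on ${myRepo.GIT_BRANCH} from ${myRepo.GIT_URL}"
echo "Previous successful commit: ${myRepo.GIT_PREVIOUS_SUCCESSFUL_COMMIT}"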

So, with all these containers, where is my workspace? The workspace is also shared between the containers you defined. The pod describe command gives you the exact location: /home/jenkins, mounted from workspace-volume (rw). You need to check out the code repository because the worker is disposable and doesn't share a workspace with the master. If you run the pwd command in any container, you will get the same workspace directory, /home/jenkins/workspace/<JOB_NAME>_<BRANCH_NAME>-VWH7HI3TT3DZNELHV2FSMYHSLYUK2FXGM432ZR7UPED5ZWXZ6DTA.
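
You can see this for yourself by writing a file in one container and reading it from another, all within the same node block. A small sketch (the stage name and file name are arbitrary):

stage('Shared workspace') {
  container('docker') {
    // the file lands in the shared workspace volume
    sh "echo hello > from-docker.txt"
  }
  container('kubectl') {
    // the same workspace is mounted here, so the file is visible
    sh "cat from-docker.txt"
  }
}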

Important: the Jenkins JNLP slave agent runs as the jenkins user, which has UID/GID 10000, so that user also owns the workspace. If you use the root user in the other containers to do the work, you will not have any problems. But in the example above I used the official gradle container image, which runs as the gradle user with UID/GID 1000. This means gradle commands will probably fail because of permission issues. To fix it, you would need to rebuild the gradle container so the gradle user has UID/GID 10000, or use a custom JNLP slave agent image. You can also define a non-default JNLP image in the pod template:

containerTemplate(name: 'jnlp', image: 'customnamespace/jnlp-slave:latest', args: '${computer.jnlpmac} ${computer.name}')
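
A quick way to spot the mismatch is to compare the UID of the container user with the numeric owner of the workspace directory. Just a diagnostic sketch, assuming plain shell steps still run in that container:

container('gradle') {
  // prints the gradle user's UID/GID and the numeric owner of the workspace
  sh "id && ls -lnd ."
}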

And the last part is running the different stages as you normally would. The only difference is that you need to specify the container in which the stage commands will run:

stage('Run kubectl') {
  container('kubectl') {
    sh "kubectl get pods"
  }
}

Everything within an sh closure runs in the shared workspace, and you specify a different container for each set of commands. Be careful with environment variables: pay attention to single quotes vs. double quotes and how the different variables are accessed in Groovy. For example, when using double quotes:

${var} = Groovy variable, interpolated by Jenkins before the shell runs
\$var  = shell variable, expanded by the shell at runtime
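
A small sketch mixing both in one double-quoted sh step (MY_VAR is just an illustrative shell variable):

container('gradle') {
  sh """
    echo "Groovy interpolates this before the shell runs: ${gitCommit}"
    MY_VAR=hello
    echo "The shell expands this at runtime: \$MY_VAR"
    """
}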

Also, environment variables like GIT_COMMIT or GIT_BRANCH are not automatically available inside the containers, but you can define them like this:

container('gradle') {
  sh """
    echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
    echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
    gradle test
    """
}

In case you need to authenticate Docker to Docker Hub, create a new username/password credential with the ID dockerhub and then use the withCredentials pipeline step to expose the username and password as environment variables:

withCredentials([[$class: 'UsernamePasswordMultiBinding',
  credentialsId: 'dockerhub',
  usernameVariable: 'DOCKER_HUB_USER',
  passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
  sh """
    docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
    docker build -t namespace/my-image:${gitCommit} .
    docker push namespace/my-image:${gitCommit}
    """
}

The only thing left is to add the Jenkinsfile to your code repository and create a new multibranch pipeline job on the Jenkins master.

Summary

This was an introduction to Kubernetes pipelines with Jenkins. You can run the Jenkins server on your Kubernetes cluster and use all the resources of that environment, with disposable executors to build, test, and run your software from the pipeline. Please leave a comment if you have any questions. Stay tuned for the next one.