Dynamic parallel stages in Jenkins

Jenkins Declarative Pipeline v1.2 added support for Parallel Stages, which is a great and easy way to, well, run multiple aspects of a job – in parallel.

In most cases this would suffice and you could author a simple parallel block as described in the Jenkins documentation (see the Jenkins blog post).

But what if you need to generate multiple parallel runs in a job – dynamically?
One way would be as follows.

Let's assume we have a package.json that defines different script executions, such as:

"scripts": {
"test1": "jenkins-mocha --recursive --reporter mocha-multi-reporters --reporter-options configFile=config/mocha-config.json test/test1.js",
"test2": "jenkins-mocha --recursive --reporter mocha-multi-reporters --reporter-options configFile=config/mocha-config.json test/test2.js"
},

In order to run these in parallel you could do this in a stage:

def packageJson = readJSON file: 'package.json'
def tests = packageJson.scripts
def listOfTests = tests.keySet() as List
def parallelTests = [:]

for (int i = 0; i < listOfTests.size(); i++) {
    // Use a local variable so each closure captures its own test name
    def test = listOfTests[i]
    parallelTests[test] = {
        sh "npm run ${test}"
    }
}

parallel parallelTests

This would then generate a flow similar to the parallel stages view shown in the Jenkins documentation, with "test1" and "test2" in place of "Test on Linux/Windows".

If one of the parallel executions fails, the job will be marked as failed once all parallel runs have finished.
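
As a side note: if you'd rather abort the remaining branches as soon as one of them fails, the scripted parallel step also accepts a failFast flag, along these lines:

// Abort the remaining parallel branches as soon as one fails (optional)
parallelTests.failFast = true
parallel parallelTests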

Using Helm helper files to manage shared properties across multiple micro-services

In Helm we use Charts. In cloud services, a service is typically separated into multiple micro-services.
In Helm lingo, then, a micro-service can be considered a Chart.

Despite being separated into multiple micro-services, it is likely that some properties will be the same across several of them. You would typically resolve this by either duplicating the properties in each micro-service’s ConfigMap (not recommended), or by using a “common ConfigMap” (works well).

But what do you do if you want to switch over to Helm? In Helm you can’t use a common ConfigMap across Charts, since a Chart can only manage its own “resources”: Helm will complain that the resource already exists, and creating another Chart just for the common ConfigMap seems a bit much (but doable) IMO.

To the rescue come Helm helper (.tpl) files, also called “partials”. These files are basically “includes” that can be embedded into existing files while a Chart is being installed. Let’s see an example.

Let’s assume you want to add some common properties to your ConfigMap. To do so, create a .tpl file and make sure the filename starts with an underscore.

_my-common-configmap.tpl:

{{- define "my-common-configuration" -}}
# Evironment
clusterEnv: {{ index .Values.global.clusterEnv .Values.global.env | quote }}
namespace: {{ index .Values.global.namespace .Values.global.env | quote }}

Also create a YAML file to store the values (assuming you need different common properties for different environments…).

my-values.yaml:

---
global:
  # env refers to the environment the properties belong to at build time. Default is "development".
  env: development

  clusterEnv:
    development: staging
    pre_production: preprod

  registry:
    development: dev-images
    pre_production: preprod-images

Next, in the micro-service’s (Chart) ConfigMap, include the helper file.
Note the use of include to load the configuration data.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap-name
  namespace: {{ index .Values.global.namespace .Values.global.env }}
data:
  log_level: {{ index .Values.global.log_level .Values.global.env | quote }}
  ...
  ...
  ...
{{ include "my-common-configuration" . | indent 2 }}

The generated result would look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap-name
  namespace: dev
data:
  log_level: "debug"
  ...
  ...
  ...
  clusterEnv: "staging"
  registry: "dev-images"

You can repeat this in each micro-service that needs the common properties.
And of course, remember to specify the properties in your Deployment…

- name: clusterEnv
  valueFrom:
    configMapKeyRef:
       name: my-configmap-name
       key: clusterEnv

But wait, what do I actually do with these files?
The .tpl file should be placed in your Chart’s templates folder and the values YAML file at the root of the Chart directory.

Since the files are shared across multiple micro-services, and each micro-service likely has its own repository, store them in some central Helm or DevOps repository. Configure your pipeline so that these files get picked up and placed in the right location during job execution, as in the sketch below.
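
For example, a pipeline step could copy the shared files into the Chart before any Helm command runs. A minimal sketch, assuming the shared files live under helm/ in a "devops" repository and the Chart sits under config/helm/my-micro-service (both hypothetical):

sh '''
   # Fetch the shared Helm files from the central repository
   git clone https://github.example.com/my-org/devops.git

   # Place the helper and the shared values file where the Chart expects them
   cp devops/helm/_my-common-configmap.tpl config/helm/my-micro-service/templates/
   cp devops/helm/my-values.yaml config/helm/my-micro-service/
'''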

If you’re wondering how to determine which value gets picked from the values YAML file, see my previous blog post, Upgrading to Helm.

Additional reading material

Upgrading to Helm

Helm, the package manager for Kubernetes, is a wonderful addition for anyone doing DevOps work with Kubernetes.

With Helm you can:

  • Manage development and production properties
  • Validate and test configuration resources prior to deployment
  • Group deployment resources into Releases
  • Rollback failed deployments

Learn about the Helm basics in the Helm documentation.

Manage development and production properties

You can structure the values.yaml file in a way that allows simpler access to per-environment properties. For example:

---
global:
  env: development

name: myMicroService
tag:
  development: "0.0.1"
  production: "0.0.2"
log_level:
  development: debug
  production: info

How do you choose which environment to use? When running commands such as helm lint, helm template, helm install or helm upgrade, you can pass the env value; the template engine then knows which property to apply:

sh """
    helm template . --set "global.env=$ENV"
"""
  • $ENV refers to an environment variable set to either “development” or “production” during the lifespan of a pipeline execution.

Note that this also requires modifying the Kubernetes resource template files. For example, in the ConfigMap.template.yaml file:

log_level: {{ index .Values.log_level .Values.global.env | quote }}

Learn more about Go templating commands in the Sprig library.

Validate and test configuration resources prior to deployment

helm template

To generate the actual Kubernetes resources from your resource template files before deploying, use helm template.
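
For example, you could render the Chart locally (or in a pipeline step) and review the output, optionally letting the Kubernetes API validate it without creating anything. The chart path and environment value below are illustrative, and the dry-run flag syntax may differ between kubectl versions:

# Render the templates with the development values and keep the result for inspection
helm template . --set "global.env=development" > rendered.yaml

# Optional: have the Kubernetes API validate the rendered resources without applying them
kubectl apply --dry-run -f rendered.yaml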

helm lint

To test the syntax of your configuration resources in your pipeline, use helm lint:

lintResults = sh(script: "cd config/helm/${microServiceName};helm lint -f common-values.yaml || true", returnStdout:true)
if ((lintResults.contains("[ERROR]")) || (lintResults.contains("[WARNING]"))) {
   currentBuild.result = "FAILURE"
   env.shouldBuild = "false"

   slackSend (
       color: '#F01717',
       message: "*$JOB_NAME*: <$BUILD_URL|Build #$BUILD_NUMBER>, validation stage failed.\n\n${lintResults}"
   )
}

Note: the file common-values.yaml refers to an external values.yaml-like file containing properties that are used by multiple Charts.

Rollback failed deployments

If helm install or helm upgrade fails, you can revert to the previous successful release, e.g.:

helm rollback <release name> <revision number> --force --wait --timeout 180

To get the revision number, run

helm history <release name>
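
Put together, a rollback step could look roughly like the following sketch. The release name is hypothetical, and the grep/awk assumes Helm 2's history output, where the last DEPLOYED row is the most recent successful revision:

# Find the most recent successfully deployed revision and roll back to it
revision=$(helm history my-release | grep DEPLOYED | tail -1 | awk '{print $1}')
helm rollback my-release $revision --force --wait --timeout 180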

Re-using a declarative pipeline in multiple repositories via a Shared Library

Do you have an identical declarative pipeline used in multiple repositories and you find yourself updating these multiple Jenkinsfile copies every time you make a change?

As of September 2017, Shared Libraries in Jenkins support declarative pipelines. This means you can load the same single-source pipeline definition from the Jenkinsfiles of different repositories. Here’s how to accomplish it.

Decide where you want to store the shared library, for example in a repository called “devops”.
Create a vars folder in this repository and in this folder create a .groovy file that will contain your declarative pipeline.

myPipeline.groovy

def call(body) {
    pipeline {
        // your pipeline code as-is
    }
}
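
For reference, here's a minimal, hypothetical example of what myPipeline.groovy could contain. The optional body closure lets each repository pass its own settings in (the serviceName setting below is made up for illustration):

def call(body) {
    // Collect any per-repository configuration passed in from the Jenkinsfile (optional)
    def config = [:]
    if (body) {
        body.resolveStrategy = Closure.DELEGATE_FIRST
        body.delegate = config
        body()
    }

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    echo "Building ${config.serviceName ?: 'my-service'}"
                    // ...the rest of your original pipeline stages, unchanged
                }
            }
        }
    }
}

A Jenkinsfile can then call it either as myPipeline() or as myPipeline { serviceName = 'billing-service' }.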

Setup Jenkins to reference the shared library.

  • Navigate to Manage Jenkins > Configure System > Global Pipeline Libraries
  • Enter the name of the shared library, e.g. “myCustomSharedLibrary”
  • Retrieval method: Modern SCM
  • Source Code Management: GitHub
  • Select the repository where the shared library is at

Save your modifications.

Next, in each repository where you want to use this pipeline, edit the Jenkinsfile to use the shared library:

Jenkinsfile

@Library("myCustomSharedLibrary") _
    myPipeline()

Be sure not to remove the _ character, it’s required!

Jenkins now knows to look in the vars folder of the repository you’ve defined and to load the Groovy file, which in turn runs the pipeline in any job whose Jenkinsfile references it. Simple!


Testing for node modules vulnerabilities

One way to make sure vulnerabilities don’t creep into the codebase through its node modules is to create a Jenkins job that checks daily against a continuously updated database of known vulnerabilities.

One such database is provided by the Node Security community and can be easily integrated using the Node Security Platform (nsp) node package.

Create a new Pipeline-type job with the following implementation (just the required parts, you may need to add some more pieces to fit it in):

  • root-of-project-must-have-node-modules-folder – a pre-existing job folder containing a cloned repository with its node modules folder
  • Packages Vulnerability Checker – the name of this job (will be created once the job is run)
#!groovy

pipeline {
   agent any

   stages {
      stage ("Check for vulnerability") {
         steps {
            script {
               def vulStatus

               // Check dashboard node modules
               sh '''
                  cd ../root-of-project-must-have-node-modules-folder
                  nsp check > "../Packages Vulnerability Checker/test-results.txt"
                  nsp check
               '''

               vulStatus = readFile('test-results.txt').trim()
               if (vulStatus != "(+) No known vulnerabilities found") {
                  slackSend (
                     color: '#F01717',
                     message: "@channel $JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER> vulnerabilities found in Dashboard node modules. Check build logs."
                  )
               } 

               // additional folders to check...    
            }
         }
      }
   }
}

Creating a changelog in a pipeline job

In my pipeline, for the master branch, I’ve decided that I’d like to keep track of what each new build will contain – for debug, traceability, auditing purposes and what not…

After the whole flow of tests, Docker build & push, and Kubernetes deployment & verification has passed, it’s time to generate the changelog, as the last task in the pipeline.

Here’s how it’s done (partial snippet):

sh '''
   changelog=$(git log `git describe --tags --abbrev=0 HEAD^`..HEAD --oneline --no-merges)
   jq -n --arg tagname "v0.0.$BUILD_NUMBER"   \
      --arg name "Release v0.0.$BUILD_NUMBER" \
      --arg body "$changelog"                 \
      '{"tag_name": $tagname, "target_commitish": "master", "name": $name, "body": $body, "draft": false, "prerelease": false}'  |
   curl -d@- https://github.ibm.com/api/v3/repos/my-org-name/my-repo-name/releases?access_token=$JENKINSBOT_GHE_ACCESS_TOKEN_PSW
'''

As you can see, you will need jq installed for this.

The end result is quite nice: a GitHub release is created for every build, with the commit log since the previous tag as its body.

Checking for vulnerabilities in Docker container images

It’s very likely that you are using a Docker container image for your cloud-native application, in which case you’re probably also worried about possible security vulnerabilities in your base image.

As part of my daily work (I work for IBM), I use the IBM Bluemix Container Registry service. This service offers a Vulnerability Advisor which can let you know if there are any identified vulnerabilities in the image and also offer a detailed report. Nice.

I decided to create a Jenkins job that will check daily for such issues.
Here it is.

Note: since this is part of an IBM Bluemix service, use of the Bluemix CLI and Container Registry plug-in is required.

#!groovy

pipeline {
    agent any

    stages {
        stage ("Check for vulnerability") {
            environment {
                JENKINSBOT = credentials('${JENKINSBOT_USERNAME_PASSWORD}')
            }
            steps {
                script {
                    // Login to Bluemix and the Bluemix Container Registry      
                    sh '''      
                        bx login -a ... -c ... -u $JENKINSBOT_USR -p $JENKINSBOT_PSW        
                        bx target -r ...    
                        bx cr login
                    '''

                    // Check for image vulnerability
                    isVulnerable = sh(script: "bx cr images --format '{{if and (eq .Repository \"registry.ng.bluemix.net/certmgmt_dev/node\") (eq .Tag \"6-alpine\")}}{{eq .Vulnerable \"Vulnerable\"}}{{end}}'", returnStatus: true)

                    if (isVulnerable == 1) {
                        slackSend (
                            channel: "...",
                            color: "#F01717",
                            message: "@iadar *Vulnberability Checker*: base image vulnerability detected! Run the following for a detailed report: ```bx cr va registry-name/my-namespace/node:6-alpine```"
                        )
                    }
                }
            }
        }
   }
}

Verifying a Kubernetes Deployment in a Jenkins pipeline

One of the tests I’d been missing in my pipeline was a crucial one: verifying that the deployment finished successfully, i.e. that the pods were updated with the new Docker container image and also restarted successfully(!).

It’s pretty simple I suppose, but here is my interpretation.
The following should sit inside a step:

// Verify deployment
sh '''
   export KUBECONFIG=/.../kube-config-dal12-mycluster.yml
   kubectl rollout status deployment/serviceapi --namespace=\"my-namespace\" | tail -1 > deploymentStatus.txt
'''

def deploymentStatus = readFile('deploymentStatus.txt').trim()
echo "Deployment status is: ${deploymentStatus}"
if (deploymentStatus == "deployment \"serviceapi\" successfully rolled out") {
   sh "rm -f deploymentStatus.txt"

   // Additional logic
} else {
   error "Pipeline aborted due to deployment verification failure. Check the Kubernetes dashboard for more information."
}

A cleaning strategy for a Docker registry

With the DevOps pipeline maturing and deployment of multiple containers for multiple micro-services taking place, it became evident quite quickly that space is running out and a cleaning strategy is needed.

One way to do this is to clean the repository from images that are older than a set number of days, say, 5 days.

I am using the Bluemix Container Registry; if you are not, bx cr can simply be replaced with docker (or your registry’s CLI).

In the pipeline, use:

sh '''
    timestamp=$(date +%s -d "5 day ago")
    bx cr images --format "{{if ge $timestamp .Created}}{{.Repository}}:{{.Tag}}{{end}}" | xargs -r bx cr image-rm
'''

I use the above snippet after I have successfully built the Docker container image > pushed it to the registry and updated the image (in my case, in the Kubernetes cluster).

So, I first save the date value of 5 days ago in a shell variable. Then, using Go template formatting (Docker and bx cr use Go templates), I iterate through the images and compare each image’s creation date with the value in $timestamp. If it is 5 days old or more, I delete it.

The enclosed {{.Repository}}:{{.Tag}} is important. It makes the image name and tag values available for the piped command that follows it.

xargs -r ensures the piped command will not execute if no result is passed to it (e.g., no images are >= 5 days old).
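
If you are cleaning up a plain local Docker daemon rather than a hosted registry, a roughly equivalent approach (a sketch, not part of my own setup) is Docker’s built-in prune filter:

# Remove unused local images older than 5 days (120 hours)
docker image prune -a --force --filter "until=120h"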

For a production scenario you may want to make sure your image quota is large enough to keep images you might need for a rollback, and adjust the script accordingly; you could also use your own storage solution for Docker container images, such as JFrog Artifactory or Nexus Repository.

Additionally, I also docker rmi the 0.0.$BUILD_NUMBER container image that I build at the very beginning of the deployment stage, since once the image has been pushed to the registry there is no need to store it twice: on the build machine and in the registry.

Deploying to a Kubernetes cluster

As you may know, Kubernetes is all the rage these days. Its feature list is impressive, and it is no wonder it has become the go-to system for orchestrating containers.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

I wanted to share my pipeline for building and updating containers in a Kubernetes cluster. In fact it’s quite straightforward. The pipeline includes: building a Docker container image, pushing the image to a container registry and updating the container image used in a Pod.

My environment is based on IBM Bluemix, so some commands may not apply to yours…

stage ("Publish to Kubernetes cluster") {
   environment {
      JENKINSBOT = credentials('credentials-ID')
   }

   when {
      branch "develop"
   }

   steps {
      script {
         STAGE_NAME = "Publish to Kubernetes cluster"

         // Login to Bluemix and the Bluemix Container Registry
         sh '''
            bx login ...
            bx cr login
         '''

         // Build the Docker container image and push to Bluemix Container Registry
         sh '''
            docker build -t registry.../myimage:0.0.$BUILD_NUMBER --build-arg NPM_TOKEN=${NPM_TOKEN} .
            docker push registry.../myimage:0.0.$BUILD_NUMBER
         '''

         // Check for image vulnerabilities - applies only if you have such a service...
         // Read the command output ("true"/"false" from the Go template) rather than its exit status
         isVulnerable = sh(script: "bx cr images --format '{{if and (eq .Repository \"registry.../myimage\") (eq .Tag \"0.0.$BUILD_NUMBER\")}}{{eq .Vulnerable \"Vulnerable\"}}{{end}}'", returnStdout: true)

         if (isVulnerable.trim() == "true") {
            error "Image may be vulnerable! failing the job."
         }

         // Apply Kubernetes configuration and update the pods in the cluster
         sh '''
            export KUBECONFIG=/home/idanadar/.bluemix/plugins/container-service/clusters/certmgmt/kube-config.yml
            kubectl set image deployment myimage myimage=registry.../myimage:0.0.$BUILD_NUMBER --record
         '''

         // If reached here, it means success. Notify
         slackSend (
            color: '#199515',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER> Kubernetes pod update passed successfully."
         )
      }
   }
}

Notes:
* I use $BUILD_NUMBER as the means to tag the image.
* I use a pre-defined export... to configure the session with the required configuration for the kubectl CLI to know which cluster to work with.
* The Bluemix Container Registry provides image scanning for vulnerabilities!
* I use kubectl set image ... to update the image used in the Pod(s). Works great with the replica setting.

More on Kubernetes in a later blog post.