Checking for vulnerabilities in Docker container images

It’s very likely that you are using a Docker container image for your cloud-native application, in which case you’re probably also worried about possible security vulnerabilities in your base image.

As part of my daily work (I work for IBM), I use the IBM Bluemix Container Registry service. This service offers a Vulnerability Advisor, which can let you know if there are any identified vulnerabilities in the image and also offers a detailed report. Nice.
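
For a quick manual check from a terminal, the detailed report is one CLI command away. Roughly (the image name below is the one I check in the pipeline later in this post):

# log in to Bluemix and to the Container Registry
bx login -a ... -u <user> -p <password>
bx cr login

# detailed Vulnerability Advisor report for a specific image
bx cr va registry.ng.bluemix.net/certmgmt_dev/node:6-alpine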

I decided to create a Jenkins job that will check daily for such issues.
Here it is.

Note: since this is part of an IBM Bluemix service, use of the Bluemix CLI and Container Registry plug-in is required.

#!groovy

pipeline {
    agent any

    stages {
        stage ("Check for vulnerability") {
            environment {
                JENKINSBOT = credentials('${JENKINSBOT_USERNAME_PASSWORD}')
            }
            steps {
                script {
                    // Login to Bluemix and the Bluemix Container Registry      
                    sh '''      
                        bx login -a ... -c ... -u $JENKINSBOT_USR -p $JENKINSBOT_PSW        
                        bx target -r ...    
                        bx cr login
                    '''

                    // Check for image vulnerability: the template prints "true" when the image is flagged as Vulnerable
                    isVulnerable = sh(script: "bx cr images --format '{{if and (eq .Repository \"registry.ng.bluemix.net/certmgmt_dev/node\") (eq .Tag \"6-alpine\")}}{{eq .Vulnerable \"Vulnerable\"}}{{end}}'", returnStdout: true).trim()

                    if (isVulnerable == "true") {
                        slackSend (
                            channel: "...",
                            color: "#F01717",
                            message: "@iadar *Vulnerability Checker*: base image vulnerability detected! Run the following for a detailed report: ```bx cr va registry-name/my-namespace/node:6-alpine```"
                        )
                    }
                }
            }
        }
    }
}
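
Since the whole point is a daily check, the job can also schedule itself rather than rely on an external trigger. A minimal sketch using a declarative cron trigger (the schedule string is only an example):

pipeline {
    agent any

    // run once a day, at a minute and hour Jenkins picks for load balancing
    triggers {
        cron('H H * * *')
    }

    stages {
        // ... the "Check for vulnerability" stage from above ...
    }
}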

A cleaning strategy for a Docker registry

With the DevOps pipeline maturing and multiple containers being deployed for multiple microservices, it became evident quite quickly that space was running out and a cleaning strategy was needed.

One way to do this is to clean the registry of images that are older than a set number of days, say, 5 days.

I am using the Bluemix Container Registry; if you are not, bx cr can simply be replaced with the equivalent docker commands.

In the pipeline, use:

sh '''
    # epoch timestamp of 5 days ago
    timestamp=$(date +%s -d "5 day ago")

    # print "repository:tag" for every image created at or before that timestamp, then delete those images
    bx cr images --format "{{if ge $timestamp .Created}}{{.Repository}}:{{.Tag}}{{end}}" | xargs -r bx cr image-rm
'''

I use the above snippet after I have successfully built the Docker container image, pushed it to the registry and updated the image (in my case, in the Kubernetes cluster).

So, I first save the epoch timestamp of 5 days ago in a shell variable. Then, using the --format flag (the CLI uses Go templates), I iterate through the images and compare each one’s creation date with the value in $timestamp. Once an image is 5 days old or more, I delete it.

The enclosed {{.Repository}}:{{.Tag}} is important: it outputs the image name and tag, which are then passed to the piped command that follows.

xargs -r ensures the piped command will not execute if nothing is passed to it (i.e., no images are 5 days old or more).
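
A quick way to see what -r (GNU xargs' --no-run-if-empty) changes:

printf '' | xargs -r echo "deleting:"   # prints nothing; echo is never run
printf '' | xargs echo "deleting:"      # prints "deleting:" once, with no arguments

Without -r, bx cr image-rm would be invoked with no image name and would most likely fail the step.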

For a production scenario you may want to make sure your image quota is large enough to keep older images around in case you need to roll back, and adjust the script accordingly, or possibly use your own storage solution for Docker container images, such as JFrog Artifactory, Nexus Repository, etc.
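
If counting images fits your quota better than their age, the same template trick can keep only the newest N images of a repository and delete the rest. A rough sketch, assuming (as the snippet above already does) that .Created is an epoch timestamp; the repository name and N=10 are placeholders:

sh '''
    # list one "CREATED REPOSITORY:TAG" line per image of a single repository,
    # sort newest-first by the timestamp, skip the 10 newest, delete the rest
    bx cr images --format "{{.Created}} {{.Repository}}:{{.Tag}}" \
        | grep "certmgmt_dev/node" \
        | sort -rn \
        | tail -n +11 \
        | cut -d" " -f2 \
        | xargs -r bx cr image-rm
'''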

Additionally, I also run docker rmi on the 0.0.$BUILD_NUMBER Docker container image that I build at the very beginning of the deployment stage, once it has been pushed to the registry; there is no need to store it twice, on the build machine and in the registry.
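
In pipeline terms that is just one more shell line after the push, e.g. (the image name is elided the same way as elsewhere in this post):

sh '''
    docker push registry.../myimage:0.0.$BUILD_NUMBER
    # the registry now holds the image, so there is no need to keep the local copy
    docker rmi registry.../myimage:0.0.$BUILD_NUMBER
'''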

Deploying to a Kubernetes cluster

As you may know, Kubernetes is all the rage these days. Its feature list is impressive, and it is no wonder it has become the go-to system for orchestrating containers.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

I wanted to share my pipeline for building and updating containers in a Kubernetes cluster. In fact, it’s quite straightforward. The pipeline includes building a Docker container image, pushing the image to a container registry and updating the container image used in a Pod.

My environment is based on IBM Bluemix, so some commands will not apply to other setups…

stage ("Publish to Kubernetes cluster") {
   environment {
      JENKINSBOT = credentials('credentials-ID')
   }

   when {
      branch "develop"
   }

   steps {
      script {
         STAGE_NAME = "Publish to Kubernetes cluster"

         // Login to Bluemix and the Bluemix Container Registry
         sh '''
            bx login ...
            bx cr login
         '''

         // Build the Docker container image and push to Bluemix Container Registry
         sh '''
            docker build -t registry.../myimage:0.0.$BUILD_NUMBER --build-arg NPM_TOKEN=${NPM_TOKEN} .
            docker push registry.../myimage:0.0.$BUILD_NUMBER
         '''

         // Check for image vulnerabilities - applies only if you have such a service...
         isVulnerable = sh(script: "bx cr images --format '{{if and (eq .Repository \"registry.../myimage\") (eq .Tag \"0.0.$BUILD_NUMBER\")}}{{eq .Vulnerable \"Vulnerable\"}}{{end}}'", returnStdout: true).trim()

         if (isVulnerable == "true") {
            error "Image may be vulnerable! Failing the job."
         }

         // Apply Kubernetes configuration and update the pods in the cluster
         sh '''
            export KUBECONFIG=/home/idanadar/.bluemix/plugins/container-service/clusters/certmgmt/kube-config.yml
            kubectl set image deployment myimage myimage=registry.../myimage:0.0.$BUILD_NUMBER --record
         '''

         // If reached here, it means success. Notify
         slackSend (
            color: '#199515',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER> Kubernetes pod update passed successfully."
         )
      }
   }
}
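
One addition worth considering right after the kubectl set image step (it is not part of the stage above): wait for the rollout to finish and fail the job if it does not. A sketch, assuming the deployment is named myimage as in the snippet:

sh '''
    export KUBECONFIG=/home/idanadar/.bluemix/plugins/container-service/clusters/certmgmt/kube-config.yml

    # blocks until the new pods are up; exits non-zero if the rollout fails
    kubectl rollout status deployment/myimage
'''

And because the update was made with --record, rolling back is a one-liner: kubectl rollout undo deployment/myimage.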

Notes:
* I use $BUILD_NUMBER as the means to tag the image.
* I use a pre-defined export of KUBECONFIG to configure the session so the kubectl CLI knows which cluster to work with (more on this right after this list).
* The Bluemix Container Registry provides image scanning for vulnerabilities!
* I use kubectl set image ... to update the image used in the Pod(s). Works great with the replica setting.
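
About the pre-defined export mentioned in the list: in my setup the KUBECONFIG path comes from the Bluemix container-service plug-in. If I recall the command correctly (treat this as an assumption and check your plug-in's help), it is along the lines of:

bx cs cluster-config certmgmt
# prints an export line similar to the one used in the pipeline:
# export KUBECONFIG=/home/idanadar/.bluemix/plugins/container-service/clusters/certmgmt/kube-config.yml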

More on Kubernetes in a later blog post.