Re-using a declarative pipeline in multiple repositories via a Shared Library

Do you have an identical declarative pipeline used in multiple repositories, and do you find yourself updating all of those Jenkinsfile copies every time you make a change?

Since September 2017, Shared Libraries in Jenkins support declarative pipelines. This means you can keep a single copy of the pipeline and load it from the Jenkinsfile of every repository that needs it. Here’s how to accomplish it.

Decide where you want to store the shared library, for example in a repository called “devops”.
Create a vars folder in this repository, and in that folder create a .groovy file that will contain your declarative pipeline.
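Using the example names above, the shared-library repository would look something like this:

devops/
└── vars/
    └── myPipeline.groovy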

myPipeline.groovy

// The call() method is what runs when a Jenkinsfile invokes myPipeline()
def call() {
    pipeline {
        // your pipeline code as-is
    }
}
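To make that more concrete, here is a minimal sketch of what the shared pipeline file might contain. The agent, stage names and shell commands below are placeholders for illustration, not part of the original setup:

def call() {
    pipeline {
        agent any
        stages {
            stage ("Build") {
                steps {
                    sh 'make build'   // placeholder build command
                }
            }
            stage ("Unit tests") {
                steps {
                    sh 'make test'    // placeholder test command
                }
            }
        }
        post {
            always {
                echo "Pipeline finished."
            }
        }
    }
}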

Set up Jenkins to reference the shared library.

  • Navigate to Manage Jenkins > Configure System > Global Pipeline Libraries
  • Enter the name of the shared library, e.g. “myCustomSharedLibrary”
  • Retrieval method: Modern SCM
  • Source Code Management: GitHub
  • Select the repository that contains the shared library

Save your modifications.

Next, in the repository where you want to use this pipeline, edit the Jenkinsfile to use the shared library:

Jenkinsfile

@Library("myCustomSharedLibrary") _
myPipeline()

Be sure not to remove the _ character; it’s required!

Jenkins now knows to look in the vars folder of the repository you’ve defined. It loads the Groovy file, and its call() method runs the pipeline in any job whose Jenkinsfile references it. Simple!


Explicitly triggering a Jenkins job using a keyword

Testing is important. A developer should implement unit tests to ensure the code does what it is meant to do, and integration tests with network mocks to verify, for example, that endpoints do what they’re supposed to do. Then there are end-to-end tests, which ensure that the code works properly in tandem with the other players in the system.

It’s good to have end-to-end test suites run continuously on a schedule, to catch errors that managed to sneak in. It’s also good to be able to trigger those end-to-end tests explicitly when you know you’ve added risky code and would like the extra check.

Similarly to how I implemented skipping builds in declarative pipelines, I applied the same concept here:

post {
    // Run end-to-end tests, if requested
    success {
        script {
            if (BRANCH_NAME == "develop") {
                // Did the latest commit message ask for an end-to-end run?
                def result = sh (script: "git log -1 | grep '.*\\[e2e\\].*'", returnStatus: true)
                if (result == 0) {
                    // Trigger the end-to-end job without waiting for it to finish
                    build job: '****/master', wait: false
                }
            }
        }
    }
    ...
}

Once a job for a micro-service has finished its run successfully, I check whether the latest git commit message contains the keyword “[e2e]”. If it does, the pipeline triggers the job that runs the end-to-end tests. Note that the end-to-end job is a multi-branch job, so I need to specify both the job name and the branch name.
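For example, a push to develop with a commit message like this one (a made-up message, just to illustrate the keyword) would queue the end-to-end job once the micro-service job goes green:

git commit -m "Tighten retry handling in the payments client [e2e]"
git push origin develop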

Skipping builds in a multi-branch job

Skipping builds in Jenkins is possible; there’s a “CI Skip” plug-in for Jenkins, but it doesn’t work with multi-branch jobs… How can you skip builds in that case? Here’s my take on it. Of course, feel free to modify it for your pipeline’s needs.

I originally wrote about this in my Stack Overflow question: https://stackoverflow.com/questions/43016942/can-a-jenkins-job-be-aborted-with-success-result

Some context
Let’s say a git push to GitHub was made, and it triggers either the pull_request or push webhook. The webhook basically tells Jenkins (if you have your Jenkins master’s URL set up there) that the job for this repository should start… but we don’t want a job to start for this specific code push. To skip it I could simply fail the build (instead of using the if statement below), but that would produce a “red” line in the Stage View area, and that’s not nice. I wanted a green line.

  1. Add a boolean parameter:

    pipeline {
        parameters {
            booleanParam(defaultValue: true, description: 'Execute pipeline?', name: 'shouldBuild')
        }
        ...
  2. Add the following right after the repository is checked out. We’re checking whether the very latest commit log contains “[ci skip]”:

    stages {
        stage ("Skip build?") {
            steps {
                script {
                    def result = sh (script: "git log -1 | grep '.*\\[ci skip\\].*'", returnStatus: true)
                    if (result == 0) {
                        echo ("This build should be skipped. Aborting.")
                        env.shouldBuild = "false"
                    }
                }
            }
        }
        ...
    }
    
  3. In each of the stages of the pipeline, check the flag, e.g.:

    stage ("Unit tests") {
        when {
            expression {
                return env.shouldBuild != "false"
            }
        }

        steps {
            ...
        }
    }

If shouldBuild is true, the stage will be run. Otherwise, it’ll get skipped.
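Putting the pieces together, a trimmed-down sketch of such a Jenkinsfile might look like this (the test stage and its shell command are placeholders for whatever your pipeline really does):

pipeline {
    agent any

    parameters {
        booleanParam(defaultValue: true, description: 'Execute pipeline?', name: 'shouldBuild')
    }

    stages {
        stage ("Skip build?") {
            steps {
                script {
                    // Flag the build as skippable if the latest commit message contains [ci skip]
                    def result = sh (script: "git log -1 | grep '.*\\[ci skip\\].*'", returnStatus: true)
                    if (result == 0) {
                        echo ("This build should be skipped. Aborting.")
                        env.shouldBuild = "false"
                    }
                }
            }
        }

        stage ("Unit tests") {
            when {
                expression { return env.shouldBuild != "false" }
            }
            steps {
                sh 'make test'   // placeholder test command
            }
        }
    }
}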

Tips for declarative pipelines in Jenkins

A few months back the guys over at Jenkins released a new take on creating pipelines, dubbed “Declarative”. You can read all about it on the Jenkins blog.

Being the new guy, I went for it as I knew nothing else. I’m glad I did, though, because I do like it. It allowed me to quickly ramp up and create pipelines for my team to start working, fast.

I’d like to share some small but useful tips from what I’ve learned thus far.
(I will update this blog post with more tips as time goes by).

Triggers

Apparently, in the old world of Jenkins, to schedule when a job should run you’d use a scheduler option in the job’s UI. But in a declarative pipeline, where a Jenkinsfile defines anything and everything, it wasn’t immediately clear to me how Jenkins would know when to start a job if its configuration lives in a static file in Git, away from Jenkins.

Well, the answer is obvious in retrospect. You first use the triggers directive to create a schedule, e.g. using cron:

triggers {
    // Run at 12:00, 16:00 and 20:00 on Monday through Thursday and on Sunday
    cron('0 12,16,20 * * 1-4,7')
}

Then, the first time the job runs, this trigger information gets saved in Jenkins, so it knows when to start the job automatically from then on. If you later need to update the schedule, make sure to run the job manually once so Jenkins picks up the change (or, if you have a GitHub webhook like me, it will run automatically after you commit and push the changes).

You can verify the configuration by going into the job and clicking the “View Configuration” button in the sidebar.
Like I said – it feels kinda obvious… but it’s not documented and wasn’t obvious to me at the time.

Colors in the console log

When I first started, I had to inspect the logs (and still do…) whenever stuff breaks. The logs looked bad: the output uses ANSI color codes, but by default Jenkins doesn’t render them.

To resolve this, install the AnsiColor plug-in and add the following:

pipeline {
   ...
   ...

   options {
      ansiColor('xterm')
   }

   stages {
      ...
      ...
   }
}

Email notifications

Email notifications are easy, but just know that if you’re using a multi-branch job type, you won’t be able to use the CHANGE_AUTHOR_EMAIL and CHANGE_ID environment variables unless you tick the Build origin PRs (merged with base branch) checkbox. Even then, these variables are only available when the job originated from a pull request. This is due to an open Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-40486.

Here I send an email to the author of a pull request whose build failed, e.g.:

emailext (
    attachLog: true,
    subject: '[Jenkins] $PROJECT_NAME job failed',
    to: "$env.CHANGE_AUTHOR_EMAIL",
    replyTo: 'your@email.address',
    body: '''You are receiving this email because your pull request failed during a job execution. Review the build logs.\nPlease find out the reason why and submit a fix.'''
)
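For context, this call usually lives in the pipeline’s post section so it only fires when the build fails. A trimmed-down sketch:

post {
    failure {
        emailext (
            subject: '[Jenkins] $PROJECT_NAME job failed',
            to: "$env.CHANGE_AUTHOR_EMAIL",
            body: 'Your pull request failed during a job execution. Review the build logs and submit a fix.'
        )
    }
}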

Slack notifications

Also easy. Here I try to make it very clear which stage of the pipeline failed, and I provide the job name along with links to the pull request and the build log, e.g.:

stage ("SonarQube analysis") {
    steps {   
        script {
            STAGE_NAME = "SonarQube analysis"
        }
        ...
    }
}
...

post {
    failure {
        slackSend (
            color: '#F01717',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER>, '$STAGE_NAME' stage failed."
        )
    }
}

Another message variation:

message: "my-repo-name/<$CHANGE_URL|PR-$CHANGE_ID>: <$BUILD_URL|Build #$BUILD_NUMBER> passed successfully."

Credentials

In declarative pipelines, you can now use the credentials() helper in the environment directive to obtain stored credentials more gracefully… for example, for username/password credentials:

stage ("do something") {
    environment {
        CRED = credentials('credentials-id-number')
    }         

    steps {
        sh "... $CRED.PSW / $CRED.USR"
    }
}

In addition to CRED itself (which holds username:password), two more environment variables are defined: CRED_USR with the username and CRED_PSW with the password.