Using multiple mocha reporters

Mocha, the “simple, flexible and fun” JavaScript test framework, provides several built-in reporters. By default you can only use one, but there may be situations where you want several, for example when you’d like the test report to be both visible in the build log and available in XML form à la JUnit. Luckily, you can combine these.

To do this:

  1. Install the following npm packages in your project:
    • mocha
    • mocha-junit-reporter
    • mocha-multi-reporters
  2. Create a config folder with a mocha-config.json file in it:
    {
        "reporterEnabled": "list,mocha-junit-reporter",
        "mochaJunitReporterReporterOptions": {
            "mochaFile": "testResults/results.xml"
        }
    }
    

    list is one of mocha’s built-in reporters. mochaFile is the path the test results will be written to (if the file or folder does not exist, it will be created).

  3. In your npm test command (or a custom command, e.g. npm run unit-tests), use mocha-multi-reporters as the reporter and point it at the config file, e.g.:

    "scripts": { 
     "test": "mocha --recursive --reporter mocha-multi-reporters --reporter-options configFile=config/mocha-config.json"
    }
    

    Note that by default mocha looks for a test folder at the root of the project. If your folder is named differently or you want mocha to look at a specific folder, state this explicitly, e.g. mocha test/unit-tests.
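
Since the generated XML is JUnit-style, it plugs straight into CI. For example, in a Jenkins declarative pipeline you could publish it with the junit step. A minimal sketch, assuming the npm script above and the mochaFile path from the config:

stage ("Unit tests") {
    steps {
        // Run the "test" script from package.json shown above
        sh "npm install"
        sh "npm test"
    }

    post {
        always {
            // Publish the JUnit-style XML produced by mocha-junit-reporter
            junit "testResults/results.xml"
        }
    }
}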

Explicitly triggering a Jenkins job using a keyword

Testing is important. To that end, a developer should write unit tests to ensure any implemented code does what it is meant to do, as well as integration tests with networking mocks to verify, for example, that endpoints do what they’re supposed to do. Then there are end-to-end tests to ensure that the code works properly in tandem with other players.

It’s good to set end-to-end test suites to run continuously on a schedule, to capture errors that managed to sneak in. It’s also good to trigger those end-to-end tests explicitly when you know you’ve added risky code and would like the extra check.

Similarly to how I’ve implemented skipping builds in declarative pipelines, I have implemented the same concept here as well:

post {
    // Run end-to-end tests, if requested
    success {
        script {
            if (BRANCH_NAME == "develop") {
                result = sh (script: "git log -1 | grep '.*\\[e2e\\].*'", returnStatus: true)
                if (result == 0) {
                    build job: '****/master', wait: false
                }
            }
        }
    }
    ...
}

Once a job for a micro-service has finished its run successfully, I check whether the latest git commit message contains the keyword “[e2e]” (e.g. a commit message like “Refactor auth flow [e2e]”). If it does, this triggers a run of the job that does the end-to-end testing. Note that this is a multi-branch job, so I need to specify both the job name and the branch name.
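
If the end-to-end job needs context from the build that triggered it, the build step can also pass parameters. A sketch (the TRIGGERING_BRANCH parameter is hypothetical; the downstream job would have to declare it):

build job: '****/master',
      wait: false,
      parameters: [
          // Hypothetical parameter, so the e2e job knows which branch triggered it
          string(name: 'TRIGGERING_BRANCH', value: env.BRANCH_NAME)
      ]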

Failing builds when developers misbehave (code coverage)

Developers. They’re kinda like little children sometimes, aren’t they? You have to keep an eye on ’em…

Joking aside though, code coverage is an important aspect in software development.

Code coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high code coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage.

Low code coverage typically occurs when new code has been implemented but tests have not yet been added for it.

One way to potentially avoid this is to develop the TDD (test-driven development) way, but not all teams like that. Another solution is to use tooling in the pipeline to fail a build when code coverage drops. One such tool is the open-source SonarQube.

The prerequisites are to:

  • Install the SonarQube server on a host machine
  • Install the appropriate Sonar scanner on the same host machine
  • Install the “Quality Gates” Jenkins plug-in
  • Install the “SonarQube Scanner for Jenkins” plug-in
  • Install the SonarJS plug-in in SonarQube’s Update Center

As for failing the build, I have set up a Quality Gate in the SonarQube dashboard to error when the coverage metric is below 80%.

My implementation is as follows:

stage ("SonarQube analysis") {
   steps {   
      script {
         STAGE_NAME = "SonarQube analysis"

         withSonarQubeEnv('SonarQube') {
            sh "../../../sonar-scanner-2.9.0.670/bin/sonar-scanner"   
         }

         // Check if coverage threshold is met, otherwise fail the job
         def qualitygate = waitForQualityGate()
         if (qualitygate.status != "OK") {
            error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
         }
      }
   }
}

withSonarQubeEnv('SonarQube') refers to my SonarQube configuration in Jenkins > Manage Jenkins > Configure System > SonarQube servers, where Name is set to SonarQube. Note that waitForQualityGate() relies on a webhook, configured in SonarQube, that notifies Jenkins when the analysis is complete; without it the step will wait indefinitely.
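
Since waitForQualityGate() blocks until that webhook arrives, it’s worth capping the wait. A minimal sketch (the 10-minute limit is my assumption, tune it to your analysis times):

timeout (time: 10, unit: 'MINUTES') {
    // Fail the stage if SonarQube hasn't reported back within the time limit
    def qualitygate = waitForQualityGate()
    if (qualitygate.status != "OK") {
        error "Pipeline aborted due to quality gate coverage failure: ${qualitygate.status}"
    }
}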

Now, don’t start with an 80% coverage requirement right away, let it simmer… start with 40% or 50% and increase it by 10% each passing week. This way developers will be ‘forced’ to add tests as they progress with their work, but won’t be totally pressured about it and will have enough time to cover everything.

Skipping builds in a multi-branch job

Skipping builds in Jenkins is possible: there’s a “CI Skip” plug-in for Jenkins, but it doesn’t work with multi-branch jobs… How can you skip builds in such a case? Here’s my take on it. Of course, feel free to modify it for your pipeline’s needs.

I originally wrote about this in my Stack Overflow question: https://stackoverflow.com/questions/43016942/can-a-jenkins-job-be-aborted-with-success-result

Some context
Let’s say a git push to GitHub was made, and this triggers either the pull_request or push webhook. The webhook basically tells Jenkins (if you have your Jenkins master’s URL set up there) that the job for this repository should start… but we don’t want a job to start for this specific code push. To skip it I could simply error the build (instead of using the if statement below), but that would produce a “red” line in the Stage View area, and that’s not nice. I wanted a green line.

  1. Add a boolean parameter:
    pipeline {
     parameters {
         booleanParam(defaultValue: true, description: 'Execute pipeline?', name: 'shouldBuild')
     }
     ...
    
  2. Add the following right after the repository is checked out. We’re checking whether the very latest commit log contains “[ci skip]” (e.g. a commit message like “Update README [ci skip]”):
    stages {
     stage ("Skip build?") {
         steps {
             script {
                 result = sh (script: "git log -1 | grep '.*\\[ci skip\\].*'", returnStatus: true)
                 if (result == 0) {
                     echo ("This build should be skipped. Aborting.")
                     env.shouldBuild = "false"
                 }
             }
         }
     }
     ...
    }
    
  3. In each of the stages of the pipeline, check… e.g.:
    stage ("Unit tests") {
     when {
         expression {
             return env.shouldBuild != "false"
         }
     }
    
     steps {
         ...
     }
    }
    

If shouldBuild is true, the stage will be run. Otherwise, it’ll get skipped.
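
One optional extra, my own addition rather than part of the original setup: label skipped builds so the green-but-skipped runs are easy to spot in the build history, e.g. in a post block:

post {
    always {
        script {
            if (env.shouldBuild == "false") {
                // Mark the (green) skipped build in the build history
                currentBuild.description = "Skipped due to [ci skip]"
            }
        }
    }
}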

Tips for declarative pipelines in Jenkins

A few months back the guys over at Jenkins released a new take on creating pipelines dubbed “Declarative”. You can read all about it in this blog post.

Being the new guy, I went for it as I knew nothing else. I’m glad I did, though, because I do like it. It allowed me to quickly ramp up and create pipelines for my team to start working, fast.

I’d like to share some small-but-useful tips from what I’ve learned thus far.
(I will update this blog post with more tips as time goes by).

Triggers

Apparently, in the old world of Jenkins, to schedule when a job should run you’d use a Scheduler option in the job’s UI. But in a declarative pipeline, where you’re using a Jenkinsfile to define anything and everything, it wasn’t immediately clear to me how Jenkins would know when to start a job if its configuration is in a static file in Git, away from Jenkins.

Well, the answer is obvious in retrospect. You first use the triggers directive to create a schedule, e.g. using cron:

triggers {
    cron ('0 12,16,20 * * 1-4,7')
}

Then, the first time this job runs, this information is saved in Jenkins so it’ll know when to automatically start the job again. The schedule above, for instance, runs the job at 12:00, 16:00 and 20:00 on Monday through Thursday and on Sunday. If you find you need to update the schedule, make sure to start the job manually once (or, if you have a GitHub webhook like me, it’ll start automatically after you commit and push the changes).

You can verify the configuration by going into the job and clicking the “View Configuration” button in the sidebar.
Like I said – it feels kinda obvious… but it’s not documented and wasn’t obvious to me at the time.

Colors in the console log

When I first started, I had to inspect the logs (and still do…) whenever stuff breaks. The logs looked bad, because the output used ANSI color codes, which Jenkins doesn’t render by default.

To resolve this, install the AnsiColor plug-in and add the following:

pipeline {
   ...
   ...

   options {
      ansiColor('xterm')
   }

   stages {
      ...
      ...
   }
}

Email notifications

Email notifications are easy, but just know that if you’re using a multi-branch job type, you will not be able to use the CHANGE_AUTHOR_EMAIL and CHANGE_ID environment variables unless you check the Build origin PRs (merged with base branch) checkbox. Then these variables will be available to you, but only when the job originated from a pull request. This is due to an open Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-40486.

Here I send an email to the author of a pull request whose build failed, e.g.:

emailext (
    attachLog: true,
    subject: '[Jenkins] $PROJECT_NAME job failed',
    to: "$env.CHANGE_AUTHOR_EMAIL",
    replyTo: 'your@email.address',
    body: '''You are receiving this email because your pull request failed during a job execution. Review the build logs.\nPlease find out the reason why and submit a fix.'''
)
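
Because CHANGE_ID and CHANGE_AUTHOR_EMAIL only exist for builds that originated from a pull request, I’d guard the notification so plain branch builds don’t attempt to email a missing address. A minimal sketch:

post {
    failure {
        script {
            // CHANGE_ID is only set when the build originated from a pull request
            if (env.CHANGE_ID) {
                emailext (
                    attachLog: true,
                    subject: '[Jenkins] $PROJECT_NAME job failed',
                    to: "$env.CHANGE_AUTHOR_EMAIL",
                    body: 'Your pull request failed during a job execution. Please review the attached build log and submit a fix.'
                )
            }
        }
    }
}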

Slack notifications

Also easy, using the Slack Notification plug-in. Here I try to make it very clear which stage in the pipeline failed, and provide the job name, a link to the pull request, and a link to the build log, e.g.:

stage ("SonarQube analysis") {
    steps {   
        script {
            STAGE_NAME = "SonarQube analysis"
        }
        ...
    }
}
...

post {
    failure {
        slackSend (
            color: '#F01717',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER>, '$STAGE_NAME' stage failed."
        )
    }
}

Another message variation:

message: "my-repo-name/<$CHANGE_URL|PR-$CHANGE_ID>: <$BUILD_URL|Build #$BUILD_NUMBER> passed successfully."

Credentials

In declarative pipelines, you can now use the credentials directive to more gracefully obtain stored credentials… for example for username/password credentials:

stage ("do something") {
    environment {
        CRED = credentials('credentials-id-number')
    }         

    steps {
        sh "... $CRED.PSW / $CRED.USR"
    }
}

The CRED variable holds the username and password pair, and Jenkins also defines two additional environment variables for it: CRED_USR (the username) and CRED_PSW (the password).
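
For example, a hypothetical sketch assuming a stored username/password credential whose ID is my-registry-creds:

stage ("Publish") {
    environment {
        // Hypothetical credentials ID; this also defines REGISTRY_USR and REGISTRY_PSW
        REGISTRY = credentials('my-registry-creds')
    }

    steps {
        // Single quotes: the shell resolves the variables, keeping them masked in the log
        sh 'curl -u $REGISTRY_USR:$REGISTRY_PSW https://registry.example.com/upload'
    }
}

Using single quotes in the sh step leaves the interpolation to the shell rather than Groovy, which keeps the secret values masked in the console output.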

Welcome

As you may have figured out, I’ll be using this space to write about random stuff and problems I encounter as I venture into the world of DevOps.

git, Jenkins pipelines, Linux, scripting, Docker, Containers, Kubernetes, Cloud Foundry, micro-services, clusters, regions, cloud native, nodejs, …

Oh man, what did I get myself into.