A few months back the guys over at Jenkins released a new take on creating pipelines dubbed “Declarative”. You can read all about it in this blog post.
Being the new guy, I went for it since I knew nothing else. I’m glad I did, though, because I do like it: it allowed me to ramp up quickly and create pipelines my team could start working with, fast.
I’d like to share some small but useful tips from what I’ve learned thus far.
(I will update this blog post with more tips as time goes by).
Triggers
Apparently, in the old world of Jenkins, to schedule when a job should run you’d use the Scheduler option in the job’s UI. But in a declarative pipeline, where a Jenkinsfile defines anything and everything, it wasn’t immediately clear to me how Jenkins would know when to start a job if its configuration lives in a static file in Git, away from Jenkins.
Well, the answer is obvious in retrospect. You first use the triggers directive to create a schedule, e.g. using cron:
triggers {
    cron ('0 12,16,20 * * 1-4,7')
}
Then, the first time the job runs, this schedule is saved in Jenkins so it knows when to start the job automatically from then on. If you find you need to update the schedule, make sure to start the job manually once so Jenkins picks up the change (or, if you have a GitHub webhook like me, it’ll start automatically after you commit and push the changes).
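For reference, here’s how the directive fits into a complete minimal pipeline (the agent and the build step here are just placeholders of mine):
pipeline {
    agent any
    triggers {
        // 12:00, 16:00 and 20:00 on Mon-Thu and Sunday
        cron ('0 12,16,20 * * 1-4,7')
    }
    stages {
        stage ('Build') {
            steps {
                sh 'make'   // placeholder build step
            }
        }
    }
}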
You can verify the configuration by going into the job and clicking the “View Configuration” button in the sidebar.
Like I said – it feels kinda obvious… but it’s not documented and wasn’t obvious to me at the time.
Colors in the console log
When I first started, I had to inspect the logs (and still do…) whenever stuff breaks. The logs looked bad, because the output uses coloring and by default Jenkins doesn’t render it.
To resolve this, install the AnsiColor plug-in and add the following:
pipeline {
    ...
    options {
        ansiColor('xterm')
    }
    stages {
        ...
    }
}
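If you’d rather limit the coloring to specific steps instead of the entire pipeline, the same plug-in also provides an ansiColor wrapper step, e.g. (the shell command is just a placeholder):
steps {
    ansiColor('xterm') {
        sh './run-tests.sh'   // any command whose output uses ANSI color codes
    }
}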
Email notifications
Email notifications are easy, but just know that if you’re using a multi-branch job type, you will not be able to use the CHANGE_AUTHOR_EMAIL and CHANGE_ID environment variables unless you set the Build origin PRs (merged with base branch) checkbox. Even then, these variables are only available when the build originated from a pull request. This is due to an open Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-40486.
Here I send an email to the author of a pull request whose build failed, e.g.:
emailext (
    attachLog: true,
    subject: '[Jenkins] $PROJECT_NAME job failed',
    to: "$env.CHANGE_AUTHOR_EMAIL",
    replyTo: 'your@email.address',
    body: '''You are receiving this email because your pull request failed during a job execution. Review the build logs.\nPlease find out the reason why and submit a fix.'''
)
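A natural place for this call is a post block, so it only runs when the build fails. And since the CHANGE_* variables exist only for builds that originated from a pull request (see the bug above), a small guard avoids errors on regular branch builds. A sketch:
post {
    failure {
        script {
            // CHANGE_* variables are only defined for PR-originated builds
            if (env.CHANGE_ID) {
                emailext (...)   // the emailext call from above goes here
            }
        }
    }
}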
Slack notifications
Also easy. Here I try to make it very clear which stage in the pipeline failed, and provide the job name and links to the pull request and the build log, e.g.:
stage ("SonarQube analysis") {
steps {
script {
STAGE_NAME = "SonarQube analysis"
}
...
}
}
...
post {
    failure {
        slackSend (
            color: '#F01717',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER>, '$STAGE_NAME' stage failed."
        )
    }
}
Another message variation:
message: "my-repo-name/<$CHANGE_URL|PR-$CHANGE_ID>: <$BUILD_URL|Build #$BUILD_NUMBER> passed successfully."
Credentials
In declarative pipelines, you can now use the credentials directive to obtain stored credentials more gracefully… for example, for username/password credentials:
stage ("do something") {
environment {
CRED = credentials('credentials-id-number')
}
steps {
sh "... $CRED.PSW / $CRED.USR"
}
}
The CRED variable will have two additional variables defined alongside it, CRED_USR and CRED_PSW, holding the username and password respectively.
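One extra tip of my own: inside sh steps, prefer a single-quoted Groovy string so the secret is expanded by the shell at run time rather than interpolated by Groovy. The curl call here is just a hypothetical example:
steps {
    // Single quotes: Groovy doesn't interpolate; the shell reads
    // CRED_USR / CRED_PSW from the environment at run time.
    sh 'curl -u "$CRED_USR:$CRED_PSW" https://example.com/api'
}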