Using Helm helper files to manage shared properties across multiple micro-services

In Helm we use Charts. In cloud services, a service is typically split into multiple micro-services.
In Helm lingo, then, a micro-service can be considered a Chart.

Despite being split into multiple micro-services, it is likely that some properties will be the same across several of them. You would typically resolve this either by duplicating the properties in each micro-service’s ConfigMap (not recommended) or by using a “common ConfigMap” (works well).

But what do you do if you want to switch over to Helm? In Helm you can’t share a common ConfigMap across Charts, since a Chart can only manage its own “resources” – if you declare the same common ConfigMap in multiple Charts, Helm will complain that the resource already exists, and creating a separate Chart just for the common ConfigMap seems a bit much (though doable) IMO.

To the rescue come Helm helper (.tpl) files, also called “Partials”. These files are basically “includes” that can be embedded into existing files while a Chart is being installed. Let’s see an example.

Let’s assume you want to add some common properties to your ConfigMap. To do so, create a .tpl file and make sure the filename starts with an underscore.

_my-common-configmap.tpl:

{{- define "my-common-configuration" -}}
# Evironment
clusterEnv: {{ index .Values.global.clusterEnv .Values.global.env | quote }}
namespace: {{ index .Values.global.namespace .Values.global.env | quote }}

Also create a YAML file to store the values (assuming you need different common properties for different environments…).

my-values.yaml:

---
global:
  # env refers to the environment the properties belong to at build time. Default is "development".
  env: development

  clusterEnv:
    development: staging
    pre_production: preprod

  registry:
    development: dev-images
    pre_production: preprod-images

Next, in the micro-service’s (i.e., the Chart’s) ConfigMap template, include the helper file.
Note the use of include to load the configuration data.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap-name
  namespace: {{ index .Values.global.namespace .Values.global.env }}
data:
  log_level: {{ index .Values.global.log_level .Values.global.env | quote }}
  ...
  ...
  ...
{{ include "my-common-configuration" . | indent 2 }}

The generated result would look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap-name
  namespace: dev
data:
  log_level: "debug"
  ...
  ...
  ...
  # Environment
  clusterEnv: "staging"
  registry: "dev-images"

You can repeat this in each micro-service that needs the common properties.
And of course, remember to specify the properties in your Deployment…

- name: clusterEnv
  valueFrom:
    configMapKeyRef:
      name: my-configmap-name
      key: clusterEnv

But wait, what do I actually do with these files?
The .tpl file should be placed in your Chart’s templates folder and the values YAML file at the root of the Chart directory.
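
For illustration, a single micro-service’s chart could be laid out like this, using the file names from the example above (the Chart.yaml and the chart’s own values.yaml are the usual chart files; configmap.yaml stands for the micro-service’s own ConfigMap template shown earlier):

my-microservice/
  Chart.yaml
  values.yaml
  my-values.yaml
  templates/
    _my-common-configmap.tpl
    configmap.yaml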

Since the files are shared across multiple micro-services, and each micro-service likely lives in its own repository, store them in a shared Helm or DevOps repository. Configure your pipeline so that these files get picked up and placed in the right location during job execution, for example:
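
A minimal sketch of such a pipeline step, assuming the shared files are checked out into a helm-common directory and each chart lives under config/helm/<micro-service name> (both paths are assumptions for the example):

# Copy the shared helper and values files into the micro-service's chart
# before running helm lint / template / install (paths are illustrative).
cp helm-common/templates/_my-common-configmap.tpl config/helm/my-microservice/templates/
cp helm-common/my-values.yaml config/helm/my-microservice/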

If you’re wondering how to determine which value gets picked from the values YAML file, see my previous blog post, Upgrading to Helm.

Additional reading material

Upgrading to Helm

Helm, the Package Manager for Kubernetes, is a wonderful addition for anyone doing DevOps work with Kubernetes.

With Helm you can:

  • Manage development and production properties
  • Validate and test configuration resources prior to deployment
  • Group deployment resources into Releases
  • Rollback failed deployments

Learn about the Helm basics in the Helm documentation.

Manage development and production properties

You can structure the values.yaml file in a way that allows simpler access to environment-specific properties. For example:

---
global:
  env: development

name: myMicroService
tag:
  development: "0.0.1"
  production: "0.0.2"
log_level:
  development: debug
  production: info

How do you choose which environment to use? When running commands such as helm lint, helm template, helm install or helm upgrade, you can pass the env value with --set. The template engine then knows which property to apply to the template:

sh """
    helm template . --set "env=$ENV"
"""
  • $ENV refers to an environment variable set to either “development” or “production” during the lifespan of a pipeline execution.

Note that this also requires modifying the Kubernetes resource template files. The index function looks up the entry whose key matches global.env, so with env set to development the expression below resolves to log_level.development. For example, in the ConfigMap.template.yaml file:

log_level: {{ index .Values.global.log_level .Values.global.env | quote }}

Learn more about the Go template functions available to Helm in the Sprig library documentation.

Validate and test configuration resources prior to deployment

helm template

To generate the actual Kubernetes configuration resources based on your resource template files before executing the pipeline, use helm template.
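
For example, a minimal invocation run from the chart directory (the output file name is just an illustration):

# Render the chart's templates locally without installing anything;
# review the generated manifests before they ever reach the cluster.
helm template . --set "global.env=development" > rendered-resources.yaml
cat rendered-resources.yaml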

helm lint

To test the syntax of your configuration resources in your pipeline, use helm lint:

lintResults = sh(script: "cd config/helm/${microServiceName};helm lint -f common-values.yaml || true", returnStdout:true)
if ((lintResults.contains("[ERROR]")) || (lintResults.contains("[WARNING]"))) {
   currentBuild.result = "FAILURE"
   env.shouldBuild = "false"

   slackSend (
       color: '#F01717',
       message: "*$JOB_NAME*: <$BUILD_URL|Build #$BUILD_NUMBER>, validation stage failed.\n\n${lintResults}"
   )
}

Note: the file common-values.yaml refers to an external values.yaml-like file containing properties that are used by multiple Charts.

Rollback failed deployments

If helm install/upgrade fails, you can revert to the previous successful release. For example:

helm rollback <release name> <revision number> --force --wait --timeout 180

To get the revision number, run

helm history <release name>
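
Putting the two together, a rollback step could look roughly like this (the release name is illustrative, and the revision parsing assumes the default tabular output of helm history, so verify it against your Helm version):

RELEASE="my-release"
# The last line of "helm history" is the current (failed) revision;
# the one before it is the revision we want to return to.
PREVIOUS_REVISION=$(helm history "$RELEASE" | tail -2 | head -1 | awk '{print $1}')
helm rollback "$RELEASE" "$PREVIOUS_REVISION" --force --wait --timeout 180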

Verifying a Kubernetes Deployment in a Jenkins pipeline

One of the tests I had been missing in my pipeline was a crucial one: verifying that the deployment finished successfully, i.e. that the pods were updated with a new Docker container image, but also restarted successfully(!).

It’s pretty simple I suppose, but here is my interpretation.
The following should sit inside a step:

// Verify deployment
sh '''
   export KUBECONFIG=/.../kube-config-dal12-mycluster.yml
   kubectl rollout status deployment/serviceapi --namespace=\"my-namespace\" | tail -1 > deploymentStatus.txt
'''

def deploymentStatus = readFile('deploymentStatus.txt').trim()
echo "Deployment status is: ${deploymentStatus}"
if (deploymentStatus == "deployment \"serviceapi\" successfully rolled out") {
   sh "rm -rf deploymentStatus.txt"

   // Additional logic
} else {
   error "Pipeline aborted due to deployment verification failure. Check the Kubernetes dashboard for more information."
}
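
Alternatively, kubectl rollout status exits with a non-zero status code when the rollout fails, so you could rely on the exit code instead of parsing the command’s output. A sketch, reusing the deployment and namespace from the example above (the --timeout value is an assumption):

export KUBECONFIG=/.../kube-config-dal12-mycluster.yml
# Wait up to three minutes; a non-zero exit code fails the shell step
# and with it the pipeline stage.
kubectl rollout status deployment/serviceapi --namespace=my-namespace --timeout=180s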

Deploying to a Kubernetes cluster

As you may know, Kubernetes is all the rage these days. Its feature list is impressive, and it is no wonder it is the go-to system for orchestrating your containers.

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

I wanted to share my pipeline for building and updating containers in a Kubernetes cluster. In fact, it’s quite straightforward. The pipeline includes building a Docker container image, pushing the image to a container registry, and updating the container image used in a Pod.

My environment is based on IBM Bluemix, so some commands may not apply to your setup…

stage ("Publish to Kubernetes cluster") {
   environment {
      JENKINSBOT = credentials('credentials-ID')
   }

   when {
      branch "develop"
   }

   steps {
      script {
         STAGE_NAME = "Publish to Kubernetes cluster"

         // Login to Bluemix and the Bluemix Container Registry
         sh '''
            bx login ...
            bx cr login
         '''

         // Build the Docker container image and push to Bluemix Container Registry
         sh '''
            docker build -t registry.../myimage:0.0.$BUILD_NUMBER --build-arg NPM_TOKEN=${NPM_TOKEN} .
            docker push registry.../myimage:0.0.$BUILD_NUMBER
         '''

         // Check for image vulnerabilities - applies only if you have such a service...
         isVulnerable = sh(script: "bx cr images --format '{{if and (eq .Repository \"registry.../myimage\") (eq .Tag \"0.0.$BUILD_NUMBER\")}}{{eq .Vulnerable \"Vulnerable\"}}{{end}}'", returnStdout: true).trim()

         if (isVulnerable == "true") {
            error "Image may be vulnerable! failing the job."
         }

         // Apply Kubernetes configuration and update the pods in the cluster
         sh '''
            export KUBECONFIG=/home/idanadar/.bluemix/plugins/container-service/clusters/certmgmt/kube-config.yml
            kubectl set image deployment myimage myimage=registry.../myimage:0.0.$BUILD_NUMBER --record
         '''

         // If reached here, it means success. Notify
         slackSend (
            color: '#199515',
            message: "$JOB_NAME: <$BUILD_URL|Build #$BUILD_NUMBER> Kubernetes pod update passed successfully."
         )
      }
   }
}

Notes:
* I use $BUILD_NUMBER as the means to tag the image.
* I use a pre-defined export... to configure the session so that the kubectl CLI knows which cluster to work with.
* The Bluemix Container Registry provides image scanning for vulnerabilities!
* I use kubectl set image ... to update the image used in the Pod(s). Works great with the replica setting.

More on Kubernetes in a later blog post.