Our Jenkins cluster had become a paralyzing mass of jobs, executor dependencies, and general complexity: not what you want from the tool at the heart of a small engineering team. The result of our Jenkins disorganization was fear. No one wanted to touch Jenkins, never mind improve it. Something had to change.
As we looked toward what we wanted from Jenkins, we laid out these goals:
- Ensure all tests could be run locally by developers (not just unit tests)
- Enable our developers to contribute to our CI/CD process without deep knowledge of Jenkins
- Store all aspects of Jenkins in version control (Master & Executor setup, Jobs, etc.)
What follows is an overview of the patterns we used to simplify and streamline our CI/CD workflow, and empower our engineers:
Jenkins Pipelines

Jenkins Pipelines became available with the release of Jenkins 2.0. Pipelines mark a key shift from the traditional approach to building Jenkins jobs, either through the GUI or the Groovy DSL.
The Jenkins Pipeline approach breaks your CI/CD process into a series of discrete, declarative steps. When a step fails, subsequent steps don’t run. Even better, instead of living in Jenkins, pipeline steps are encapsulated in your application’s source code. This approach has a couple of profound implications:
- Changes to a project’s CI/CD release pipeline are managed on a per project basis. This allows CI/CD changes to be tested prior to being merged to the master branch. Pipeline changes can be peer reviewed through the Pull Request workflow, allowing better visibility to the larger team.
- The dependencies of a release pipeline are included in the project. With a bit of careful design, your developers can build and run each step locally prior to committing changes. This empowers developers to meaningfully contribute without deep knowledge of Jenkins. For our small team, this empowerment radically reduces the task load of those people responsible for maintaining Jenkins.
As we ported our complex build and test process from Jenkins DSL jobs to Pipelines, we saw reduced complexity, more contributions to our CI/CD flow, and fewer failures caused by fragile plugins.
Docker and Docker Compose

We’ve been heavy users of Docker for a long time. It’s an incredibly powerful tool because it allows us to package all of an application’s dependencies within a single lightweight container that runs nearly anywhere. With Docker Compose, we can construct clusters of containers, allowing us to create complex environments for robust integration testing.
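As a sketch of such a test environment (the service names and images below are illustrative, not taken from our projects), a docker-compose.yml might pair the application under test with a throwaway database:

```yaml
# docker-compose.yml: hypothetical integration test environment.
# The app container runs the test suite against a disposable Postgres.
version: '3'
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://postgres:postgres@db/app_test
    depends_on:
      - db
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: postgres
```

Developers and CI then exercise the same environment the same way, for example with `docker-compose run app ./test.sh`.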
Docker also plays a large role in reducing the complexity of our Jenkins Executors (the servers responsible for executing tests). Executors receive just two additional tools beyond the base Jenkins requirements: Docker and Summon (we’ll talk about Summon in more detail later in this post). With Docker, we no longer need to install tool-specific dependencies on our Jenkins Executors. A few examples include:
- AWS CLI
- Ansible & Molecule
- Test Kitchen
Moving these tools into Docker containers means we never need to install new packages on our Jenkins Executors; we just pull (or build) a new image. This has made upgrading tools a snap, massively simplified Executor setup, and made it easy for our developers to troubleshoot CI/CD issues locally.
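As an illustration (the image name and flags here are an assumption, not our exact setup), running the AWS CLI from a container on an executor looks something like:

```shell
# Instead of installing the AWS CLI on the executor, run it from a
# container. Credentials are passed through from the environment,
# so nothing AWS-specific lives on the executor itself.
docker run --rm \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  amazon/aws-cli s3 ls
```

Upgrading the tool then amounts to pulling a newer image tag.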
Distinct Stages and Steps
A pattern we’ve adopted for our Jenkinsfile is to make each of our pipeline stage steps a single shell file. For example, we might have the following stages (and files) in a project:
- Test (test.sh)
- Package (package.sh)
- Publish (publish.sh)
Each stage has a single file that encapsulates the actions required of that stage. This has made it very easy to build and test individual stages locally. As an example, here’s a snippet of a Jenkinsfile from one of our projects:
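In sketch form (the agent label below is a placeholder; stage and script names mirror the list above), the shape is:

```groovy
// Jenkinsfile: each stage delegates to a single shell script that
// developers can also run locally.
pipeline {
  agent { label 'executor' }

  stages {
    stage('Test') {
      steps { sh './test.sh' }
    }
    stage('Package') {
      steps { sh './package.sh' }
    }
    stage('Publish') {
      steps { sh './publish.sh' }
    }
  }
}
```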
For a clearer picture of what our pipeline looks like, here are examples of the full Jenkinsfile, test.sh, publish.sh, and supporting docker-compose.yml files:
Small, simple steps keep complexity out of our Jenkins pipeline, and let the team write and run pipeline steps locally.
Credential Management with Summon and Conjur
Secrets and automation have never been a great experience in Jenkins. We didn’t want to manage credentials through the UI, which makes rotation especially painful. Fortunately, our core product is built for managing and rotating credentials (check out the open source version at conjur.org). Each Jenkins executor receives a Conjur identity when it’s provisioned. Summon uses the executor’s identity to authenticate and retrieve credentials from Conjur.
Executor labels are used to ensure jobs run on executors with permission to access the required credentials. This lets us give each executor access to only those secrets it needs to perform its function. As an example, the executor labeled releaser has access to our RubyGems credentials, and runs the job responsible for publishing our API gem:
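In sketch form (the summon invocation and script name are illustrative, not our exact job), that stage looks something like:

```groovy
// Runs only on executors carrying the 'releaser' label, whose Conjur
// identity is permitted to read the RubyGems credentials.
stage('Publish') {
  agent { label 'releaser' }
  steps {
    sh 'summon ./publish.sh'
  }
}
```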
Summon is an open source tool that makes it dead simple to get credentials into a process securely. Here’s an example exporting a value stored in Conjur:
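This invocation mirrors the description that follows; it assumes the summon-conjur provider is installed and the executor’s Conjur identity is already configured:

```shell
# Fetch rubygems/api-key from Conjur and expose it to the child
# process (`env`) as RUBYGEMS_API_KEY. Nothing is written to disk.
summon --yaml 'RUBYGEMS_API_KEY: !var rubygems/api-key' env
```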
Here, Summon retrieves the credential stored in the Conjur variable rubygems/api-key and makes it available to the child process (env) as an environment variable. Alternatively, for multiple credentials, Summon can use a YAML file (secrets.yml by default) for variable mapping:
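For example, a secrets.yml can mix a Conjur-backed secret with a literal value (the RAILS_ENV entry here is hypothetical):

```yaml
# secrets.yml: the !var entry is fetched from Conjur at runtime;
# the plain entry is passed through unchanged.
RUBYGEMS_API_KEY: !var rubygems/api-key
RAILS_ENV: production
```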
Only values that begin with the keyword !var are resolved through Conjur, which lets Summon manage additional environment variables as well. Summon would resolve the above secrets.yml file as:
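Assuming a secrets.yml with a !var entry for rubygems/api-key plus a literal RAILS_ENV entry, the child process’s environment would contain something like:

```
RUBYGEMS_API_KEY=<value of rubygems/api-key, fetched from Conjur>
RAILS_ENV=production
```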
With Summon, we keep secrets off disk and can provide nearly any process access to secrets stored in Conjur. Summon is written in Go and compiled for a number of operating systems, and it offers a number of providers in addition to Conjur, including S3, Keychain, Chef encrypted data bags, and more. You can read more about Summon on the project page: https://cyberark.github.io/summon/
With our secrets/credentials outside Jenkins, we can easily rotate passwords without manually updating Jenkins.
Most of our currently active projects take advantage of Jenkins Pipelines. Although we’ve made great strides in simplifying and streamlining our project pipelines, code duplication is still a problem between projects. We’re exploring Pipeline Plugins as a possible tool for sharing core functionality across projects. We’re also looking into approaches to provide more isolated access to credentials within our pipeline stages.
CI/CD is about continuous improvement. We’ll continue to improve our process for testing and release to help us deliver value faster. If you want to know more about how we use Conjur and Jenkins, come join the conversation on the CyberArk Commons. We’d love to hear your story.