Advanced Jenkins Parallel Builds (And Jenkins Distributed Builds)

Joseph Sibony
July 31, 2024

Jenkins is a great CI tool when it comes to flexibility.

Every simple thing can be done in ten different ways, including parallelizing a build. 

This article is going to take the parallel stages directive in Jenkins and extend its flexibility wherever it’s lacking.
We’ll do that by introducing some imperative logic into the pipeline’s definition.

Some Jenkins foundations


Before jumping into parallelization and all of that complicated stuff, let’s lay some very basic but extremely important foundations.

In essence, Jenkins is a very elaborate way of running scripts. It has a wide array of plugins, you can run basically any language supported by the agent executing the job, and you have a rich DSL for defining the jobs that run your scripts.

However, the real game-changer that allowed flexible job execution was the introduction of the pipeline plugin back in 2016.

All of a sudden you could define a job with a comfortable declarative DSL, break down a single job into multiple stages, and even run each stage on a different agent.

As a result, the Jenkins CI/CD pipeline ended up being an enabler for parallelizing jobs, and eventually the ‘parallel’ directive was introduced to Jenkins pipelines.

The ‘parallel’ directive allows running multiple stages in parallel simply by wrapping the stages you want to parallelize with it, as seen in the following example.

Jenkinsfile

pipeline {
    agent any
    stages {
        stage("Compile & Build Binary") {
            parallel {
                stage("Build X") {
                    steps {
                        sh('cd /path/to/proj-1 && make && make publish')
                    }
                }
                stage("Build Y") {
                    steps {
                        sh('cd /path/to/proj-2 && make && make publish')
                    }
                }
            }
        }
    }
}

The above example presents the parallel building of multiple projects, by simply wrapping the stages we want to parallelize within a ‘parallel’ scope.

If you want an even more solid foundation on parallelizing builds with Jenkins and other CI systems, you can check out this article, which dives deeper into the basics of the parallel directive.

So What Is This Article Going to Be About?

By extending parallelization in Jenkins with imperative logic, I mean that we will take the declarative Jenkinsfile that enables Jenkins parallel builds and add capabilities where the parallel stages directive is lacking.

So, Where Is the Parallel Stages Feature Lacking?

It’s lacking mostly when it comes to dynamically deciding which stages should run in parallel. When the set of parallel stages is known statically, the declarative approach works well. But when the decision needs to be made at runtime, a new approach is required for a more advanced way of doing Jenkins parallel builds.

Here’s an example:

Let’s say we have a Git repository that contains multiple C++ projects, and our goal is to parallelize all of the projects’ builds.

  • New projects are being added to it on a weekly basis, and existing projects are being modified on a daily basis. 
  • The repository’s structure is such that for each project there is a directory in the root of the repository. 
  • Within every project, there’s a Makefile for building that project. 
  • There’s also a Jenkinsfile in the root of the repository in which the Jenkins Pipeline for the repository is defined.
company-a
├── Jenkinsfile
├── project-x
│   └── Makefile
└── project-y
    └── Makefile

Whenever a change is pushed to any branch of that repository, a Jenkins pipeline that builds the artifact of each project is triggered.

This is the content of the Jenkinsfile that defines that pipeline.

pipeline {
    agent any
    stages {
        stage("Compile & Build Binary") {
            parallel {
                stage("Build X") {
                    steps {
                        dir("project-x") {
                            sh('make && make publish')
                        }
                    }
                }
                stage("Build Y") {
                    steps {
                        dir("project-y") {
                            sh('make && make publish')
                        }
                    }
                }
            }
        }
    }
}

Now, let’s say a developer is adding a new project, ‘project-j’, to the repository:

company-a
├── Jenkinsfile
├── project-j
│   └── Makefile
├── project-x
│   └── Makefile
└── project-y
    └── Makefile

The developer pushes the changes to a branch but doesn’t see the artifact of the new project they have just added.

Can you guess why? It’s because they didn’t add a stage to the Jenkinsfile for the new project!

So, after adding the new project, the Jenkinsfile should look like this:

pipeline {
    agent any
    stages {
        stage("Compile & Build Binary") {
            parallel {
                stage("Build X") {
                    steps {
                        dir("project-x") {
                            sh('make && make publish')
                        }
                    }
                }
                stage("Build Y") {
                    steps {
                        dir("project-y") {
                            sh('make && make publish')
                        }
                    }
                }
                stage("Build J") {
                    steps {
                        dir("project-j") {
                            sh('make && make publish')
                        }
                    }
                }
            }
        }
    }
}

As you might have already guessed, the main aspect where the declarative pipeline is lacking is that it’s not dynamic.

Declarative is not dynamic by nature, but we can change that.

We are going to tackle this issue by making our pipeline a bit less declarative (just like we promised!), but much more powerful. 

We will do that by generating the stages that will run in parallel, dynamically, based on the repository’s structure.

Choosing which projects to build dynamically based on the repository’s structure can be done in 2 steps:

  1. Generate the list of projects to be built – We’ll use the Pipeline Utility Steps plugin’s findFiles function for that
  2. Generate parallel stages based on the list

We are going to implement both of these steps using some Groovy code:

def parallelStages = [:]
def projectsToBuild = []

pipeline {
    agent any
    stages {
        stage("Compile & Build Binary") {
            steps {
                script {
                    // Find directories (for simplicity's sake, all directories)
                    def files = findFiles()
                    files.each { f ->
                        if (f.directory) {
                            projectsToBuild.add(f.name)
                        }
                    }

                    // Generate one parallel branch per project
                    projectsToBuild.each { p ->
                        parallelStages[p] = {
                            node {
                                // Check out the repository so the sources exist in this node's workspace
                                checkout scm
                                dir(p) {
                                    stage(p) {
                                        sh('make && make publish')
                                    }
                                }
                            }
                        }
                    }

                    // Run all of the generated stages in parallel
                    parallel parallelStages
                }
            }
        }
    }
}

So the code presented above locates all of the project directories, builds a map of stages where each stage builds a different project, and then runs all of the stages in parallel.

This means that without updating the Jenkinsfile each time we have a new project, all projects will be built in parallel.
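If you want to be stricter about what counts as a project, one option is to only treat directories that actually contain a Makefile as projects. Here is a minimal sketch of that filter, assuming the same findFiles step used above together with the core fileExists step; it is an optional refinement rather than part of the original example:

// Only add directories that contain a Makefile to the list of projects
def files = findFiles()
files.each { f ->
    if (f.directory && fileExists("${f.name}/Makefile")) {
        projectsToBuild.add(f.name)
    }
}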

Dynamically Distributing Jenkins Builds Across Machines

We opened this article with Jenkins parallel builds, but we also mentioned that the Jenkins pipeline plugin exposed many new capabilities, one of them being the ability to run different stages on different nodes.

This means that one stage can run on a certain machine while another stage of the same pipeline runs on another machine, giving you a Jenkins distributed build and not just a Jenkins parallel build.

This fact comes in handy when we want to build multiple projects in parallel since it allows us to distribute the load of building each project to a separate machine, and thus get faster build times, or even meet specific criteria for the build.

In a declarative pipeline this is achieved with the ‘agent’ directive; in the scripted blocks we generate dynamically, the equivalent is passing a label to the ‘node’ step.
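For reference, in a purely declarative stage the same idea looks like the sketch below; the 'linux' label is just an illustrative example and would need to match a label configured on one of your agents:

stage("Build X") {
    // Run this stage on any agent carrying the (hypothetical) 'linux' label
    agent { label 'linux' }
    steps {
        dir("project-x") {
            sh('make && make publish')
        }
    }
}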

Let’s elaborate on the previous dynamic example by passing a label to each ‘node’ call.

To keep the demonstration simple, we’ll use the built-in (master) node for all stages.

def parallelStages = [:]
def projectsToBuild = []
def chosenAgent = "master"

pipeline {
    agent any
    stages {
        stage("Compile & Build Binary") {
            steps {
                script {
                    // Find directories (for simplicity's sake, all directories)
                    def files = findFiles()
                    files.each { f ->
                        if (f.directory) {
                            projectsToBuild.add(f.name)
                        }
                    }

                    // Generate one parallel branch per project, allocating each
                    // branch to a node with the chosen label
                    projectsToBuild.each { p ->
                        parallelStages[p] = {
                            node(chosenAgent) {
                                // Check out the repository so the sources exist in this node's workspace
                                checkout scm
                                dir(p) {
                                    stage(p) {
                                        sh('make && make publish')
                                    }
                                }
                            }
                        }
                    }

                    // Run all of the generated stages in parallel
                    parallel parallelStages
                }
            }
        }
    }
}

The example above demonstrates how choosing an agent for the builds to run on can be dynamic.

The fact that each stage’s node can run on a different machine based on a variable passed to it, really makes the possibilities endless.
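For example, you could map each project to a different agent label so that heavy projects land on bigger machines. The labels below are hypothetical and would need to match labels configured in your Jenkins instance; this is just a sketch of how the loop from the previous example could pick a node per project:

// Hypothetical mapping from project name to agent label
def agentForProject = [
    'project-x': 'linux-large',
    'project-y': 'linux-small',
]

projectsToBuild.each { p ->
    parallelStages[p] = {
        // Fall back to the built-in (master) node if no label is mapped for the project
        node(agentForProject[p] ?: 'master') {
            checkout scm
            dir(p) {
                stage(p) {
                    sh('make && make publish')
                }
            }
        }
    }
}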

New Capabilities With Dynamic Parallelization

Now that we have introduced the logic to dynamically run stages in parallel, we can do all sorts of new things to tackle other challenges.

For example, let’s say we don’t want to build all of the projects every time a change is introduced, because one of them takes much more time than the others, so we want to avoid building it if possible.

All that’s required from us now is to make sure the projectsToBuild list contains only the projects that were modified during the work on the current branch.
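One way to do that is to diff the branch against its base and keep only the top-level directories that were touched. The sketch below assumes a 'main' base branch and that git is available on the agent; adjust the diff range to match your own branching model:

// Inside the script { } block, before generating the parallel stages
def changedFiles = sh(
    script: 'git diff --name-only origin/main...HEAD',
    returnStdout: true
).trim().split('\n')

projectsToBuild = changedFiles
    .collect { it.tokenize('/')[0] }              // top-level directory of each changed file
    .unique()
    .findAll { fileExists("${it}/Makefile") }     // keep only directories that are real projects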

In fact, we can now introduce any set of conditions and rules to dynamically build the list of projects we want, and based on it a set of stages that will run in parallel will be generated.

Jenkins Parallel Builds – The End Result

By introducing Groovy into the equation, we can now dynamically compose our pipelines, be much more flexible in how we build our projects, and make the process of building multiple projects much faster.

It’s important to note that Groovy can be used in Jenkins to solve a wide array of challenges, while parallelization is just one field that benefits from it.

What doesn’t dynamic parallelization solve?

By running multiple project builds in parallel, we make building all of the projects faster as a whole, but we don’t make each individual project build faster. Speeding up a single project’s build, specifically for C++ (but not only), depends on the ability to break the build itself into steps and parallelize them, which is not achieved with Jenkins parallel builds, nor with Jenkins distributed builds.

The reason is that whenever we parallelize stages in Jenkins, we can either run multiple stages in parallel on the same node or on multiple nodes (as demonstrated above), but we don’t break down the build command itself into multiple parallelized processes.

In the examples provided in this article, the build command used was ‘make’ to build the C++ projects.

If we want to speed up each ‘make’ command, we could increase the resources of the node it’s running on, or allocate the job to a less busy node, but we couldn’t break down the ‘make’ command itself using the ‘parallel’ directive.

In other words, if a project takes 45 minutes to build, it will still take 45 minutes to build when it runs in parallel to another project.

So while parallelizing various builds helps speed up the CI, speeding up each build is still a challenge that requires a different approach.

Here, tools such as Incredibuild may help parallelize a single build’s multiple stages.

As your project grows and your build time gets longer, fast development iterations depend on being able to build your project in a reasonable amount of time, so the build process itself needs to be sped up even further.

Accelerate your Jenkins CI/CD pipelines

To sum up, parallelizing the various stages of a Jenkins pipeline – with Jenkins parallel builds and Jenkins distributed builds – may really come in handy, but it won’t make each individual build faster.

For that, we’ll need the help of other tools such as Incredibuild.

These solutions can also be used together, so for example you could run multiple builds in parallel, and parallelize the compilation stages of every single build as well.

Frequently asked questions about Jenkins CI/CD

What is CI/CD and Jenkins?

CI/CD stands for continuous integration and continuous delivery (or continuous deployment). It is a methodology that allows teams to integrate, test, and release changes frequently and reliably, with more effective cross-team collaboration. Jenkins is a tool that helps teams manage their CI/CD pipelines and provides greater control over the process.

What are the stages of Jenkins CI/CD?

The four stages of a typical Jenkins CI/CD pipeline are checkout, build, test, and deploy. Checkout fetches the source code, build compiles it, test looks for bugs and potential errors, and deploy releases the new code into production.
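As a rough illustration, a minimal declarative pipeline covering those four stages could look like the sketch below; the shell commands are placeholders you would replace with your own build, test, and deploy steps:

pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }       // fetch the source code
        }
        stage('Build') {
            steps { sh 'make' }          // compile it
        }
        stage('Test') {
            steps { sh 'make test' }     // run the tests
        }
        stage('Deploy') {
            steps { sh 'make publish' }  // release the new code
        }
    }
}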

What is Jenkins used for?

Jenkins is a CI/CD tool used to manage and control several steps of the software release process, ranging from building to documentation and even testing.

Is Jenkins still relevant in 2024?

Absolutely. Jenkins is still one of the most popular tools for CI/CD pipelines on the market, and it remains in use by thousands of development teams across industries.
