Speed up Your Builds by Parallelizing

Ori Hoch / May 24 2021

In a previous blog post, Renana Dar wrote about why slow builds are a release manager’s worst nightmare. In this post, I will dive into one possible solution that was mentioned there: parallelization.

I will review and show usage examples for parallelization solutions that are independent of your CI system. These can provide quick wins that are easy to implement regardless of your build setup. I will focus on common tools for the Linux environment that are most likely already available in your build environment: Bash, Python, and GNU Parallel.

These solutions are relatively easy to implement and require minimal dependencies but have their limitations which I will highlight below. In the next post, I will show how to overcome these limitations using features available in some of the popular CI systems.

Why Do We Want to Parallelize?

The need to utilize a machine’s full power, e.g. all of its cores, became important as machines grew more and more powerful. In recent years there has been a move towards running several instances (e.g. containers) on the same machine, so utilizing the resources means using those instances efficiently, when relevant (though not necessarily in builds). This is crucial when there is a bottleneck task (e.g. a build that blocks testing) that prevents other tasks from utilizing the resources until it is finished. In this case, we should make better use of all resources to finish the delaying task, and if possible do it by dividing the task into parallel sub-tasks.

What Do We Want to Parallelize?

Before we start coding, let’s understand what it is that we want from the parallelization solutions. The simplest use case is a Makefile comprising several tasks that can be parallelized, for example, each task produces a different binary. More advanced requirements might include dependencies between tasks, or a variable number of tasks depending on the output of previous tasks. All of these use cases are supported by the following solutions with varying degrees of complexity, ease of use, and maintainability.

Quick and dirty parallelization using Bash

Most build processes start and end in Bash, so that’s the natural place to start parallelizing. The simplest way to parallelize in Bash is the “&” suffix: append & after your command and it will run in the background, allowing you to run other tasks. Once tasks are running in the background you also need to wait for them to complete, which is done by the aptly named command “wait”.

The following example script runs 3 make tasks in parallel and waits for all of them to complete:

make task_a &
make task_b &
make task_c &
wait
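One caveat: a bare `wait` with no arguments returns success even if some of the background jobs failed. A minimal sketch of propagating failures, collecting each job's PID and using `true`/`false` as stand-ins for real `make` invocations:

```shell
# Stand-ins for real `make task_x` commands: `true` succeeds, `false` fails.
pids=()
for cmd in true false true; do
  $cmd &
  pids+=($!)
done

# `wait PID` returns that job's exit status, so failures can be counted.
failed=0
for pid in "${pids[@]}"; do
  wait "$pid" || failed=$((failed + 1))
done
echo "failed tasks: $failed"
```

In a real build script you would replace the stand-in commands with your `make` tasks and exit with a non-zero code when `failed` is greater than zero.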

This works fine for simple scenarios where you know all your tasks in advance, you can also use a loop to go over a list of files:

for FILE in *; do
  do_something "$FILE" &
done
wait

Or, if you have a list of tasks in a file, one task per line:

while read -r TASK; do
  $TASK &
done < "$FILE"
wait
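Note that these loops start every task at once, which can overload the machine when the list is long. If you need to cap concurrency without a heavier tool, `xargs -P` can run a bounded number of jobs at a time. A sketch, using `echo` as a stand-in for the real command and a hypothetical task list file:

```shell
# Create a hypothetical task list, one task name per line.
printf '%s\n' task_a task_b task_c task_d > /tmp/make_tasks.txt

# Run at most 2 tasks concurrently; replace `echo` with e.g. `make` in a real build.
xargs -P 2 -I {} echo "running: {}" < /tmp/make_tasks.txt
```

Unlike the plain `&` loop, `xargs -P` will not start a fifth job until one of the first two has finished.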

While the above examples can be combined into more complex scripts, for more advanced use cases I recommend the following solutions.

Advanced Parallelization Using Python

Python is a common language for DevOps tasks and is usually available out of the box in your CI environment. The Python multiprocessing module provides the required capabilities. It is a bit harder to use than the previous Bash examples but provides much greater flexibility. The multiprocessing Pool class distributes work across a pool of worker processes (by default, one per available CPU core).

The following example can run any number of tasks defined in a list over all available CPU cores:

from multiprocessing import Pool
from subprocess import check_call

scripts_to_run = [
  "make task_a",
  "make task_b",
  "make task_c",
  # add more tasks here
]

def run_script(script):
  check_call(script, shell=True)

if __name__ == "__main__":
  with Pool() as pool:
    pool.map(run_script, scripts_to_run)

In the previous example, we used the subprocess.check_call function to execute each script. It raises an exception if any script fails, which in turn causes the entire script to fail. This is suitable for many build scripts where you want to ensure all the tasks complete successfully.

The following example is slightly more complex: it reports the status of each task and lets you conditionally continue even if some tasks failed:

from multiprocessing import Pool
from subprocess import call

scripts_to_run = {
  "a": "make task_a",
  "b": "make task_b",
  "c": "make task_c",
  # add more tasks here
}

def run_script(name_script):
  name, script = name_script
  return name, call(script, shell=True) == 0

if __name__ == "__main__":
  with Pool() as pool:
    results = dict(pool.map(run_script, scripts_to_run.items()))
  print(results)

The results variable will contain a dict with the key being the script name (“a” / “b” / “c”) and the value being True / False – whether the task succeeded or not.

The above example can easily be extended to run different types of tasks and allow for more advanced scenarios. Consult the very comprehensive Python multiprocessing documentation for more available features.

Advanced Parallelization Using GNU Parallel

While the previous solutions can handle almost any parallelization workload, they require quite a bit of code. GNU Parallel is a feature-rich command-line program that brings the full power of parallelization to the command line with short and easy-to-read options. Because it has so many options, I suggest reading the GNU Parallel tutorial to get to know its full capabilities.

GNU Parallel can be installed using your package manager (e.g. `sudo apt-get install parallel` on Ubuntu). As with the Python Pool, GNU Parallel distributes the work across all available CPU cores by default. Let’s see some examples.

The following example is the most basic use-case of running 3 tasks in parallel across all available CPU cores and waiting for them to complete:

parallel ::: "make task_a" "make task_b" "make task_c"

We can slightly improve on the above example by using replacement strings. This produces the same result as the previous example but with slightly shorter code and less repetition:

parallel make {} ::: "task_a" "task_b" "task_c"

Consult the GNU Parallel tutorial replacement strings section for more advanced possibilities.

GNU Parallel can also read its input from standard input, allowing it to work on a variable number of tasks.

Do some parallel work on a list of files:

ls | parallel do_something_with_file {}

Do some work on a list of make tasks defined in a file (one task per line):

cat make_tasks.txt | parallel make {}

The exit code reports how many tasks failed. For example, the following command’s exit code will be 2 because two of the commands exit with a failure code:

parallel eval {} ::: false true false

GNU Parallel can also write the output of each task to files, but this is out of scope for this post; see the GNU Parallel tutorial’s “Controlling the output” section.

Another very useful feature in GNU Parallel is the ability to run tasks on remote machines using SSH. Assuming you have SSH hosts configured in your .ssh/config file, so that you can SSH by simply running `ssh HOSTNAME`, you can use the following example to distribute work across multiple servers:

parallel -S HOSTNAME1,HOSTNAME2,HOSTNAME3 make ::: "task_a" "task_b" "task_c"

It’s your responsibility to copy the relevant sources to all the servers and to aggregate the results from each server (usually using scp).

Limitations of the Reviewed Solutions

When you start having more complex use-cases, you will quickly find the limits of the above solutions. While any limitation can be overcome, it will usually require some more code to develop or adding additional dependencies.

Following is a list of common limitations:

  • Error reporting and alerting – you want to know when and why each task failed, and to be alerted about it accordingly.
  • Aggregation of tasks input / output – if the tasks have complex input / output, you will need to write code that aggregates or parses it.
  • Using SSH for running on remote machines requires some setup and is limited to hostnames which are known in advance. You may want automated hostname discovery which distributes workloads across any number of remote machines.

If you think you may encounter these limitations you should consider using the parallelization capabilities of your CI system. I will review some popular CI systems in the next post.

It should be noted that the capabilities of a CI system (such as Jenkins or Bamboo) do solve some of the limitations above. However, while some workloads are easy to parallelize, a C++ build is more complicated. Usually, a CI pipeline that sees a dependency between project B and project A will run the compilation of A, followed by its link, and then the compilation of B followed by its link. Although this is correct in terms of dependencies, it is not optimal in terms of utilizing computing power: in most cases only the link of B depends on the link of A, so the compilation steps of A and B can be executed independently and therefore in parallel. Optimizing such inefficiencies is not easily done by a CI/CD system, which does not deal with the business logic of the build execution. For that you need something that integrates tightly with C++ build tools, such as Incredibuild.

Be aware that when you apply any parallelization technique, the bottleneck can suddenly shift: thousands of tasks may become available to execute in parallel (especially in scenarios such as C++ compilation and testing). Once you reach these numbers, it is worth considering breaking out of the boundaries of the local host using distributed computing solutions that can harness idle CPUs across your on-prem network or the public cloud, effectively turning your hosts into a super-computer with hundreds or thousands of cores.

Summary

Parallelization is important nowadays in almost any environment, to keep up with computers’ multi-core architecture and achieve the anticipated performance gains. In this post we reviewed a few basic approaches to parallelism. As with many technical challenges, parallelism can be easy for simple tasks and very complicated for complex, tightly coupled ones. Two important points were raised: (a) parallelizing C++ builds raises its own challenges; and (b) parallelism within the boundaries of your own machine is limited, and in many cases you would benefit greatly from extending it to your on-prem network or the public cloud.

In our next post, we will discuss Parallel CI – parallelism in CI systems.


Ori Hoch

Ori is a DevOps consultant with over 15 years of experience in a variety of technologies, on projects ranging from small start-ups to larger companies. Ori specializes in helping teams implement DevOps methodologies, CI/CD, and automation systems, as well as in Kubernetes and cloud-native systems. Ori is a long-time activist and contributor to open data and open source projects; check out his GitHub profile: https://github.com/OriHoch