Optimizing Automated Build Performance

Joseph Sibony | reading time: 5 minutes | May 18, 2021

Few things are more frustrating than staring at a long-running step in your automated build process. With all of the compute resources available to us today, there is really no reason to put up with poor build performance. A closer look at build optimization throughout your automated build pipelines may surface small changes that add up to big time savings.

Scale Up or Scale Out?

That is the usual question we ask ourselves when trying to increase our automated build performance. Do you scale up or scale out to increase the available resources for a build process? Furthermore, is the cost for those additional resources worth the time saved over the life of the build agent? These are all valid considerations.
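
To make the cost question concrete, it helps to run a quick break-even calculation before committing to bigger hardware. The sketch below uses entirely hypothetical numbers (agent cost per hour, minutes saved per build, builds per day, developer cost rate); swap in your own figures.

```python
# Back-of-the-envelope check: does a bigger (or additional) build agent pay for itself?
# All numbers below are hypothetical placeholders; substitute your own.

EXTRA_AGENT_COST_PER_HOUR = 0.50   # added cloud cost of the larger/extra agent (USD)
MINUTES_SAVED_PER_BUILD = 6        # observed reduction in build time
BUILDS_PER_DAY = 40                # builds that benefit from the speed-up
DEVELOPER_COST_PER_HOUR = 75.0     # loaded cost of the people waiting on builds (USD)
HOURS_AGENT_RUNS_PER_DAY = 10      # how long the agent is actually provisioned

daily_agent_cost = EXTRA_AGENT_COST_PER_HOUR * HOURS_AGENT_RUNS_PER_DAY
daily_time_saved_hours = (MINUTES_SAVED_PER_BUILD * BUILDS_PER_DAY) / 60
daily_value_of_time_saved = daily_time_saved_hours * DEVELOPER_COST_PER_HOUR

print(f"Daily cost of extra capacity: ${daily_agent_cost:.2f}")
print(f"Daily value of time saved:    ${daily_value_of_time_saved:.2f}")
print("Worth it" if daily_value_of_time_saved > daily_agent_cost else "Not worth it")
```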

By scaling up, you may be able to shorten build times significantly thanks to the additional CPU at your disposal. Scaling out may help you run more builds in parallel, unblocking an automated build pipeline clogged with slow compile steps. It pays to think outside the box when determining ways to optimize build performance.

Some engineers take advantage of build agents that are oversized for the tasks they run by starting more than one instance of the agent service on the same machine. Besides increasing the number of available agents, this approach guarantees that each agent has identical build capabilities.
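
A minimal sketch of that idea, assuming a hypothetical `run-agent` executable with a `--name` argument (your CI vendor's agent has its own registration and configuration steps): divide the machine's cores among several agent processes so one oversized box serves as several identical agents.

```python
import os
import subprocess

# Hypothetical agent launcher: divide an oversized machine's cores among
# several agent instances so it can serve multiple builds at once.
# "./run-agent" and its --name flag are placeholders, not a real CLI.

CORES_PER_AGENT = 8
total_cores = os.cpu_count() or CORES_PER_AGENT
instance_count = max(1, total_cores // CORES_PER_AGENT)

agents = []
for i in range(instance_count):
    proc = subprocess.Popen(["./run-agent", "--name", f"agent-{i + 1:02d}"])
    agents.append(proc)
    print(f"started agent-{i + 1:02d} (pid {proc.pid})")

# Wait for the agent processes (they normally run until the service is stopped).
for proc in agents:
    proc.wait()
```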

Be More Independent

Restoring dependencies can often stall a build step. This is especially true when the dependency is downloaded from a third party. There are times when a build agent simply cannot resolve these components: a data center outage, network congestion, or a security constraint may block access at the moment it is needed.

Take NuGet packages, for instance. Many of today’s applications depend on components that are retrieved during a NuGet restore. Rather than reaching out externally on every run, build agents can cache those packages locally. Doing so decreases the likelihood of a build failing because the repository is unreachable, and it shortens the overall build because the packages are already available on disk.
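
A minimal sketch of the caching idea, assuming a hypothetical internal mirror URL and package list: keep a folder of .nupkg files on the agent, fill any gaps before the build starts, and point the restore at that folder so the build never depends on a third-party feed being reachable.

```python
import pathlib
import shutil
import subprocess
import urllib.request

# Conceptual sketch: keep a local folder of .nupkg files on the build agent and
# fill any gaps from an internal mirror before restoring, so the restore step
# never has to reach a third-party feed mid-build.
# MIRROR_URL and the package list are hypothetical placeholders.

CACHE_DIR = pathlib.Path("/var/cache/nuget-local")
MIRROR_URL = "https://packages.example.internal/nuget"   # hypothetical internal mirror
REQUIRED = [("newtonsoft.json", "13.0.3"), ("serilog", "3.1.1")]

CACHE_DIR.mkdir(parents=True, exist_ok=True)

for package_id, version in REQUIRED:
    nupkg = CACHE_DIR / f"{package_id}.{version}.nupkg"
    if nupkg.exists():
        continue  # already cached locally; nothing to download
    with urllib.request.urlopen(f"{MIRROR_URL}/{package_id}/{version}") as resp, \
            open(nupkg, "wb") as out:
        shutil.copyfileobj(resp, out)

# Restore against the local folder so the build does not depend on the public feed.
subprocess.run(["dotnet", "restore", "--source", str(CACHE_DIR)], check=True)
```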

Be More Dynamic

There are other options that help with scale while also providing build agents with pre-installed tools. A dynamic pool of build servers lets you spin environments up for compilation and release and tear them down afterwards, so those resources do not consume compute time while idle. For a build server running in the cloud, that can translate into thousands of dollars returned to the cloud budget.

Dynamic build server creation can be accomplished in a number of ways. In Azure, ARM templates help ensure the right OS and sizing are set programmatically. Similar capabilities exist for other cloud services, as well as for VMware, through machine image templates. Teams in a multi-cloud automated build situation may benefit from looking into Terraform for automating dynamic infrastructure.
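
Whatever the tooling, the lifecycle is the same: provision, build, tear down. The sketch below uses hypothetical `provision_build_server()` and `destroy_build_server()` placeholders standing in for whatever your ARM template, machine image, or Terraform workflow actually invokes; the point is that a context manager guarantees the server is destroyed even when the build fails, so nothing keeps billing while idle.

```python
import contextlib
import subprocess

# Placeholders for real provisioning calls (ARM deployment, Terraform apply,
# a cloud SDK, etc.). They are hypothetical and only illustrate the lifecycle.
def provision_build_server(size: str) -> str:
    print(f"provisioning a {size} build server from the machine image...")
    return "build-agent-01"

def destroy_build_server(name: str) -> None:
    print(f"destroying {name} so it stops consuming compute budget")

@contextlib.contextmanager
def ephemeral_build_server(size: str = "Standard_D8s_v5"):
    name = provision_build_server(size)
    try:
        yield name
    finally:
        destroy_build_server(name)   # runs even if the build step raises

if __name__ == "__main__":
    with ephemeral_build_server() as agent:
        # Run the build on the freshly provisioned agent (command is illustrative).
        subprocess.run(["echo", f"running build on {agent}"], check=True)
```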

Finally, using Docker containers along with a Docker registry allows for even more efficient use of existing resources. Think of these containers as small, individual build servers with everything needed to complete their tasks. Consider an automated build task that restores npm packages: rather than retrieving them on every build, they can be cached within the image.
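
One way to exploit that baked-in cache at build time is to compare the lockfile that shipped inside the image with the one in the checked-out source, and only rerun the install when they differ. The sketch below assumes the image stores a hash of the package-lock.json it was built with at a hypothetical path (/opt/build-cache/package-lock.sha256); adjust to your own layout.

```python
import hashlib
import pathlib
import subprocess

# If the dependencies baked into the image were installed from the same lockfile
# as the source being built, skip the npm install entirely.
# The cache-marker path is a hypothetical convention for this sketch.

LOCKFILE = pathlib.Path("package-lock.json")
CACHED_HASH_FILE = pathlib.Path("/opt/build-cache/package-lock.sha256")

current_hash = hashlib.sha256(LOCKFILE.read_bytes()).hexdigest()
cached_hash = CACHED_HASH_FILE.read_text().strip() if CACHED_HASH_FILE.exists() else ""

if current_hash == cached_hash:
    print("lockfile unchanged; reusing node_modules baked into the image")
else:
    print("lockfile changed; running a clean install")
    subprocess.run(["npm", "ci"], check=True)
```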

Idle CPUs Are the Agents’ Playground

When you consider all of the pre-existing infrastructure available in a business environment, your options expand. Workstations that would typically sit idle can be called into action to help optimize automated build performance, and as long as the overhead stays low, lightly used servers can be dual-purposed as additional agents.

The principle behind repurposing hardware may not be new, but how we apply it as automation engineers is undoubtedly cutting-edge. By harvesting all of the idle cores available on-premises and in the cloud, workloads can be distributed in a way that ensures a “digital team effort.” Each available node contributes additional CPU to builds that contain multiple concurrent processes, meaning every node can call on the resources of the others to run workloads on hundreds of cores. In essence, you create a virtual supercomputer for your automated builds.
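
A heavily simplified sketch of the idea: treat each translation unit as an independent job and farm the jobs out to whatever cores you have. Here a local process pool stands in for the remote agents a real distributed system would use, and the source list and compiler flags are illustrative only.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Simplified illustration: compile independent translation units in parallel.
# A real distributed build farms these jobs out to idle cores on other machines;
# a local process pool stands in for those remote agents here.
# The source list and compiler flags are illustrative.

SOURCES = ["engine.cpp", "renderer.cpp", "physics.cpp", "audio.cpp"]

def compile_one(source: str) -> str:
    obj = source.replace(".cpp", ".o")
    subprocess.run(["g++", "-c", source, "-o", obj], check=True)
    return obj

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:   # defaults to one worker per core
        objects = list(pool.map(compile_one, SOURCES))
    print("compiled:", ", ".join(objects))
```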

This method of distributing tasks greatly increases efficiency, but it can be extremely cumbersome to set up and maintain. Keeping track of automated build capacity is one part knowledge and one part guesswork, and products that orchestrate this complicated process make those decisions for you.

This is where a system like Incredibuild’s Virtualized Distributed Processing™ shows its true value. Along with accelerating compilations, testing cycles, and other time-consuming CI activities, it ensures that security is maintained throughout the process as if everything were running locally. It is delivered through a lightweight agent that supports existing and future build requirements, all without changing build scripts, documented processes, or your existing toolchain. Read more here.

Ninja Theory Gives Way to Unreal Facts

An impressive example of how distributed builds can save valuable development time by drastically optimizing automated build performance comes from the studio Ninja Theory. This gaming studio has produced some of the industry’s most critically acclaimed games using the Unreal engine, a choice that carries a drawback: inherently long build times with Unreal Engine 4.

Taking this into account, something needed to be done to support the team of 30 developers so they could spend more time creating and less time compiling. On top of the build servers already in their environment, the distributed approach of sharing the load truly paid off.

Their automated build pipelines went from 56 minutes on a per-machine basis to roughly 8% of that time when distributed, a staggering decrease of around 90% achieved just by using existing resources more efficiently. You can read more here about how distributed processing helped Ninja Theory slash their build times.

With promising statistics like that, it is easy to see why tapping into this technology will become more prevalent in the future. More companies will recognize the unused resources at their disposal and put them to work, and developers will appreciate the extra time to innovate instead of watching builds churn.

Read more about this technology to see how it can have a staggering effect on optimizing automated build performance for your team.
