Today’s release managers are in a constant race to ship builds. While this has always been an issue, it’s an even bigger one in these days of agile and CI/CD development. Tech giants such as Facebook, Amazon, Netflix, and Google release at an unimaginable pace – thousands of times a day. Amazon in particular is famous for pushing a new production deployment every 11.6 seconds.
Release managers face more frequent release cycles than ever before, while being measured on delivering quality releases on time. Under such pressure, creating an infrastructure that supports the ability to constantly develop, test, release, and deploy is essential. Building a well-defined infrastructure that supports this goal is an art release managers should master: it means laying down the right tools and processes to get the job done on time while maintaining the product’s quality.
In this infrastructure puzzle, reducing build time is an important piece that shouldn’t be overlooked.
Yup, build time is the thorn in release managers’ side. Why? The obvious reason is that slow builds eat up time, but there are other reasons why release managers dread them:
Reason #1: “Who. Broke. The. Build???”
In a utopian world, builds never break. In our world they do, and quite often. How often? The number varies between organizations, but Eitan Schichmanter, Senior DevOps Architect and Leader at eBay, estimates that “15 to 30 percent of builds on the release branch fail for various reasons”.
And guess who’s responsible for figuring out why the build broke and getting it fixed? That’s right: the release manager is also a kindergarten teacher playing hide-and-seek with the development team. Say you have a build with 20 commits. That’s 20 changes to comb through in search of that loose cannon. That’s a lot of time wasted, not to mention the headache…
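The hunt doesn’t have to be linear, though. Tools like `git bisect` binary-search the commit history, so 20 suspect commits need only about five build-and-test runs instead of twenty. Here’s a minimal sketch of that idea; the commit list and the `build_passes` callback are hypothetical stand-ins for your real history and CI job:

```python
def find_breaking_commit(commits, build_passes):
    """Binary-search an ordered commit list for the first commit whose
    build fails (the same idea behind `git bisect`). Assumes builds
    pass before the culprit and fail from it onward."""
    lo, hi = 0, len(commits) - 1
    checks = 0
    while lo < hi:
        mid = (lo + hi) // 2
        checks += 1
        if build_passes(commits[mid]):
            lo = mid + 1   # culprit is later in history
        else:
            hi = mid       # this commit (or an earlier one) broke it
    return commits[lo], checks

# Hypothetical history: 20 commits, with the build breaking at "c13".
commits = [f"c{i}" for i in range(20)]
culprit, checks = find_breaking_commit(commits, lambda c: int(c[1:]) < 13)
print(culprit, checks)
```

With fast builds you can afford those few extra runs; with slow builds even a logarithmic search hurts.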
So what do slow builds have to do with it? Well, if your builds are slow, you can’t really implement the coveted build-per-commit technique, meaning you accumulate a lot (a lot!) of commits before running a build. If your builds are fast, however, you can afford to run a build per commit and know exactly who broke the build at any given time. Actually, you don’t even need to know; the automation will fire the proper alert at the developer who broke the build, leaving you free to focus on optimizing your DevOps processes. Build-per-commit also, by definition, enables gated check-ins to maintain a clean build, which brings us to my next point…
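The gated check-in idea can be sketched in a few lines: each commit is built in isolation and only merged if its build passes, so the main branch stays green. This is a simplified illustration, not a real CI integration; `build_passes` stands in for your actual build job:

```python
def gated_checkin(main, pending_commits, build_passes):
    """Sketch of a gated check-in: build each pending commit against
    main and merge it only if the build passes, so main stays green."""
    rejected = []
    for commit in pending_commits:
        if build_passes(main + [commit]):
            main.append(commit)        # merge: main is still releasable
        else:
            rejected.append(commit)    # bounce it back to the author
    return main, rejected

# Hypothetical commits; the one named "bad" breaks the build.
main, rejected = gated_checkin(
    main=["c1", "c2"],
    pending_commits=["c3", "bad", "c4"],
    build_passes=lambda commits: "bad" not in commits,
)
print(main)      # the broken commit never lands on main
print(rejected)
```

The catch is obvious: this only works if each gate build finishes quickly, which is exactly why build speed matters.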
Reason #2: You dirty, dirty build!
Let’s face it, we like it clean. Having a clean build means having a build you can use. This is crucial when you’re pressed on time and have a release deadline. Imagine the following scenario (although it probably happened to you, so no imagining necessary):
Developer X fixed a bug that support would like to ship to an important customer, but because developer Y broke the main branch, the fix can’t be delivered until the breakage is resolved. You don’t have a clean build to release. QA can’t start testing the new version either, because it failed to build, so the process drags on.
With build-per-commit, however, you have the luxury of having a clean build at all times, just waiting to be released.
Reason #3: Slow builds cause release managers to compromise on quality
Everybody says they offer top-notch quality. But the truth is that nothing is perfect or flawless, and even aspiring to perfection can prove very time-consuming. At the end of the day, when pressed for time, most of us settle for ‘good enough’. In the software development biz, running fewer tests and code-analysis processes because they make the build longer means compromising on quality. Slow builds can cause teams to rely less and less on automated test suites: writing fewer tests, running tests less frequently, and narrowing the scope of executed tests. Cutting down on code analysis means your code won’t get flagged for quality issues such as complexity, dependencies, poor code metrics, and so on.
This issue is discussed in greater depth in this 2019 Guide to faster Continuous Integration builds. The bottom line: do you want to settle for ‘good enough’ (hopefully), or worse, risk your product turning out ‘bad’?
Reason #4: Slow builds might prevent you from growing
Have you ever asked yourself what your future builds will look like? If you haven’t, you should start asking now. It’s only reasonable to assume that your code will keep growing. You’ll also want to add open-source or commercial 3rd-party libraries and expand your test coverage – that’s part of growing. But if you’re already suffering from slow builds, it’s only natural to avoid anything that adds to build time. You’ll find yourself saying “no” to processes and tools that could make your product better, just because they prolong your builds. That’s bad practice, and eventually you’ll be left behind because of it.
Reason #5: In peak time, slow builds are a disaster waiting to happen
Even if you’re not bothered by slow builds on a daily basis (just for the sake of argument), can you honestly say that that’s the case during peak times – right before a version release or when you’re handling a hotfix?
Take the gaming industry, for example. From my experience, when the holidays come knocking, builds can’t take the heat, and release managers are left with what they describe as a disaster: not delivering.
O.K… So, how do I speed up slow builds?
Amazingly enough, reducing your build time is not as hard to achieve as you might imagine.
Sure, there are techniques to reduce build time that require quite a bit of effort, such as using precompiled headers (PCH), reducing dependencies, and so on.
However, there is also a way to reduce your build time dramatically (much more than the techniques above) without changing your code or purchasing additional hardware: implementing a distributed processing solution. A distributed processing solution runs time-consuming software development processes, such as builds, simultaneously across multiple machines on the local network. This parallelism effectively turns every machine into a multi-core virtual HPC device that taps the CPU power of machines the user already owns. Simply put, by harnessing the CPU power of other computers in your network, you can execute builds and other time-consuming tasks faster. A lot faster.
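To see why parallelism pays off, here’s a toy single-machine sketch: a worker pool mimics farming compilation units out to four networked machines. `compile_unit` is a hypothetical stand-in (it just sleeps) for real compile work, so the numbers illustrate the shape of the speedup, not real build times:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def compile_unit(name):
    # Stand-in for compiling one translation unit (~0.1 s each).
    time.sleep(0.1)
    return f"{name}.o"

units = [f"module_{i}" for i in range(8)]

# Serial: one unit after another, roughly 8 x 0.1 s.
start = time.time()
serial_objects = [compile_unit(u) for u in units]
serial = time.time() - start

# "Distributed": 4 workers play the role of 4 build machines.
start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_objects = list(pool.map(compile_unit, units))
parallel = time.time() - start

print(f"serial {serial:.2f}s, parallel {parallel:.2f}s")
```

Real distributed-build tools (distcc, Incredibuild, Bazel remote execution, and the like) apply the same principle across the network, with scheduling and caching on top.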
Walking the walk
It amazes me that we now have self-driving cars and 3D printers that print pizza (yuck!), yet plenty of release managers still struggle with slow builds. When it comes to rapid releases and CI cycles, they talk the talk but don’t always manage to walk the walk. And it’s not a long walk. All it takes is understanding that slow builds are a fundamental hazard to be avoided, and doing something about it.