How We Made CI 85% Faster – A Dogfooding Story

David Mark, VP Platform Engineering and Innovation

Reading time: 4 minutes

Golden Principle: Fast CI Feedback Loops, Happy Developers

At Incredibuild, we build the tools that accelerate software development, so we hold ourselves to the highest standard. This is the principle of dogfooding – using our own tools on our most demanding projects.

Our engineering team runs intensive CI pipelines for our core products, including the Linux Agents and the Build Cache. Each pipeline involves C++ compilation and static analysis with Clang-Tidy. These critical jobs run hundreds of times a day.

We had a clear goal: drastically improve our internal Developer Experience (DX) by making CI pipelines faster and cheaper.

The Challenge: Slow Linting, High Costs

Our CI system was hampered by long builds and high costs. The baseline used dedicated, persistent CI runners on AWS with a local caching strategy that required the runners to stay active 24×7 just to keep the cache warm, forcing us to pay a fixed, high cost for idle compute.

Furthermore, our pipeline involved two major parallel jobs: the C++ compilation (already accelerated with our Incredibuild for C++) and the deep Clang-Tidy linting. Even running concurrently, Clang-Tidy was the critical path, with the overall stage duration peaking at a painful 24 minutes per run and severely blocking developer velocity.

The Innovation: Build Cache for Clang-Tidy and Ephemeral Compute

Rather than just reconfiguring our pipeline, we used our own technology to solve the problem. Our analysis showed that while the C++ compilation was benefiting from our existing distributed tools and caching mechanism, the intensive Clang-Tidy linting was the critical bottleneck, even when running in parallel.

This realization drove us to invest in a new capability, which we immediately tested on ourselves:

1. Customizing Our Build Cache to Support Clang-Tidy

Clang-Tidy checks every single file, even unchanged ones, leading to excessive build times. Our Build Cache was designed to be highly extensible from the beginning, making it easy to adapt to new use cases. We decided to leverage this foundation to build direct support for Clang-Tidy.

  • The Breakthrough: By enabling the Build Cache for Clang-Tidy, we could cache the linting results themselves. This is all done automatically using our instrumentation technology: if inputs like source files, command arguments, and environment variables haven’t changed, the expensive linting step is entirely skipped, and the task’s artifact is instantly retrieved from the cache instead.
  • Dogfooding Success: We tested this feature rigorously on our hundreds of daily CI runs, proving its stability and performance before releasing it to the public.
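Conceptually, the input-hash skip described above works like the following sketch. This is an illustration only: the function names, the in-memory cache, and the direct `clang-tidy` invocation are assumptions for clarity, not Incredibuild's actual instrumentation technology.

```python
import hashlib
import json
import subprocess

def lint_cache_key(source_path, args, env_vars):
    """Build a cache key from everything that can affect the lint result:
    the file's contents, the clang-tidy command line, and relevant
    environment variables."""
    h = hashlib.sha256()
    with open(source_path, "rb") as f:
        h.update(f.read())
    h.update(json.dumps(list(args), sort_keys=True).encode())
    h.update(json.dumps(dict(env_vars), sort_keys=True).encode())
    return h.hexdigest()

def run_lint_cached(source_path, args, env_vars, cache):
    """Skip clang-tidy entirely when no input has changed; otherwise run it
    and store the output artifact under the input hash."""
    key = lint_cache_key(source_path, args, env_vars)
    if key in cache:
        return cache[key]  # cache hit: the lint step is skipped entirely
    result = subprocess.run(
        ["clang-tidy", source_path, *args],
        capture_output=True, text=True,
    )
    cache[key] = result.stdout  # cache the lint output as the artifact
    return cache[key]
```

Because the key covers file contents, arguments, and environment, any change to those inputs produces a different hash and forces a fresh lint run, while untouched files hit the cache.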

2. Migrating to Remote Shared Caching

With the Clang-Tidy bottleneck solved by our new Build Cache feature, we could finally tackle the cost of the underlying infrastructure. We flipped our strategy from a wasteful model of per-runner local caches to a lean, centralized one:

  • Zero-Waste Compute: We scrapped the fleet of expensive 24×7 dedicated runners.
  • The New Cache Hub: We consolidated cache availability onto a remote shared cache with only 100 GB of EBS storage. This acts as the central, persistent remote cache service.
  • Ephemeral Agents: All our actual build work is now executed by cheap and ephemeral EC2 instances. These agents spin up, connect to the centralized remote cache, run the accelerated C++ and Clang-Tidy jobs, and are immediately torn down.
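The agent lifecycle in the last bullet can be sketched as a simple spin-up / run / tear-down wrapper. This is a minimal illustration with injected callables, not our actual orchestration code:

```python
def run_ephemeral_job(spin_up, run_jobs, tear_down):
    """Lifecycle of one ephemeral CI agent: create the instance, run the
    cached C++ and Clang-Tidy jobs against the shared remote cache, and
    always tear the instance down so no idle compute is left running."""
    agent = spin_up()           # e.g. launch a cheap, short-lived EC2 instance
    try:
        return run_jobs(agent)  # jobs connect to the centralized remote cache
    finally:
        tear_down(agent)        # terminate immediately, even if a job failed
```

In production the three callables would wrap the cloud provider's APIs (for us, EC2 launch and terminate operations); the `finally` guarantees the fleet never accumulates idle runners the way the old 24×7 setup did.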

The Results: Fast Builds, Low Costs and Happy Developers

This infrastructure shift delivered massive wins for both our budget and our engineering team’s satisfaction.

1. CI Pipeline Time Slashed by 85%

The most dramatic change for developers is the speed of feedback. We cut the maximum duration of our critical CI stages from 24 minutes to 4, translating the time saved directly into development velocity.

2. Total Cost Reduction: ~80% Overall

Our cost savings come from two distinct sources:

  • Infrastructure Savings (85% reduction): Switching from costly 24×7 runners to ephemeral agents backed by a centralized remote cache.
  • Time Savings (80% reduction): Our Build Cache for Clang-Tidy eliminates 20 of 24 minutes on hundreds of daily jobs, freeing developers to focus on engineering tasks rather than waiting for CI.
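A back-of-the-envelope check of the time savings. The 300 runs/day figure is an assumption chosen for illustration; the post only says the jobs run hundreds of times a day:

```python
baseline_min = 24    # critical stage duration before (minutes)
after_min = 4        # duration after Build Cache for Clang-Tidy
runs_per_day = 300   # assumed; the actual figure is "hundreds of daily jobs"

saved_per_run = baseline_min - after_min                  # minutes saved per run
hours_saved_per_day = saved_per_run * runs_per_day / 60   # machine-hours per day

print(saved_per_run, hours_saved_per_day)  # 20 100.0
```

At that assumed volume, shaving 20 minutes per run frees roughly 100 machine-hours of CI capacity every day.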

The Takeaway

We are committed to giving our engineers back their most precious commodity: time. By solving our own hard problems with our own technology, we keep our development process as efficient, and our developers as happy, as possible.


Want to slash your CI time and cost like we did? Our Incredibuild Build Cache is highly extensible – just like our Clang-Tidy solution, it can handle Linux, Windows, and AOSP build acceleration, plus tasks like compression, encoding, code signing, packaging, and more. You can try the same technology we use internally!


Try Incredibuild Build Cache: https://www.incredibuild.com/product/build-cache

David Mark,
VP Platform Engineering and Innovation at Incredibuild