Specializing vs. Scaling Infrastructure for a Faster Release Cycle and Testing: Why Not Both?

Joseph Sibony

It’s no secret that time is not the ally of the software development business. Quite the opposite: software companies rise and fall on their ability to ship valuable features quickly. An agile release cycle reduces time-to-market, keeps you moving faster than the competition, and enhances product quality. It also protects market share – which in turn demands an ever-faster release cycle.

At the same time, writing high-quality code is time-consuming. The adage “you can do it well or you can do it fast” still holds true. Low-quality products gain even less market share than late-to-market products – and a release cycle plagued by mishaps eventually demands expensive fixes.

To resolve the speed-quality paradox, DevOps stakeholders generally rely on either specializing (builds, workflows, or pipelines) or scaling infrastructure. What do I mean by each? In this post, I’ll drill down into how each applies to testing, and how I think a hybrid approach could better serve testing stakeholders and the pipeline as a whole.

What IS “Specializing” in Testing?

Testing refinement methodologies are a great example of what I call “specializing”.

To lower the overall testing burden in a shift-left scenario, it’s common to manually examine all the tests being run and identify which fail most frequently. Then we build a subset of tests, favoring this failure-intensive group, for developers to run before committing their code. The hope is that developers will catch test failures before committing, allowing them to “stay in the zone” instead of waiting for the CI/CD build to notify them of a failure. This method also greatly reduces the failure ratio of CI/CD builds, which means a shorter build queue and huge time savings for release managers trying to determine “who broke the build.”
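To make this concrete, here’s a minimal sketch of how such a subset might be selected, assuming your CI system can export per-test run and failure counts. The results.json file name and its schema are invented for illustration, not a real CI API:

```python
"""Sketch: pick the N most failure-prone tests for a pre-commit subset.

Assumes a CI history export at results.json shaped like
{"tests/test_api.py::test_login": {"runs": 120, "failures": 9}, ...}.
"""
import json

TOP_N = 50  # size of the pre-commit subset

with open("results.json") as f:
    history = json.load(f)

def failure_rate(stats):
    # Guard against tests with no recorded runs.
    return stats["failures"] / max(stats["runs"], 1)

# Rank every known test by historical failure rate; keep the worst N.
subset = sorted(history, key=lambda test: failure_rate(history[test]),
                reverse=True)[:TOP_N]

# Developers (or a pre-commit hook) can then run:
#   pytest $(cat precommit_tests.txt)
with open("precommit_tests.txt", "w") as f:
    f.write("\n".join(subset))
```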

The problem is that this specialization – favoring one group of tests over another – creates an artificial feedback loop. These tests may indeed catch 20% of the build failures you’re experiencing, but only those failures that fall within the scope of these particular tests.

And specialization is as labor-intensive as it is limited in efficacy. To choose the subset of tests, you have to review and understand all your tests, choose those that fail frequently, then continuously revise this set as your software evolves. Specialization breeds specialization – the cycle never ends and the overhead never shrinks. Neither does the compromise – because at the end of the day, with specialization alone you can only focus on a subset of tests. That leaves a lot untouched and untested.

What IS “Scaling Infrastructure” in Testing?

Another approach is what I call “scaling infrastructure”. This includes scaling to the cloud, hardware expansion, distributed systems, avoidance and caching technologies, automation, and more.

When we scale infrastructure there’s a less immediate need for specialization. Providing developers with hundreds of cores – whether via hardware upgrades/expansions or distributed computing – allows them to run full test suites prior to committing their code, instead of compromising on a specialized subset.

This, of course, enables them to detect more errors, get faster feedback, and maintain context. And it’s good for any type of testing – unit tests, integration tests, API tests, regression testing, soak testing, load testing, and more. This kind of testing automation supports more frequent release cycles, competitiveness, higher quality, greater productivity, and – the holy grail – faster time-to-market.
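As a rough sketch of the mechanics, the snippet below fans a pytest suite out across all local cores using only the standard library. In a real pipeline you’d more likely reach for a plugin like pytest-xdist (pytest -n auto) or a distributed execution platform; the tests/ layout here is an assumption:

```python
"""Sketch: run a full pytest suite across every local core."""
from concurrent.futures import ProcessPoolExecutor
import pathlib
import subprocess

def run_module(path: str) -> tuple[str, int]:
    # Run one test module per pytest process; -q keeps output terse.
    result = subprocess.run(["pytest", "-q", path])
    return path, result.returncode

if __name__ == "__main__":
    modules = [str(p) for p in pathlib.Path("tests").glob("test_*.py")]
    # ProcessPoolExecutor defaults to one worker per CPU core.
    with ProcessPoolExecutor() as pool:
        for path, code in pool.map(run_module, modules):
            print(f"{path}: {'OK' if code == 0 else 'FAILED'}")
```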

Scaling infrastructure is a strategic move. Yet every strategy is made up of tactics – and this is where the value of specialization comes into play. There is unquestionably something to be learned from specialization, even once scaling infrastructure and testing automation are viable options.

The Best of Both Worlds: A Hybrid Approach for a Faster Release Cycle

When we view development optimization from a holistic perspective, it becomes clear that a hybrid approach makes sense. For testing, scaling infrastructure is the more expedient path to testing more and testing better – quicker than refining testing methodologies alone. It is a strategic shift toward greater efficiency in testing, and toward a future where more testing, and more types of tests, become standard.

Scaling infrastructure first enables testing stakeholders to focus their efforts more effectively afterward. It’s also a one-time expense, and it teaches us what scales well and what doesn’t. Then, once we’ve scaled up the infrastructure, we can use what we’ve achieved and learned to better target specialization efforts.

For example, once you’ve scaled infrastructure, you may find that a subset of your tests runs sequentially because of inter-test dependencies and won’t speed up no matter how much compute power you add. For that subset, you’d choose to specialize – removing the unnecessary dependencies so those tests, too, can benefit from the scaling infrastructure you’ve deployed.
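Here’s an invented pytest example of that kind of specialization: two tests that share module-level state (and therefore only pass when run sequentially, in order) are refactored so each builds its own state and can be scheduled on any core:

```python
"""Sketch: removing an ordering dependency so tests can run in parallel.

The user-record example is invented; the point is the shape of the refactor.
"""
import pytest

# Before: two tests mutate one module-level dict, so they only pass when
# run sequentially and in this exact order -- extra cores can't help them.
#
#   shared_db = {}
#
#   def test_create_user():
#       shared_db["alice"] = {"active": True}
#
#   def test_deactivate_user():
#       shared_db["alice"]["active"] = False   # depends on the test above

# After: each test builds its own state through a fixture, so a parallel
# runner (e.g. a pytest-xdist worker) can schedule them in any order.
@pytest.fixture
def db():
    return {"alice": {"active": True}}

def test_create_user(db):
    assert db["alice"]["active"]

def test_deactivate_user(db):
    db["alice"]["active"] = False
    assert not db["alice"]["active"]
```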

The Bottom Line

When shipping valuable features F-A-S-T is make-or-break, development teams need to look for new ways to shorten the release cycle and speed time-to-market. Sometimes the new ways are a mixture of known ways – solutions are not always black and white. A hybrid approach – the right mix of specialization and scaling infrastructure – is a great example of strategic innovation combined with tactical sensibility.
