Here’s a pretty cold take: we’re running low on chips, and it’s a big problem for a lot of reasons. It’s not a new issue, but it has become a defining trend across most tech sectors – after all, semiconductors form the backbone of our technology infrastructure. And as demand for tech-based products – everything from computers to video game consoles to embedded systems – keeps rising, a shortage of chips means that much of that demand will go unmet for at least another year, according to some estimates.
That’s bad enough on the consumer side, where manufacturers and tech companies have been scrambling to match supply to demand. The report linked above notes that the COVID pandemic pushed PC sales up by 50% year-over-year in 2021, a demand trend predicted to persist at least through the end of 2022.
For consumers, this means shortages of some of the goods they want. But for organizations that need to build data centers and compute farms – or that simply need more processing power in general – it means finding other ways to get the same results. The question is, how do you do it?
The chips are down
Let’s drop another unsurprising fact: codebases today are pretty large. Like, hundreds of millions of lines large. They’re also growing quickly: a recent survey found that codebases, in general, are expected to grow by almost 19% in 2022 alone. This might not seem like a big issue for consumers – we don’t usually see the backend of most apps. But when it comes to languages like C++, massive codebases mean massive build times, and a critical need for computing power and available processing resources.
Why does this matter to our conversation about semiconductor shortages? Well, for a few reasons. Simply in terms of volume, there’s no quick end to the scarcity in sight. According to a US Department of Commerce survey, at the worst point of the shortage in 2021, most semiconductor producers had only five days’ inventory (down from 40 days in 2019). Although many analysts have an optimistic view of an end to the shortage (as evidenced by this Gartner report), we’re still not quite out of the woods. New shocks to the supply chain are still extending lead times for semiconductors across all types.
The shortage is being felt most severely in a few industries – automotive and medical devices, for example, have taken severe hits from the lack of chips and microcontrollers – but the scarcity of computing chips is also a real problem for developers and teams that manage large codebases, whether on-prem or on hybrid models that use cloud bursting or other cloud setups. It’s no secret that large builds require significant processing power, and most organizations today have massive codebases that need a lot of computing capability simply to compile for testing.
In the past, a standard solution would have been to simply get “better” computers with more processing cores, or to add new servers to a local data center. That’s not really an affordable option these days. It’s true that semiconductor revenues are up so far this year, but the reason is clear – and it’s bad news for devs. According to Gartner, the growth is largely driven by higher average selling prices.
Let’s say you have a hugely popular game that you’re constantly updating and maintaining. This means you’ll always need to run new builds (even if they’re incremental) and compiling this could take hours. So what, you think. We’ve got money to spend, we can buy ourselves tons of shiny new servers and computers to handle these processing needs. Well, you could, except they’re currently much pricier than you thought – because there’s a semiconductor shortage.
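To see why hours-long builds translate into real hardware demand, here’s a back-of-the-envelope sketch. Every number in it is hypothetical, picked purely for illustration – your team’s build count, build length, and core count will differ:

```python
# Back-of-the-envelope estimate of daily build compute demand.
# All numbers here are hypothetical, for illustration only.

builds_per_day = 8      # CI builds a busy team might trigger daily (assumed)
hours_per_build = 2.5   # full build time on current hardware (assumed)
core_count = 32         # cores on the current build server (assumed)

core_hours_per_day = builds_per_day * hours_per_build * core_count
print(f"Compute demand: {core_hours_per_day:.0f} core-hours per day")
# Eight builds at 2.5 hours each on 32 cores is 640 core-hours a day -
# demand that, mid-shortage, gets very expensive to satisfy with new servers.
```

Multiply that by every team in the organization, and it’s easy to see why the “just buy more servers” reflex runs straight into the shortage.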
Not to worry, you say resourcefully, we’ll just migrate to the cloud and avoid the need for hardware altogether. Sure, except cloud costs can pile up pretty quickly if you’re just spinning up more and more compute capacity with no rhyme or reason. So, what is a dev team to do with such a massive codebase and a serious need to compile it and run multiple builds per day?
The answer – if we may indulge in a cliché – is to “work smarter, not harder”. But let’s unpack what that means a little. Instead of simply paying more for more hardware, it’s critical to find ways around a scarcity that only drives prices higher and might not end any time soon. In other words, it’s time to get more out of the resources you do have.
Bursting to the cloud, for example, or simply migrating your builds to the cloud, could result in significant savings – if you do it right. You could use spot instances to cut costs and avoid the long-term financial woes that come with poor management of cloud resources. Tools that can orchestrate spot fleets – so you’re never left without service just because a spot instance was reclaimed – help you cut down both on costs and on the need to keep on-demand server capacity and compute resources you don’t always use.
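As a rough sketch of what that orchestration can look like on AWS with boto3, the snippet below assembles a diversified spot fleet request for build workers. The AMI ID, IAM role ARN, instance types, and capacity are all placeholder assumptions, not values from any real setup:

```python
# Sketch: requesting a diversified spot fleet of build workers via boto3.
# The AMI ID, IAM role ARN, instance types, and capacity are placeholders.

def build_spot_fleet_config(target_capacity: int) -> dict:
    """Assemble a spot fleet request spread across several instance types,
    so losing one spot pool doesn't take down the whole build farm."""
    return {
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",  # placeholder
        "AllocationStrategy": "capacityOptimized",  # favor pools least likely to be reclaimed
        "TargetCapacity": target_capacity,
        "TerminateInstancesWithExpiration": True,
        "LaunchSpecifications": [
            {"ImageId": "ami-0123456789abcdef0", "InstanceType": itype}  # placeholder AMI
            for itype in ("c5.4xlarge", "c5a.4xlarge", "c6i.4xlarge")
        ],
    }

def request_build_fleet(target_capacity: int = 10):
    import boto3  # requires AWS credentials configured in the environment
    ec2 = boto3.client("ec2")
    return ec2.request_spot_fleet(
        SpotFleetRequestConfig=build_spot_fleet_config(target_capacity)
    )
```

Spreading across instance types with a capacity-optimized strategy is what keeps a spot-based build farm from disappearing all at once when one pool gets reclaimed.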
Even if you’re working on-prem, acceleration tools can give you the same (or even better) results as buying more hardware, at a fraction of the cost. The right dev acceleration platform can maximize the resources you already have on hand – the cores and CPUs in your organization’s computers – and turn them into a ready-to-use build farm that can deliver even faster performance than a dedicated build farm’s worth of physical servers.
The article we just linked shows you what acceleration looks like. Using ten machines across a virtual grid, the project used 112 cores (imagine the cost of that many individual processors if you were to buy that much hardware) to complete a build that originally took 16 minutes in less than two minutes – a 9.5x performance boost. What does that mean, in practical terms? Lots more time to iterate, test, improve, get to market faster, and get home at a reasonable hour.
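For a quick sanity check on those figures:

```python
# Sanity-checking the speedup numbers quoted above.
original_seconds = 16 * 60   # the build originally took 16 minutes
speedup = 9.5                # the reported performance boost

accelerated_seconds = original_seconds / speedup
print(f"Accelerated build: ~{accelerated_seconds:.0f} seconds")
# 960 / 9.5 is roughly 101 seconds - comfortably under the
# "less than two minutes" the article reports.
assert accelerated_seconds < 120
```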
Don’t wait, accelerate
Sure, the chip shortage will end – sooner or later – but you’ll still face the same question: is it worth buying more hardware or keeping more on-demand instances? Or would that money be better invested in acceleration tools that can do more with less, faster? Even if there were thousands of chips available at your fingertips, hardware spending is so 2000. These days, it’s all about how you can work smarter.