Cloud Development over the Next 10 Years

Joseph Sibony
April 25, 2022

Cloud Development

Cloud development has become increasingly complex over the years. Developers now need to understand a range of cloud technologies, including APIs, Infrastructure as Code tools such as Terraform and Pulumi, and the lifecycle of resources in the cloud. Over time, engineering organizations have invested more and more resources in managing the cloud.
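
To make that concrete, even a single managed resource now comes with code describing its lifecycle. Below is a minimal Infrastructure as Code sketch using Pulumi's TypeScript SDK; the resource name and tags are illustrative rather than taken from any real project:

    // Minimal Pulumi program: declare one S3 bucket whose lifecycle
    // (create, update, delete) is tracked across deployments.
    import * as aws from "@pulumi/aws";

    const artifacts = new aws.s3.Bucket("build-artifacts", {
        versioning: { enabled: true },   // keep old object versions
        tags: { team: "platform" },      // illustrative tag
    });

    // Export the generated bucket name so other stacks and tools can use it.
    export const bucketName = artifacts.id;

Multiply this by every queue, database, role, and network rule an application touches, and the cloud knowledge a developer is expected to carry grows quickly.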

Our own research found that 62% of engineering capacity in mid-sized companies and fast-growing startups is dedicated to feature development, and that share continues to drop. In other words, only about 6 out of every 10 engineers are creating and improving customer- and user-facing features, and the trend is not heading in the right direction. Based on multiple large public surveys, we also found that both Dev and Ops teams felt complexity was their biggest challenge, a 17% increase from 2008. The ecosystem has built five years’ worth of tech, and things have only gotten harder.

Looking at Kubernetes, the de facto cloud/container platform that companies are adopting, a 2021 study by Humanitec shows that the majority of its 1,800+ respondents underestimated the complexity of Kubernetes, causing problems for newer companies following current industry trends.

(based on the Humanitec 2022 whitepaper)

At the heart of these issues is the rise of microservices – and it’s not what most of us were promised.

Microservices

Historically, legacy monolithic applications (monoliths) were deployed and managed as one large unit using tools like Chef and Puppet.

Microservices, by contrast, make it easier to achieve fault isolation, independent deployments, custom environments for each service, modular code, and better team boundaries.

Implementing this modern approach means developing and operating tens, hundreds, and sometimes thousands of small pieces – so tools are created to put the pieces back together, often with little more than duct tape.

Microservices are the assembly code of the cloud – low-level building blocks that facilitate the execution of differently configured and optimized code bundles. Developers and operators must consider instance counts, scaling rules, topology and service definitions, pod structures, compute and datastore optimizations, service discovery, and the specific tools required for their particular business and application.
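
As a rough sketch of what that low-level wiring looks like for a single service, here is a Pulumi (TypeScript) definition of a Kubernetes deployment, service, and scaling rule; the names, replica counts, and resource limits are illustrative:

    import * as k8s from "@pulumi/kubernetes";

    const labels = { app: "orders" };

    // Pod structure, instance count, and compute limits for one service.
    const deployment = new k8s.apps.v1.Deployment("orders", {
        spec: {
            replicas: 3, // instance count
            selector: { matchLabels: labels },
            template: {
                metadata: { labels },
                spec: {
                    containers: [{
                        name: "orders",
                        image: "registry.example.com/orders:1.4.2",
                        resources: {
                            requests: { cpu: "250m", memory: "256Mi" },
                            limits: { cpu: "500m", memory: "512Mi" },
                        },
                    }],
                },
            },
        },
    });

    // Service definition so other services can discover this one.
    const service = new k8s.core.v1.Service("orders", {
        metadata: { labels },
        spec: { selector: labels, ports: [{ port: 80, targetPort: 8080 }] },
    });

    // Scaling rule: keep CPU utilization near 70% across 3 to 10 replicas.
    const autoscaler = new k8s.autoscaling.v2.HorizontalPodAutoscaler("orders", {
        spec: {
            scaleTargetRef: {
                apiVersion: "apps/v1",
                kind: "Deployment",
                name: deployment.metadata.name,
            },
            minReplicas: 3,
            maxReplicas: 10,
            metrics: [{
                type: "Resource",
                resource: {
                    name: "cpu",
                    target: { type: "Utilization", averageUtilization: 70 },
                },
            }],
        },
    });

None of this logic is about the business problem the service solves; it exists only to keep the pieces running and talking to each other.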

As a result, neither solution’s benefits are fully realized. As companies accelerate their digital transformations, they are coming to the harsh realization that adopting microservices – and, to a similar degree, serverless paradigms – is costlier, harder to hire for, and more complicated to introduce, integrate with, and develop and operate for than their current approaches.

Cloud is expensive compared to traditional datacenters when running at scale, and the complexity of microservices-based architectures makes it easy to fall into expensive anti-patterns. This leads to hiring more infrastructure and platform engineers, who are in high demand, but due to the inherently low-level nature of microservices, their ability to absorb the complexity on behalf of the rest of the organization almost always falls short.

The Ideal New Architecture

It is important to ask, “What aspects of computer engineering can be applied to close that gap?” There is a need for a new architecture that combines the convenience of monoliths with an adaptive system that leverages previous architectures behind the scenes – and for it to be effective, the developer’s cognitive load must be significantly reduced.

A solution should…

  • maintain benefits from existing architectures
  • keep tools and programming languages usable
  • integrate with an ecosystem instead of trying to replace it
  • ensure user code is recognizable, debuggable, and patchable, even in production

Monoliths, microservices, and serverless architectures each have tangible benefits, and a new architecture must be able to provide those benefits while reducing the complexity of gaining them.

As a result, it should be tailored to the existing skill sets of developers and operators rather than simply introducing a new development model. The ecosystem already provides solutions for many of the challenges that companies and organizations face; to accelerate adoption, the architecture must complement those solutions rather than attempt to replace them.

Moreover, there will be no significant industry adoption if developers, operators, and DevOps practitioners cannot operate, debug, and patch their applications in production.

Ease of use should be the north star, and a solution has to focus on higher-level developer and operator intent rather than low-level microservices. This means that a descriptive mechanism needs to codify that intent. Programmers should write code in a way that suits them; the solution should use their intent early on to determine what backend wiring and analysis are needed behind the scenes to meet their requirements. Operators should then be able to make quick modifications to the services they operate.
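
As a sketch of what codified intent could look like, consider ordinary application code carrying a single annotation that states the developer’s intent; the annotation syntax below is hypothetical and only meant to illustrate the idea:

    // Ordinary application code: the developer writes a plain web service.
    import express from "express";

    const app = express();

    app.get("/orders/:id", (req, res) => {
        res.json({ id: req.params.id, status: "shipped" });
    });

    /* Hypothetical intent annotation: "expose this service publicly".
     * The tooling decides behind the scenes how to satisfy it
     * (gateway, load balancer, scaling, service discovery). */
    /* @intent::expose { target = "public" } */
    app.listen(3000, () => console.log("orders service up"));

The developer states what they want at a high level, the tooling determines the backend wiring needed to meet it, and operators can later adjust that wiring without touching the application code.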

Build and Deploy

The build and deploy process is crucial for any new service architecture. These processes and tech stacks are critical to organizations, and, as in the previous section, the architecture should complement these efforts rather than replace them completely.

In the new architecture, the integration layer that brings together all the modular services – whether they are built in a monorepo or a multi-repo fashion – becomes even more important. Parallel execution, testing, deployment, branching, and the classic integration steps will all matter more. In this world, most distributed systems will be aware of their various parts, and the build system must preserve the benefits introduced by microservices.
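
A minimal sketch of that integration layer’s job in a monorepo might look like the following; the service paths and commands are illustrative:

    // Orchestrate per-service builds and tests in parallel (Node + TypeScript).
    import { exec } from "node:child_process";
    import { promisify } from "node:util";

    const run = promisify(exec);

    // Illustrative monorepo layout; each service owns its build/test scripts.
    const services = ["services/orders", "services/billing", "services/users"];

    async function buildAndTest(dir: string): Promise<void> {
        await run("npm run build && npm test", { cwd: dir });
    }

    // Parallel execution across services; the first failure fails the run.
    Promise.all(services.map(buildAndTest))
        .then(() => console.log("all services built and tested"))
        .catch((err) => { console.error(err); process.exit(1); });

Whatever form it takes, this layer is what lets hundreds of independently built pieces still ship as one coherent product.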

Existing infrastructure solutions often do something really clever. They show users how to code and how to build and deploy. If you follow their instructions, you’ll gain the benefits advertised.

However, someone has to run and operate it. Many people say, “Well, it’s open source – you can look at it, download it, and make changes to it.” In reality, most companies do not have the time or expertise to do this. Eventually, you end up in a situation where an entire company’s technology and business depend on someone knowing how to manage a system that is half managed by someone else.

A solution should be simple enough for you to operate yourself, and in the event that something goes wrong, you should be able to fix it without relying on outside intervention.

Summary

Ultimately, the tools enabling this new architecture should absorb the complexity, not pass it on to another developer, operator, or third party.

With Klotho, we’re implementing a new architecture based on those principles, bringing together three disciplines that haven’t historically been applied together: compiler theory, applied distributed systems, and constraint-based planning. Your application is the starting point, and you write your own code. In your code, you provide us with your intent through high-level annotations. You run Klotho, and that’s it; your application is now cloud-native.
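
As an illustration of that flow – the annotation below is only indicative of the style, and the Klotho documentation is the authoritative reference for exact names and fields – a plain in-memory map can be annotated so that the compiler backs it with durable cloud storage:

    /* Annotation stating intent: persist this map durably.
     * (Illustrative syntax; see the Klotho docs for the exact format.) */
    /* @klotho::persist */
    const preferences = new Map<string, string>();

    // The developer keeps coding against an ordinary Map.
    export function savePreference(userId: string, theme: string): void {
        preferences.set(userId, theme);
    }

The code stays recognizable, debuggable, and patchable, while the low-level resources and wiring are produced for you.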

From your experience, what do you think is important to have in the next cloud architecture?
