
Raising a toast to cloud native: A nontech explanation of the cloud-native paradigm

May 13 2021
by Owen Rogers, Jean Atelsek


Introduction


'Cloud native' appears to be on everyone's mind right now, but it's easy to get lost in its new vocabulary – from microservices to service mesh. Here we present a simple analogy to aid the understanding of cloud native for a nontech audience.

The 451 Take

The fundamental benefit of cloud is rapid scalability of infrastructure – being able to deploy compute, storage and network in minutes without advance reservation. However, just because an application has access to more resources doesn't mean it can scale efficiently, just as a bigger factory doesn't help a goods manufacturer whose supply chain can't keep up. Cloud native is a set of architectural principles that allow applications to be managed efficiently at scale. The key principle is that the application is broken into discrete, independent components called 'microservices.' Containers are the technology by which these microservices are packaged. Each microservice can be updated, scaled and managed simply and independently, without rebuilding the whole application. Ultimately, this aids the user experience – new features can be added more quickly, performance can be improved by scaling more rapidly, and bugs can be resolved swiftly. There are so many terms in the cloud-native paradigm that it's easy to become overwhelmed. Just remember that all of them serve one goal – to allow applications composed of hundreds or even thousands of components to evolve, and to evolve quickly.

Building a toaster


Back in 2008, a designer by the name of Thomas Thwaites attracted media attention for trying to build a toaster from scratch – extracting iron, mica and copper from the ground and processing them to produce the plug, wires and heating elements that together make the $20 machine nearly all of us have in our homes. He succeeded, to a degree, but within seconds of plugging it in, it started to melt. This 'monolithic toaster' was built purely for fun and experimentation – no manufacturer would attempt such a feat.

But why wouldn't they? Why do toaster manufacturers use a multitude of suppliers for the components required, and why do their suppliers also use other suppliers for their materials? In a nutshell, each manufacturer is an expert at their specific task. They can do it efficiently at a lower cost than nonexperts could achieve, using differentiated technology and expertise. Furthermore, by not being tied to one supplier, the toaster manufacturer is able to use other suppliers if one can no longer furnish the components it needs, and the supplier is also free to sell its components to other manufacturers.

All of these suppliers are decoupled. They have relationships that are beneficial to all parties, but they aren't beholden to each other. Each is free to change its business model, technology and processes without asking permission, as long as the product is still profitable and attractive to the market. And each is free to exchange goods and services with others. This last point is key: Each manufacturer can use other suppliers if it needs to ramp up production.

A monolithic toaster


In our monolithic toaster, each component is part of the whole and can't easily be separated from the others. If a resistor (a small electronic component that limits the flow of current) becomes obsolete, the manufacturer must retool the whole factory to make new resistors and redesign the toaster. If there is a shortage of silicon for the circuit board, it must build a new mine to extract it. If there is a surge in demand for toasters, the manufacturer must scale its whole operation to deliver. Of course, it could build a bigger factory – but it also needs to be able to scale every single aspect of its operation. Although a bigger factory would increase its potential to produce, its output might remain static if it can't find the materials. When such things happen, the manufacturer is trapped – vulnerable to any gap in the overall process.

Just like a toaster, an application has many components – functions that each perform a specific task. In our monolithic model, the developer builds this application as an integrated whole. If a part of the application needs to be changed or updated, the whole application must be changed or updated. If demand is putting a strain on database transactions, the whole application must scale rather than just the database. Of course, we can make the application bigger. Scalable access to infrastructure through the cloud has made this feasible – we could put it on a bigger cloud instance or across virtual machines. However, the problem remains: Because the system is structured as a tightly integrated monolith, bottlenecks in any part of the code can bring down the whole application.

The cloud-native toaster


No manufacturer would ever consider making a toaster from scratch. A supply chain exists to help the manufacturer scale production. In this model, if a resistor becomes obsolete, the manufacturer can use an alternative supplier. Its existing resistor manufacturer, being an expert, would likely already have developed a replacement. The resistor can just be swapped out rather than having to redesign the toaster from scratch. If the toaster manufacturer experiences a boom in demand, it can scale up its capacity by building a new factory – but it can also scale up its output by using more suppliers.

In a cloud-native model, components of the application are separated and self-supporting – these components are called microservices. Each is a distinct function with a specific purpose that doesn't rely on any other component. Interactions with each component are often performed via APIs (application programming interfaces), which define a standard format for making a request to the microservice. If one service needs to be updated, it can be done independently of the rest of the application. If there is a boom in demand, microservices can be scaled independently based on what is required at the time. If there is a sudden need to write more database transactions, for example, microservices to do this task can be added without expanding the whole application. The cloud enables these microservices to scale by providing scalable infrastructure. Containers are often the technology that is used to package these microservices as self-reliant parcels of code and libraries. Serverless is a model whereby microservice code can be executed without the developer being concerned with the underlying infrastructure.
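The API idea can be sketched in a few lines of code. Below is a minimal, illustrative sketch – the 'toast service' name and the JSON fields are invented for this example – of a microservice that performs one discrete task behind an agreed request format, so callers depend only on the contract, never on the internals:

```python
import json

def toast_service(request_json: str) -> str:
    """A self-contained 'microservice': accepts a JSON request,
    performs one discrete task, and returns a JSON response."""
    request = json.loads(request_json)
    slices = request.get("slices", 1)       # sensible defaults are part of the contract
    level = request.get("level", "medium")
    # The service owns its own logic; it can be rewritten, redeployed or
    # scaled without any caller noticing, as long as the API stays the same.
    return json.dumps({"status": "done", "toasted": slices, "level": level})

# Any other component calls it via the agreed format:
response = json.loads(toast_service('{"slices": 2, "level": "dark"}'))
```

Because the interaction is just a request in a standard format, the caller could equally be another microservice, a web front end, or a test harness.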

Of course, with so many components, the manufacturer needs to keep track of which suppliers provide which components and how many have been ordered and delivered. If more components are needed, more must be ordered or more suppliers sought. This is the role of container orchestration, such as Kubernetes – to keep track of containers (and the microservices contained within them) so they can be scaled up or down.

What if suppliers could communicate among themselves to fulfill manufacturer demand, without having to go through a central authority in a way that could slow down production? The job of communicating among the suppliers in service of the manufacturer's needs is the role of the service mesh (Istio is an example) – to regulate and direct communication between microservices. As the number of microservices increases, the service mesh keeps messages flowing between new containers being implemented by the container orchestration platform. If a new microservice is added, service discovery tools help identify and track its location.
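Service discovery can be pictured as a shared address book that instances register themselves in when they start. The toy sketch below shows only that idea – a real mesh such as Istio automates registration, health checking and routing, and the service names and addresses here are invented:

```python
# A toy service registry: service name -> list of instance addresses.
registry = {}

def register(name: str, address: str) -> None:
    """A new instance announces itself when it starts up."""
    registry.setdefault(name, []).append(address)

def discover(name: str) -> str:
    """A caller asks where the service lives; a mesh would also pick a
    healthy instance and balance load across the replicas."""
    return registry[name][0]

register("toast-svc", "10.0.0.5:8080")
register("toast-svc", "10.0.0.6:8080")   # a second replica joins as demand grows
```

Because callers look up the service by name rather than hard-coding an address, replicas can come and go without any other component being reconfigured.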

Perhaps the toaster manufacturer needs to add a new resistor to its toaster design. It could just pop in the new version and hope it works. Of course, the manufacturer would want to check that it fits with the rest of the design. Even though the resistor is a component, it must meet certain standards to keep the application working – for example, we would want to make sure our resistor is of a certain size and color. In our application, continuous integration (CI) takes the updated code for a microservice and packages it with other libraries into an image that can replace the component like-for-like. These images are stored in a registry and are provisioned to the application via continuous delivery (CD), which swaps the old code for the new code. The image provides a unit of code that can be provisioned repeatedly to make the component scale to changing demands.
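The flow above has two steps: CI packages code into a versioned image stored in a registry, and CD swaps the running version like-for-like. The sketch below is a deliberately simplified illustration of that flow – the service name, tags and registry structure are all invented:

```python
# A toy CI/CD flow (illustrative only).
image_registry = {}   # image registry: service -> {tag: packaged code}
running = {}          # what is currently deployed: service -> tag

def ci_build(service: str, tag: str, code: str) -> None:
    """Continuous integration: package the code as an immutable, versioned image."""
    image_registry.setdefault(service, {})[tag] = code

def cd_deploy(service: str, tag: str) -> None:
    """Continuous delivery: swap the running version like-for-like."""
    assert tag in image_registry[service]   # only deploy images CI has produced
    running[service] = tag

ci_build("toast-svc", "v2", "def toast(): ...")  # new code becomes image v2
cd_deploy("toast-svc", "v2")                     # v2 replaces the old version
```

Because the image is immutable and versioned, the same unit can be deployed once to update the service or many times to scale it – and rolling back is just deploying the previous tag.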

The toast of the town


The Cloud Native Computing Foundation publishes a map of the cloud-native computing landscape. Today, there are 924 parts of this map, with the CNCF reporting a combined market cap of $14.59 trillion and funding of $16.44 billion. While we've explained the main facets of the paradigm here, there are many, many more. But knowledge of all of them isn't necessary for the vast majority of users. This landscape simply represents the tools developers can use to build cloud-native applications. Tools such as containers, service meshes, CI/CD, microservices and orchestration help architect such applications. However, these applications also need observability/monitoring tools, logging, proxies, visualizations, security controls and automation – these capabilities elevate cloud-native applications from scalable applications to scalable applications that can be secured and managed in production environments.

Successive waves of 451 Research's semiannual DevOps survey show organizations increasingly incorporating cloud-native technologies into their environments, shifting from plan to proof of concept to team-level adoption to full adoption (see figure below). The flexibility, scalability and velocity of cloud-native are creating a new paradigm for app deployment and disrupting the status quo. Born-in-the-cloud companies and digital leaders are using these technologies to be more flexible and adaptable to changing market conditions. Yes, there are challenges in terms of complexity and securely integrating the old with the new, but vendors and open source communities are solving for these issues and moving to a future where IT resources dynamically adapt to the application rather than having to work the other way around.

Figure 1
Adoption Status for Cloud-Native Technologies, 2H 2020
451 Research: DevOps, Organizational Dynamics 2020