
KubeCon EU shows serverless advancing on general-purpose computing, but challenges remain

August 31 2020
by William Fellows


Introduction


The recent KubeCon + CloudNativeCon EU 2020 was the first virtual outing of the Cloud Native Computing Foundation's signature event. The energy levels were similar to the physical event – enhanced by the useful Slack channel communication – with the breadth and depth of subject matter perhaps even greater in this format. The colocated events continue to be lively and vital sector-specific deep dives. With serverless use and interest growing (see figure below), the Serverless Practitioners Summit attracted over 4,100 attendees – this report examines some of the issues raised during the Summit.

The 451 Take

Coming out of KubeCon + CloudNativeCon EU 2020, there are a few things we know for sure: Serverless is now the target for more general-purpose applications, enterprises are demanding (and getting) multi-cluster Kubernetes support, and service mesh is an emerging need as deployments grow. Cloud native is walking the walk as a first-class technology citizen, and its development is heading in all directions. Key questions remain: How much compute (or IaaS) eventually goes serverless, what's beyond functions as a service (FaaS), and how can entire cloud-native applications be deployed on serverless?

So much more than FaaS


Serverless is now much more than FaaS (popularized by AWS Lambda), in which compute is packaged as one or more functions; Azure Container Instances, Azure App Service, Google App Engine and AWS Fargate are other examples of serverless compute. FaaS is characterized by the use of HTTP and a handful of other event sources, and is typically functions-only, with limited execution time, no orchestration and a limited local development experience. However, many vendors now have a range of serverless offerings beyond compute, such as object storage (Azure Blob, AWS S3, GCP Storage), databases (Azure Cosmos DB, AWS DynamoDB, GCP Firestore), messaging (Azure Event Grid, AWS SNS, GCP Pub/Sub) and analytics (Azure Monitor, AWS Kinesis, GCP BigQuery). These are coming together to run bigger applications in the cloud – including entire cloud-native applications on serverless – and share common characteristics: no provisioning or management of infrastructure or platform, automatic elastic scalability, and consumption-based pricing (pay for use rather than for provisioned capacity).
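
To make the FaaS model concrete, below is a minimal sketch of an AWS Lambda-style handler in Python; the function logic, the 'order_id' event field and the response shape are illustrative assumptions, not taken from any specific deployment.

import json

# Minimal FaaS-style handler: the platform invokes this once per event,
# passing the event payload and a runtime context, and may tear down or
# reuse the instance after the invocation completes.
def handler(event, context):
    # 'order_id' is a hypothetical field on the incoming event.
    order_id = event.get("order_id", "unknown")

    # Business logic goes here; any state must be written to an external
    # service, because the function instance itself is ephemeral.
    result = {"order_id": order_id, "status": "processed"}

    # Return an HTTP-style response, as API-gateway-triggered functions
    # commonly do.
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }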

With the advent of Kubernetes and Knative, serverless containers have moved the sector forward, with frameworks that auto-scale containers and managed offerings that abstract Kubernetes away entirely. This has brought microservices into the serverless mix alongside functions, and services have become more polyglot and portable.
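
As a rough sketch (not from the report), the unit that a Knative-style platform scales – including down to zero – is an ordinary stateless HTTP container; the port handling follows the common convention of an injected PORT environment variable, and the response text is illustrative only.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stateless HTTP service is the unit a Knative-style platform scales:
# the platform injects the listening port (commonly via $PORT) and adds
# or removes container instances as traffic changes.
PORT = int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No in-process state is kept between requests, so any replica
        # can serve any request interchangeably.
        body = b"hello from a serverless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", PORT), Handler).serve_forever()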

Missing parts


Now that the maturity and benefits of serverless are being recognized more widely, providers have started adding missing parts to make serverless suitable for more general-purpose workloads. These include basic state handling, overcoming 'cold starts,' the use of integration patterns and advanced messaging capabilities – blended with enterprise PaaS and enterprise-ready event sources.
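
One common mitigation for cold starts, sketched below under stated assumptions, is to perform expensive setup once at module load so that warm invocations reuse it; the slow_connect helper and db handle are hypothetical stand-ins for a real client library.

import time

# Hypothetical stand-in for an expensive client or connection setup.
def slow_connect():
    time.sleep(2)  # simulate TLS handshakes, auth and connection pooling
    return object()

# Runs once per container instance, at cold start. Warm invocations on
# the same instance reuse this handle instead of paying the cost again.
db = slow_connect()

def handler(event, context):
    # Only per-request work happens here; the heavy setup above is
    # amortized across every invocation served by this instance.
    return {"statusCode": 200, "body": "ok"}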

However, providing state to serverless approaches remains a key challenge and a barrier to its use for general-purpose computing. Serverless functions are mostly short-lived and lose any 'state' or context information once they finish executing. Restoring that state across millions of hyperscale instances (e.g., by using external databases) is complex, but without it, stateful applications can't take advantage of serverless' automatic scaling. Accessing data from serverless is a bit like doing so from a mobile application: there is no shared memory, execution is massively concurrent, and security is a concern.

Reading directly from databases is hard (connection management, access pooling), so letting serverless functions connect to data services or data APIs instead of a database is one alternative – abstracting away the back end so that developers deal only with an API. Open source tools such as GraphQL aim to solve this, and there are several commercial approaches. Exposing transaction logic as an API endpoint or service gateway, breaking transactions into events, integrating business logic, and triggering serverless functions on changes are other approaches. The CNCF's CloudEvents specification for describing event data in a common way (the initial output of the CNCF Serverless Working Group) is moving forward as a CNCF incubating project.
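
As a rough illustration of what a CloudEvents envelope looks like in its JSON ("structured") form – the event type, source URI and order payload below are made up, while the required context attributes follow the published spec – an event might be serialized like this:

import json
import uuid
from datetime import datetime, timezone

# A CloudEvents 1.0 envelope in JSON form. The required context
# attributes are specversion, id, source and type; the event type,
# source URI and order payload here are hypothetical.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "https://example.com/orders",
    "type": "com.example.order.created",
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"order_id": "1234", "amount": 42.0},
}

# This JSON body (with a suitable Content-Type header) is what an event
# producer would POST to a consumer or broker.
print(json.dumps(event, indent=2))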

Compelled to use


For some use cases, serverless may not be a sensible option – the cons of working around state (or other constraints) outweigh the benefits. If the state problem can't be outsourced to another service (a managed service, product or team), then serverless may not be the answer.

Ultimately, the reduction in operational overhead (the NoOps model) and cost (dial up and down to zero, pay only when used) is the key benefit of serverless – and the indirect savings of a managed service that scales up and down are often more compelling than the dollars spent on compute. Although the technology for building and operating serverless applications at scale still has rough edges, partly due to the lack of an open, widely implemented standard, many born-in-the-cloud companies and some enterprises are now adopting 'serverless-first' strategies to build their businesses and modernize their application estates, given the favorable economics and speed of development that serverless makes possible.

Figure 1: Adoption Status of Select Cloud-Native Technologies
Source: 451 Research, DevOps, Workloads & Key Projects 2020