How Do Serverless Apps Work?

In this chapter we will cover the three different systems used to run serverless functions. We’ll start with the oldest (Virtual Machines) and move to the newest and most promising (WebAssembly).

Virtual Machine-backed Serverless Apps

When Amazon originally created AWS Lambda, its goal was to find a good use for excess compute power: something it could charge users for, but that would only run for a short time while a virtual machine host was underutilized. A Lambda function could run for a few seconds or a few minutes on a host, then exit and return the compute power to the host for reuse.

As Lambda functions gained popularity, the system was rebuilt. Today, AWS keeps a large queue of dedicated VMs that can run Lambda functions. Each time a request comes in, a virtual machine is popped off the queue, the serverless function is loaded onto it and run to completion. Then the entire VM instance is torn down. (Recent optimizations sometimes re-use the virtual machine for subsequent requests within a short window of time.)

Azure Functions and other cloud providers follow the same pattern.

Why start thousands or tens of thousands of virtual machines and keep them in a queue waiting for work to come in? The answer lies in VM technology itself. VMs are slow to start, taking a few seconds or more in some environments, and users will not wait that long for a request to be processed. Pre-warming virtual machines is a way to hide that startup latency, at the (considerable) cost of the CPU and memory consumed by idle VMs in the queue.
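The pre-warmed pool described above can be sketched as a simple queue model. Everything here, including the class name and the timing constants, is illustrative only and is not AWS's actual implementation:

```python
import collections

# Illustrative timing constants, not AWS's real numbers.
VM_BOOT_SECONDS = 5.0    # cold-starting a fresh VM
POOL_POP_SECONDS = 0.05  # handing out a pre-warmed VM

class VmPool:
    """A queue of pre-booted VMs waiting for serverless invocations."""

    def __init__(self, size):
        # Each idle VM consumes CPU and memory the whole time it waits.
        self.idle = collections.deque(f"vm-{i}" for i in range(size))

    def invoke(self, function):
        """Pop a warm VM, run the function to completion, tear the VM down."""
        if self.idle:
            vm, latency = self.idle.popleft(), POOL_POP_SECONDS
        else:
            # Pool exhausted: the request pays the full boot time.
            vm, latency = "vm-cold", VM_BOOT_SECONDS
        result = function()  # run to completion on the VM
        # The VM is destroyed afterwards (in the real system a replacement
        # would be booted in the background to refill the queue).
        return result, latency

pool = VmPool(size=2)
print(pool.invoke(lambda: "hello"))  # warm VM: low latency
print(pool.invoke(lambda: "hello"))  # warm VM
print(pool.invoke(lambda: "hello"))  # queue empty: pays full boot time
```

The trade-off the text describes is visible in the model: response time stays low only while idle capacity is burning resources in the queue.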

This is a problem completely solved by WebAssembly.

But first, let’s look at container-based serverless apps.

Container-based Serverless Apps

Instead of virtual machines, it is possible to use containers as the compute layer. The security boundary is not as strong, but packaging and shipping container images is simpler.

Some open source systems like OpenWhisk, Knative, and OpenFaaS use this method.

In these cases, there are two ways to build a runtime.

The runtime can work like the standard VM queueing model, but with containers. In this case, compute power is reserved for a container. When a request comes in, the container starts, runs the function to completion, and shuts down. Because container startup takes a few seconds or longer, this method is too slow for many front-line use cases. (And this problem is, similarly, solved by WebAssembly.)

The second method is to run a container all the time and have it answer multiple requests. In this case it is not “serverless” by the definition we gave earlier in this guide, but because of the way the software libraries are structured, a developer may not see the server startup and runtime in their own code. In other words, the server is there in the code, and is part of the user’s software, but is tucked out of view… until something goes wrong and the user has to debug that server code inside their app.
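The two container runtime models above can be contrasted in a minimal sketch. The function and class names and the startup constant are illustrative assumptions, not any real platform's API:

```python
CONTAINER_START_SECONDS = 3.0  # illustrative cold-start cost

def per_request_model(handler, request):
    """Model 1: start a container per request, run to completion, shut down.

    The cold-start cost is paid on *every* request.
    """
    latency = CONTAINER_START_SECONDS
    response = handler(request)
    # ...container is torn down here...
    return response, latency

class LongRunningContainer:
    """Model 2: one container answers many requests.

    The server loop is hidden inside the SDK, but it is still part of the
    user's app, and it is the user's to debug when it misbehaves.
    """

    def __init__(self, handler):
        self.handler = handler  # startup cost paid once, at deploy time
        self.served = 0

    def serve(self, request):
        self.served += 1
        return self.handler(request), 0.0  # no per-request cold start

handler = lambda req: f"echo:{req}"
print(per_request_model(handler, "a"))  # ("echo:a", 3.0)
container = LongRunningContainer(handler)
print(container.serve("a"))             # ("echo:a", 0.0)
print(container.serve("b"))             # ("echo:b", 0.0)
```

Model 1 keeps the serverless execution model but inherits slow cold starts; Model 2 is fast after startup but is really a persistent server wearing serverless clothing.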

WebAssembly Serverless Apps

The newest and fastest engine for running serverless apps is WebAssembly. A WebAssembly runtime has almost no startup time, taking less than a millisecond (several orders of magnitude faster than VMs and containers) to cold start.

Each serverless app invocation is executed in its own security sandbox, and is not given the unbridled access to the kernel and system libraries that containers receive. Once the request is handled, the entire serverless function is torn down. Because these functions are small and fast, a modest computer or virtual machine instance can securely run thousands of different apps at a throughput of 100k or more requests per second. Contrast this with the VM approach (one app and one request per short-lived VM) or the container method (roughly 30 apps per underlying virtual machine or bare metal host), and it is evident why WebAssembly is the next generation of serverless.
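The per-invocation, deny-by-default lifecycle can be sketched as follows. The capability names and the `Sandbox` class are hypothetical illustrations of the model, not a real WebAssembly or WASI API:

```python
class Sandbox:
    """A fresh, deny-by-default sandbox for a single invocation.

    Capability names like "outbound-http" are illustrative, not a real
    WebAssembly/WASI interface.
    """

    def __init__(self, granted):
        self.granted = frozenset(granted)  # only what the operator enabled

    def request_capability(self, name):
        if name not in self.granted:
            raise PermissionError(f"capability {name!r} not granted")

def invoke(function, request, granted=()):
    sandbox = Sandbox(granted)      # creation is near-instant in Wasm runtimes
    try:
        return function(sandbox, request)
    finally:
        del sandbox                 # the whole sandbox is torn down per request

def handler(sandbox, request):
    sandbox.request_capability("outbound-http")  # fails unless granted
    return f"handled {request}"

print(invoke(handler, "req-1", granted={"outbound-http"}))
# Without the grant, the handler is stopped before it reaches the host:
try:
    invoke(handler, "req-2")
except PermissionError as e:
    print("blocked:", e)
```

Because every request gets its own short-lived sandbox, one app cannot leave state behind to interfere with the next invocation or with other apps on the same host.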

Why WebAssembly is the Best Runtime for Serverless Apps

WebAssembly offers some notable advantages for serverless workflows, which is why Spin uses WebAssembly as the engine for its serverless apps.

  • Security: The WebAssembly sandbox is secure by default, allows only the capabilities explicitly enabled by the platform operator, and stops apps on the same host from interfering with one another.
  • Compactness: A WebAssembly binary is no larger than the compiled code itself. This makes it much smaller than a Docker container, which adds substantial overhead.
  • Cross-platform: The same WebAssembly binary can run on Windows, macOS, Linux and other operating systems without a recompile. Likewise, that same binary can run on Intel’s architecture, Arm’s or many others. Meanwhile, both VM and container-based serverless functions are bound to both a single OS and a single architecture, and some OSes (like macOS) are not supported natively by containers.
  • Fast: Cold start times under 1 millisecond and execution speeds approaching native code make WebAssembly a good fit for serverless functions.

These advantages make WebAssembly a better serverless alternative than VMs and containers.

Serverless Apps in Kubernetes

Kubernetes is the most popular orchestration system today. Early on, it could schedule only containers, but as it has matured, it can now schedule virtual machines and WebAssembly apps as well.

When it comes to serverless app platforms, there are a few container-based solutions, such as the Knative, OpenWhisk, and OpenFaaS projects mentioned above.

And when it comes to WebAssembly serverless apps, Spin applications can run inside of several Kubernetes environments.

In all of these cases, Spin apps are run natively inside of Kubernetes, and not merely packaged into a container.
