Kubernetes and Serverless Apps
Earlier in the Complete Developer’s Guide to Serverless Apps, we defined serverless and talked about how it relates to other cloud technologies. In this section, we will cover how serverless apps fit into the Kubernetes story.
Kubernetes
Kubernetes was originally designed as a system for orchestrating deployments of Docker containers. Inspired by Google’s internal Borg scheduler, Kubernetes introduced a declarative, YAML-based language for expressing how an application should be run. From wiring up network services to provisioning storage to defining scaling behavior, these manifests describe every aspect of an application’s runtime needs.
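For example, a minimal Deployment manifest declares the desired state (here, a hypothetical web server with three replicas) and leaves it to Kubernetes to make the cluster match it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web              # hypothetical application name
spec:
  replicas: 3                  # desired state: keep three copies running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27    # any container image would do here
          ports:
            - containerPort: 80
```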
Over its first decade of development, Kubernetes also became more flexible. While traditional long-running server processes remain its strength, it can now accommodate a variety of runtime primitives beyond just containers.
One way to extend Kubernetes’ runtime characteristics is through the containerd runtime. The containerd runtime is responsible for executing a workload (usually a container) on Kubernetes’ behalf. Thanks to its pluggable architecture, containerd can also execute non-container workloads through an extension mechanism the project calls “shims.” There are shims for a variety of runners, including WebAssembly Spin applications.
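As a sketch of how a shim is exposed to Kubernetes: once a Wasm shim such as containerd-shim-spin has been installed and registered in containerd’s configuration on a node, a RuntimeClass tells the scheduler which handler to use. The names below follow the SpinKube documentation, but treat them as assumptions about your cluster’s setup:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2   # the name pods (or SpinApps) reference to request the Wasm runtime
handler: spin              # must match the shim's runtime name registered in containerd
```

A workload that sets `runtimeClassName: wasmtime-spin-v2` is then executed by the Spin shim instead of the default container runtime.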
Kubernetes Serverless App Platforms
Over the last decade, several serverless platforms for Kubernetes have emerged. We’ll cover some of the older ones first, and then talk about SpinKube, the newest, fastest, and most scalable serverless app platform for Kubernetes.
Knative
An early entrant into the Kubernetes serverless world was Knative. Executing each serverless workload in a container, Knative provides the CRDs and operators inside of Kubernetes needed to simulate a Lambda-like environment. While it is conceptually complex, Knative is at its heart an event-driven serverless environment. However, functions in Knative aren’t quite serverless in the sense we used in our serverless app definition, because they are actually long-running processes running inside of containers.
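For illustration, a Knative workload is typically declared as a Knative Service, which Knative Serving scales up and down (including to zero) based on request traffic. The name and image reference below are hypothetical placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-fn                                  # hypothetical function name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/hello-fn:latest  # a long-running HTTP server packaged in a container
          env:
            - name: TARGET
              value: "world"
```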
OpenFaaS
OpenFaaS is a less complex alternative to Knative. Like Knative, it packages long-running servers inside of container images, but it encourages the developer to focus on writing serverless event handler functions. Lately, the OpenFaaS project seems to have shifted its focus toward deploying any application, not just serverless functions.
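OpenFaaS functions are typically described in a stack.yaml file and built and deployed with the faas-cli tool. A minimal sketch (with a hypothetical function name, gateway address, and image) might look like this:

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080    # assumes a locally port-forwarded OpenFaaS gateway
functions:
  hello:
    lang: node                      # language template used by faas-cli
    handler: ./hello                # directory containing the handler function
    image: example/hello:latest     # hypothetical container image to build and push
```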
OpenWhisk
IBM’s functions-as-a-service cloud offering is built on its own open source serverless solution, OpenWhisk. OpenWhisk is an official Apache project and provides a serverless app platform that runs, among other places, inside Kubernetes. It, too, packages functions inside of containers, which are invoked per request. While OpenWhisk’s cold start times can be very slow (one user reported more than 30 seconds of startup time per function request), it is a solid example of a serverless platform.
Fn Project
Oracle Cloud’s FaaS offering is built on Fn Project. Like other first-generation serverless app platforms, it packages workloads into containers. The open source project seems to have stalled recently, and may not be gaining new features.
SpinKube: A New Serverless Platform for Kubernetes
Unlike the other serverless platforms reviewed above, SpinKube does not run containers. Instead, it runs WebAssembly-based serverless apps. While containers take a few seconds to cold start, WebAssembly apps in SpinKube cold start in less than one millisecond, faster than the blink of an eye. That means three things:
- Far more serverless functions can be run per node in your Kubernetes cluster. Usually, you’ll run out of IP addresses before you run out of memory or CPU for Wasm serverless apps.
- More serverless functions can be invoked simultaneously, which means higher throughput. We’ve been able to run in excess of 100,000 requests per Kubernetes node.
- Cost goes down. Just a couple of modest-sized virtual machine nodes can run hundreds of serverless apps and still maintain failover and resilience requirements.
Once deployed into a Kubernetes cluster, SpinKube can schedule WebAssembly applications both alongside containers within the same pods and as standalone apps described by SpinApp manifests. All Spin applications are supported by SpinKube, and most WASI-compliant WebAssembly apps can also run in SpinKube, provided they conform to the latest version of the WASI specification (0.2 as of this writing).
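A minimal SpinApp manifest, following the shape described in the SpinKube documentation (the application name and image reference are hypothetical), looks roughly like this:

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin                            # hypothetical app name
spec:
  image: ghcr.io/example/hello-spin:v0.1.0    # Spin app published as an OCI artifact
  replicas: 2                                 # SpinKube scales the app much like a Deployment
  executor: containerd-shim-spin              # run on the containerd Spin shim described earlier
```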
Spin apps can be written and tested locally, and then deployed into Kubernetes as WebAssembly binaries. There is no need to wrap the binary in a container. WebAssembly applications can be packaged in the same packaging format that containers use (OCI Distribution), which means Spin serverless apps can be stored in Docker Hub, GitHub Container Registry, and other OCI-compliant registries.
Comparisons
| Platform | Official Languages | Cold Start | Packaging | Actively Maintained | Hosted Cloud |
|----------|--------------------|------------|-----------|---------------------|--------------|
| SpinKube | JS/TS, Go, Rust, Python, .NET | <1 ms | Wasm | Yes | Fermyon Cloud |
| Knative | Go, Java, JS/TS | >5 sec | Container | Yes | N/A |
| OpenFaaS | Go, JS/TS | >10 sec | Container | Yes | N/A |
| OpenWhisk | JS/TS, Java, .NET, Rust, etc. | >5 sec | Container | Minimally | IBM Cloud |
| Fn Project | Go, Java, JS, Python, Ruby | ? | Container | No | Oracle Cloud |