April 25, 2024

WebAssembly in Azure with Azure Kubernetes Service and SpinKube

Thorsten Hans



In this article, we will deploy SpinKube onto Azure Kubernetes Service (AKS) to run WebAssembly (Wasm) workloads right beside containers. On top of that, we will pull OCI artifacts (representing Spin Apps) from a private Azure Container Registry (ACR) by leveraging the underlying Azure identities (an approach often referred to as passwordless).

By running Wasm workloads on Kubernetes, you can unlock the following advantages:

  • Security: Wasm modules are executed in a strict sandbox; permissions to interact with resources such as the file system or outbound network connectivity must be explicitly granted to a Wasm module.
  • Speed: Wasm itself is fast! You can expect near-native runtime performance. On top of that, you can expect cold-start times of less than a millisecond when using SpinKube.
  • Density: There are two factors that lead to high density:
    • Serverless Wasm workloads (Spin Apps) are instantiated (cold-started) per trigger event (e.g., an incoming HTTP request). This means apps consume resources only while doing actual work, which allows other apps to use the same resources when they are triggered.
    • Wasm workloads consume only a fraction of resources compared to regular containers, which allows you to deploy more workloads to a particular cluster.
  • Elasticity: By combining the speed and density provided by Wasm with the horizontal cluster auto-scaling capabilities provided by AKS, you can build elastic serverless compute platforms that dynamically adjust their size to the load they’re facing.

Prerequisites

To follow along with the instructions in this article, you must have the following tools installed on your local machine:

  • Azure CLI (az)
  • kubectl
  • Helm
  • spin CLI (including the kube plugin)
  • TinyGo (to build the Go-based sample app)
  • curl (to test the deployed app)

On top of that, an Azure Account is required.

Your Azure account will be charged for the cloud resources deployed as part of this article, as well as for transaction-based fees (such as Azure outbound traffic).
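A quick way to confirm that all tools are available is to print their versions; the exact output will vary depending on the versions you have installed:

# Verify that all required tools are installed
az version
kubectl version --client
helm version
spin --version
tinygo version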

Deploying the Azure Infrastructure

For the sake of this article, we will provision the required cloud infrastructure using Azure CLI (az). We will provision the following resources in Azure:

  • 1 Azure Resource Group: An Azure Resource Group is a logical container for Azure resources
  • 1 Azure Container Registry (ACR): ACR is a private registry service that stores and manages OCI artifacts
  • 1 Azure Kubernetes Service (AKS): AKS is a fully managed Kubernetes service, simplifying the deployment, management, and scaling of distributed apps

For real-world scenarios, you should consider using infrastructure as code stacks such as Terraform, Pulumi, or - in the case of Azure - Bicep.

# Allocate a random number (used for ACR suffix)
suffix=$((RANDOM % 900 + 100))

# Variables
location=germanywestcentral
rgName=rg-spinkube-on-azure
acrName="spinkube${suffix}"
tokenName=spincli

# Azure Resource Group
az group create --name $rgName \
  --location $location

# Azure Container Registry (ACR)
az acr create --name $acrName \
  --resource-group $rgName \
  --location $location \
  --sku Standard \
  --admin-enabled false

# Create a Token (which we will use for the Spin CLI)
tokenPassword=$(az acr token create --name $tokenName \
 --registry $acrName \
 --scope-map _repositories_push \
 -otsv --query "credentials.passwords[0].value")

# Grab the resource identifier of the ACR instance
acrId=$(az acr show --name $acrName -otsv --query "id")

# Azure Kubernetes Service (AKS)
az aks create --name aks-spinkube \
  --resource-group $rgName \
  --location $location \
  --tier Free \
  --generate-ssh-keys \
  --node-count 2 \
  --max-pods 75 \
  --attach-acr $acrId

Although the script above is quite simple, I want to highlight some parts of it:

  1. The ACR is created using the Standard SKU, which allows us to use tokens and scope-maps.
  2. The admin account of the ACR is disabled.
  3. An ACR token (tokenName) is created with permissions to push OCI artifacts, and its password is stored in the tokenPassword variable. We’ll use this ACR token to authenticate our local spin CLI later in this article.
  4. Upon creating the AKS cluster, the ACR instance is attached using its identifier (acrId). Azure automatically configures IAM, which allows us to pull OCI artifacts from ACR without specifying credentials in Kubernetes (passwordless). A quick way to verify this is shown below.
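As a quick sanity check, you can let Azure validate that the cluster is able to pull from the registry. This is a minimal sketch using the az aks check-acr helper, assuming the variables from the provisioning script are still set:

# Validate that AKS can pull OCI artifacts from ACR
az aks check-acr --name aks-spinkube \
  --resource-group $rgName \
  --acr $acrName.azurecr.io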

Once the AKS cluster is provisioned, we must download the corresponding credentials for interacting with the cluster using kubectl:

# Download Credentials for AKS
az aks get-credentials -n aks-spinkube \
  --resource-group $rgName

You can check at any time which Kubernetes cluster is active in the context of kubectl using the kubectl config get-contexts command. At this point, it should show aks-spinkube as the current context:

# List kubectl contexts and check if aks-spinkube is current
kubectl config get-contexts

CURRENT   NAME                          CLUSTER
*         aks-spinkube                  aks-spinkube
          k3d-wasm-cluster              k3d-wasm-cluster
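If aks-spinkube is not marked as the current context, switch to it before moving on:

# Make aks-spinkube the current kubectl context
kubectl config use-context aks-spinkube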

Deploying SpinKube

The SpinKube documentation provides in-depth guides for deploying SpinKube on top of Kubernetes. For the sake of this article, we will rush through deploying all the necessary pieces to get going quickly.

Let’s start with deploying cluster-wide resources:

# Deploy the Spin Operator CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

# Deploy the SpinAppExecutor (containerd-shim-spin)
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml

# Deploy the RuntimeClass
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
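Before moving on, it is worth verifying that the CRDs and the RuntimeClass actually landed in the cluster. A quick check, assuming the RuntimeClass shipped with SpinKube 0.1.0 is named wasmtime-spin-v2:

# Verify the CRDs and the RuntimeClass
kubectl get crds | grep spinoperator.dev
kubectl get runtimeclass wasmtime-spin-v2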

Next, we deploy cert-manager and KWasm:

# Install cert-manager CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.crds.yaml

# Add Jetstack & KWasm repositories to Helm
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm repo add jetstack https://charts.jetstack.io

# Update Helm repositories
helm repo update

# Install the cert-manager Helm chart
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.14.3

# Install KWasm operator
helm install \
  kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm \
  --create-namespace \
  --set kwasmOperator.installerImage=ghcr.io/spinkube/containerd-shim-spin/node-installer:v0.13.1

# Provision Nodes
kubectl annotate node --all kwasm.sh/kwasm-node=true
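The annotation tells the KWasm operator to install the containerd-shim-spin binary on every node. You can observe this process; a hedged check, assuming KWasm runs its node-installer pods in the kwasm namespace we created above:

# Observe the KWasm node-installer pods
kubectl get pods --namespace kwasm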

Finally, we can deploy the Spin Operator using its Helm Chart as shown in the following snippet:

# Install Spin Operator with Helm
helm install spin-operator \
  --namespace spin-operator \
  --create-namespace \
  --version 0.1.0 \
  --wait \
  oci://ghcr.io/spinkube/charts/spin-operator
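Once the Helm release is installed, you can double-check that the Spin Operator is up and running in its namespace:

# Verify that the Spin Operator is running
kubectl get pods --namespace spin-operator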

Creating a Spin App

Now that we’ve SpinKube installed on our AKS cluster, we can build a simple Spin App to verify everything works as expected. See the code in the following snippet used to create a simple Hello World sample application.

# Create a new Spin App
spin new -t http-go -a hello-spinkube

# Move into the `hello-spinkube` directory
cd hello-spinkube

Before compiling the source code down to the wasm32-wasi platform (using spin build), let’s change the implementation of the Spin App (in ./main.go) to match the following:

package main

import (
	"fmt"
	"net/http"

	spinhttp "github.com/fermyon/spin/sdk/go/v2/http"
)

func init() {
	spinhttp.Handle(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/plain")
		fmt.Fprintln(w, "Hello from SpinKube running in AKS!")
	})
}

func main() {}

Now we can build the Spin App and distribute it through a public container registry:

# Set variables
oci_artifact=ttl.sh/hello-spinkube-on-aks:24h

# Build the Spin App
spin build
  Building component hello-spinkube with `tinygo build -target=wasi -gc=leaking -no-debug -o main.wasm main.go`
  Finished building all Spin components

# Distribute the Spin App
spin registry push $oci_artifact
  Pushing app to the Registry.
  Pushed with digest sha256:86dbd1662de749bcfd58f1f44a352fc06b1e46703ef75911bdf94ce4053faa44

In the snippet above, we used ttl.sh, which is an anonymous and ephemeral container registry that does not require any authentication for pushing or pulling OCI artifacts.
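Before deploying to AKS, you can give the freshly pushed artifact a quick smoke test on your local machine. This is a hedged sketch, assuming your spin CLI version supports running apps directly from a registry reference:

# Run the Spin App locally, straight from the registry
spin up -f $oci_artifact

Press Ctrl+C to terminate the local instance once you have verified the app responds as expected.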

Deploy the Spin App to AKS

Finally, we can deploy the Spin App to our AKS cluster. To do so, we can use the scaffolding capabilities provided by the kube plugin for the spin CLI. You can check if the kube plugin is installed on your machine, as shown here:

# Update Spin Plugin information
spin plugins update
  Plugin information updated successfully

# List all Spin Plugins (installed & available)
spin plugins list

# Upgrade the kube plugin (if outdated)
spin plugins upgrade kube

If the kube plugin is not installed on your machine, you can install it using spin plugins install kube. Follow the commands in the next snippet to scaffold the necessary Kubernetes deployment manifests for our Spin App:

# Scaffold Kubernetes Deployment Manifests
# Store them in the spinapp.yaml file
spin kube scaffold -f $oci_artifact > spinapp.yaml

Take a look at the spinapp.yaml file. As you can see, our manifest is an instance of the SpinApp CRD, which we deployed to the AKS cluster in the previous section:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spinkube-on-aks
spec:
  image: "ttl.sh/hello-spinkube-on-aks:24h"
  executor: containerd-shim-spin
  replicas: 2
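The scaffold command accepts additional flags to tweak the generated manifest. For example, here is a hedged sketch of requesting three replicas instead of the default (assuming your version of the kube plugin supports the --replicas flag); we will stick with the default manifest for the rest of this article:

# Print a manifest with three replicas to stdout
spin kube scaffold -f $oci_artifact --replicas 3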

Although there are different ways to deploy your apps to Kubernetes (kubectl, GitOps, Helm Charts, …), we’ll use good old kubectl apply for now:

# Deploy to the AKS cluster
kubectl apply -f spinapp.yaml
  spinapp.core.spinoperator.dev/hello-spinkube-on-aks created

The Spin Operator takes care of provisioning and managing the Kubernetes primitives required by our Spin App. You can use kubectl to inspect what has been created:

# List SpinApps,Deployments,Pods and Services
kubectl get spinapps,deploy,po,svc
NAME                                                  READY   DESIRED   EXECUTOR
spinapp.core.spinoperator.dev/hello-spinkube-on-aks   2       2         containerd-shim-spin

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-spinkube-on-aks   2/2     2            2           111s

NAME                                         READY   STATUS    RESTARTS      AGE
pod/hello-spinkube-on-aks-547dcb5b47-qqmhz   1/1     Running   0             111s
pod/hello-spinkube-on-aks-547dcb5b47-89x2x   1/1     Running   0             111s

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/hello-spinkube-on-aks   ClusterIP   10.43.213.34    <none>        80/TCP     111s

To call the endpoint exposed by our Spin App, we can configure port-forwarding and use a tool like curl:

# Setup port-forwarding
kubectl port-forward services/hello-spinkube-on-aks 8080:80
  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80

From within a new terminal instance, use curl to send an HTTP GET request to localhost:8080, which will be forwarded to port 80 of the hello-spinkube-on-aks service running inside the AKS cluster:

# Send an HTTP GET request to the Spin App
curl -iX GET http://localhost:8080

HTTP/1.1 200 OK
content-type: text/plain
content-length: 36
date: Mon, 15 Apr 2024 15:17:22 GMT

Hello from SpinKube running in AKS!

Distributing Spin Apps via Azure Container Registry

Spin Apps are packaged and distributed as OCI artifacts, as you have already learned. This also means that you can use the Azure Container Registry (ACR) to distribute your Spin Apps without exposing them to the public. Because we attached the ACR instance to our AKS cluster while provisioning the cloud infrastructure, we can rely on Azure identities, which Azure creates and assigns automatically. This allows our AKS cluster to pull OCI artifacts from ACR without specifying any credentials (which is often referred to as passwordless).
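If you want to see the underlying IAM wiring, you can list the role assignments scoped to the ACR instance; attaching the registry should have granted the cluster’s kubelet identity the AcrPull role. A minimal sketch, assuming $acrId is still set from the provisioning script:

# List role assignments scoped to the ACR instance
az role assignment list --scope $acrId \
  --query "[].{principal:principalName, role:roleDefinitionName}" \
  --output table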

However, we have to authenticate our local spin CLI against ACR. See the following snippet, which authenticates the spin CLI against our ACR instance and pushes the hello-aks app to ACR:

# Authenticate against our ACR instance
spin registry login -u $tokenName -p $tokenPassword $acrName.azurecr.io

# Push the hello-aks app to ACR
spin registry push $acrName.azurecr.io/hello-aks:0.0.1

# Re-create the Kubernetes Manifests (spinapp.yaml)
spin kube scaffold -f $acrName.azurecr.io/hello-aks:0.0.1 > spinapp.yaml

Looking at the spinapp.yaml now, you should see the image property pointing to an OCI artifact in your ACR instance:

apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-aks
spec:
  image: "spinkube001.azurecr.io/hello-aks:0.0.1"
  executor: containerd-shim-spin
  replicas: 2

Again, you can use kubectl apply -f to deploy the Spin App to the AKS cluster:

# Deploy the Spin App
kubectl apply -f spinapp.yaml

# Check which images are used by the Spin App
kubectl describe po -l core.spinoperator.dev/app-name=hello-aks | grep image

  Normal  Pulling    3m9s  kubelet   Pulling image "spinkube296.azurecr.io/hello-aks:0.0.1"
  Normal  Pulled     3m8s  kubelet   Successfully pulled image "spinkube296.azurecr.io/hello-aks:0.0.1" ...
  Normal  Pulling    3m9s  kubelet   Pulling image "spinkube296.azurecr.io/hello-aks:0.0.1"
  Normal  Pulled     3m8s  kubelet   Successfully pulled image "spinkube296.azurecr.io/hello-aks:0.0.1" ...

Finally, we can configure port-forwarding again, and use curl to call into the Spin App:

# Setup port-forwarding
kubectl port-forward services/hello-aks 8080:80
  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80

From within a new terminal instance, use curl to send an HTTP GET request to localhost:8080, which will be forwarded to port 80 of the hello-aks service running inside the AKS cluster:

# Send an HTTP GET request to the Spin App
curl -iX GET http://localhost:8080

HTTP/1.1 200 OK
content-type: text/plain
content-length: 36
date: Mon, 15 Apr 2024 15:17:22 GMT

Hello from SpinKube running in AKS!

As you can see, the private image was pulled from ACR using the underlying Azure identity, and the Spin App works as expected.

Removing the Azure Resources

You can remove the Azure resources we created as part of this article using the following command:

# Delete all Azure Resources created as part of this article
az group delete --name rg-spinkube-on-azure \
  --yes \
  --no-wait

Removing the resources will take several minutes. The --no-wait flag prevents your terminal instance from being blocked while the deletion is in progress.
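Because the command returns immediately, you can check later whether the resource group is actually gone:

# Check whether the resource group still exists
az group exists --name rg-spinkube-on-azure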

Conclusion

With SpinKube, we are finally able to run WebAssembly workloads as first-class citizens on top of Kubernetes, which allows us to cut cloud spending and drive resource utilization even higher. Additionally, we don’t have to over-provision to handle unexpected load in a reasonable amount of time. Wasm workloads start almost instantly and perform at near-native speed. Azure Kubernetes Service, among others, is a popular, resilient, and robust managed Kubernetes offering that SpinKube has supported since its inception.

Being able to use proven Azure patterns, such as relying on Azure identities to pull OCI artifacts from Azure Container Registry, demonstrates that SpinKube integrates seamlessly into existing cloud infrastructure and can act as your individual, dense, and predictable serverless compute platform.