August 15, 2023

NoOps and Serverless Are the Perfect Pair

Matt Butcher

Cloud NoOps Serverless


NoOps and Serverless Are Not (Just) Buzzwords

As an engineer, I am skeptical of any word that sounds buzzy yet lacks a clear definition. Yet I’ve already used two such words. If we are going to talk about these trends with intellectual honesty, we’re going to need some clear definitions. This article aims to define both concepts clearly enough that, by the end, you can be comfortable with the idea that NoOps and Serverless are the perfect pair.

NoOps, a Clear Description

NoOps (short for No Operations) is a statement about the absence of operational aspects of a service. From load balancers to databases, there is no shortage of services in today’s cloud. Operational platforms like Kubernetes expose as many configuration options as possible, giving platform engineers tremendous flexibility. But those of us who only want to write application code are still overwhelmed by the myriad options. Most application developers don’t want to spend valuable time tuning a database instance, optimizing load balancers, or performing routine security operations such as adding (and refreshing expired) Secure Sockets Layer (SSL) certificates. From an application developer’s perspective, the services that support an application should be available without the developer needing to install, configure, and manage them. This is where the concept of NoOps shines.

NoOps enables the developer’s desire to focus on code: the infrastructure layer is operated or automated on the developer’s behalf. Consider a database. In a NoOps environment, developers do not install the database in their development environment or their production environment. The developer does not have to create credentials, manage access controls, configure security, or even work with a connection string. All of this is done at a lower level. The developer merely declares the intention to use a database (perhaps in an application configuration) and then begins working with the database (creating tables, inserting data, and querying).
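To make that concrete, here is a minimal sketch of what that declare-and-use flow feels like, borrowing the Spin JavaScript SDK and SQLite interface used in the worked example later in this article (the note table is purely illustrative). The manifest declares sqlite_databases = ["default"], and the code simply starts using the database:

import {Sqlite} from "@fermyon/spin-sdk"
const encoder = new TextEncoder()

export async function handleRequest(request) {
    // No hostname, port, username, password, or connection string: the
    // platform hands the component a ready-to-use database connection.
    const conn = Sqlite.openDefault();
    conn.execute("CREATE TABLE IF NOT EXISTS note (body TEXT NOT NULL)");
    conn.execute("INSERT INTO note VALUES ('hello, NoOps')");
    const result = conn.execute("SELECT * FROM note;");

    return {
        status: 200,
        body: encoder.encode(JSON.stringify(result.rows)).buffer
    }
}

Everything operational (where the database lives, how to reach it, who may talk to it) is the platform’s problem.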

NoOps is about keeping the application developer’s focus on the application code, not the environment in which it runs.

Here are a few key things a service must provide to be NoOps:

  • There is no local installation of services in the developer’s environment.
  • The services an application needs are already available wherever the developer runs it, locally or in the cloud. That is, no installation or service provisioning is necessary; the software stack already exists.
  • The developer never manages connections, usernames, passwords, connection strings, security tokens, or SSL/TLS certificates.
  • The ongoing operation of the application is automatic. The developer does not have to manage software upgrades (database or key/value storage software versions). Naturally, the developer will manage the application’s data to adhere to the application’s business logic, creating, updating and documenting the schema.

Serverless

Now let’s look at serverless. What is the “server” that we are doing without when we talk about “serverless”? There are a few ways we could answer that question, but my preferred answer is that serverless is a programming pattern in which the application responds to a single event. Contrast this with server programming (the opposite of serverless), in which the developer creates a software server (a daemon process in Unix parlance) that listens on a socket and manages many requests over time. In the latter, server programming scenario, the server runs for days, weeks, or months. In the former, serverless scenario, the server is an ephemeral Virtual Machine (VM) instance that runs for milliseconds, seconds, or minutes (i.e. only the time it takes to respond to an incoming request). If you would like to read more about this, we have a blog post on why serverless is the future of the cloud.
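To make the contrast concrete, here are two rough JavaScript sketches (both illustrative rather than complete programs). First, the server style: a Node.js daemon that binds a socket once and then services many requests over its lifetime.

import http from "node:http"

// A long-lived daemon: the process keeps running for days, weeks, or months.
http.createServer((request, response) => {
    response.end("handled by a process that keeps on running")
}).listen(8080)

And the serverless style: one function, one event, in the shape of the Spin handler we will build later in this article. The platform creates an instance to handle the request and tears it down afterwards.

const encoder = new TextEncoder()

// One function, one event: the instance exists only long enough to respond.
export async function handleRequest(request) {
    return {
        status: 200,
        body: encoder.encode("handled by an ephemeral instance").buffer
    }
}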

Given this definition, we are talking about frameworks like AWS Lambda, Azure Functions, Cloudflare Workers, and similar implementations, sometimes referred to by the clunky name Functions as a Service (FaaS).

I use the term serverless application to describe an application composed of one or more serverless functions. Every existing AWS Lambda function, Azure Functions function, and so on is a serverless application with exactly one serverless function. Frameworks like Spin (which we’ll discuss later) allow developers to declare that multiple functions (components) are to be grouped (and hence deployed atomically) into a single application. Since both follow the same architecture and development patterns, I prefer the more general serverless application term instead of serverless function or FaaS. But if you prefer, you can mentally substitute your preferred term.

Serverless applications are well suited for distributed computing, as each invocation of each function is stateless. In the case of the Spin framework, each event triggers a new instance of the function. The function is executed in a memory-safe, sandboxed WebAssembly (Wasm) VM instance, and therefore execution always starts from a fresh state. As an aside, this default safety aspect of Wasm is brilliant for running many serverless applications on shared hardware, but I digress. Regarding state: because nothing survives between invocations, any information the application needs across requests must be kept in persistent storage. Examples of persistent storage include key/value, NoSQL, and SQL databases. And this is where NoOps services pair with serverless.
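As a tiny illustration, suppose we want to count requests. A global variable will not survive between invocations, so the count has to live in a platform-provided store. The sketch below assumes Spin’s key/value interface (Kv.openDefault(), get(), set(), and exists()) and a key_value_stores = ["default"] grant in the component’s manifest; check the Spin key-value documentation for the exact API.

import {Kv} from "@fermyon/spin-sdk"
const encoder = new TextEncoder()
const decoder = new TextDecoder()

export async function handleRequest(request) {
    // Each invocation starts from a fresh instance, so the counter lives in
    // the platform-provided key/value store rather than in process memory.
    const store = Kv.openDefault();
    const previous = store.exists("hits") ? Number(decoder.decode(store.get("hits"))) : 0;
    store.set("hits", encoder.encode(String(previous + 1)).buffer);

    return {
        status: 200,
        body: encoder.encode(`hits so far: ${previous + 1}`).buffer
    }
}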

Let’s look back at the way things were, to provide some additional context so that we can then circle back and crystallize our point that NoOps and Serverless are the perfect pair.

The Old Way

As a development paradigm, serverless applications have done well. CNCF reports that almost 5 million developers have written serverless applications, and at a Lambda session at AWS re:Invent, the speaker said that Lambda (the original serverless platform) handles over 10 trillion invocations per month. As we have talked to developers about serverless, we have heard the same refrain time and time again:

I love writing serverless applications, but… serverless frameworks are slow, locked into providers, and are hard to wire up.

This story tells us two things very clearly:

  • Developers love something: the serverless pattern.
  • But they are looking for improvements in three areas:
    • speed,
    • portability, and
    • developer experience.

Let’s discuss speed, portability and developer experience.

Speed

A serverless application responds to an event. For example, most serverless applications deployed today (according to a report by Datadog) are triggered by an HTTP event: they receive a request and return a response. Other serverless functions may be triggered by an event on a queue, a timed event (like cron), or something else entirely.

But in all these cases, startup time is essential, especially when users are waiting for a response. The most popular serverless implementations (AWS Lambda, Azure Functions, and Google Cloud Functions) tend to be slow to start, requiring between 200 and 500 milliseconds of startup time. That means it may take half a second before your code even begins executing. When user research (and Google’s page ranking algorithm) suggests that an application should begin delivering its response to the user within 100 milliseconds, it becomes clear why startup time is so important.

Fast execution and network response times are equally important, and every hop between service gateways adds latency on this front.

Fortunately, the solution to the speed issue is finding the right compute engine to execute serverless functions. We’ve often written about why Wasm is the right engine for this. Spin and Fermyon Cloud can cold-start your serverless application in less than a millisecond. But the other two problems are sensitive to considerations beyond the compute runtime.

Portability

The first thing I, as a developer, think of when I hear the word “portability” is running the same application on different operating systems and architectures. This is an attribute that the serverless developer crowd asks for. Developers want to write code on their preferred Operating System (OS) and architecture without worrying about building applications that match the server architecture and OS.

Once more, Spin solves this problem with Wasm.

Build on Windows and Intel, deploy on Linux and Arm64, and everything works. No recompiling, no system-specific libraries … it just works. One proof of this: regardless of what OS or architecture you build on, you can deploy your Spin app to Fermyon Cloud.

But that’s only half of the portability story. The second half has to do with running the application in different environments.

“When I write a Lambda function, I’m stuck on a particular AWS service.”

That’s how one user described their frustration with Lambda. Another, more operationally minded person described Lambda as “Frankenstein’s monster to our otherwise well-integrated Kubernetes environment.” To that user, the problem was that their serverless applications operated outside of the parameters that all of their servers (containers) used. And that led to a litany of exceptions, particular configurations, and edge cases that became operational burdens.

There are two parts to alleviating this frustration:

  1. The runtime in which serverless applications run must be portable across various environments.
  2. The developer must be freed from having specific operational knowledge of each deployment environment.

The first, being an attribute of the runtime, entails that the runtime be made available as a stand-alone unit (i.e. not only as a service, as is the case with Lambda). That stand-alone runtime can then be integrated into various environments, such as Kubernetes and Nomad clusters, small environments like Raspberry Pis, edge environments, and so on. (Meeting this objective is why we made it possible to run Spin apps in many environments.)

NoOps hits its stride here: it gives the developer a portable runtime without asking the developer to write their code for a specific deployment environment.

After all, customizing your application for any single specific environment means that your application’s portability is immediately reduced.

NoOps patterns provide a way to reverse the provision-it-yourself cloud trend found in mega-clouds like AWS and Azure. With NoOps, the developer is given a ready-made set of environmental services (key/value storage, a relational database, domain mapping, proxying, and so on). When a developer runs their application in any of these environments, the services are so well integrated into the toolchain that the developer doesn’t even concern themselves with managing connections or credentials, let alone installing and configuring servers or services. The array of NoOps services mentioned above simply becomes part of that portable runtime.

Another way to think about this is in the programming terminology of interfaces and implementations: the developer is guaranteed access to an implementation of, say, the key/value interface, the SQL interface, and so on. The platform must provide the implementation (or raise errors during development and deployment, not at runtime). Note that the requirements are more precise than the interface/implementation analogy suggests. It’s not a matter of providing merely a db.execute() function, but one that executes SQL statements written in a particular flavor of SQL. The developer may be guaranteed an SQLite-compatible database, for example, and is therefore guaranteed that all SQL written in that dialect will execute as expected.
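Put in code terms, the handler below is written purely against the SQL interface (this is the same Spin JavaScript API used in the worked example later in this article, and it assumes the character table created there). Which implementation answers the call is the platform’s business, not the handler’s:

import {Sqlite} from "@fermyon/spin-sdk"
const encoder = new TextEncoder()

export async function handleRequest(request) {
    // Interface: "give me the default SQL database and run this
    // SQLite-dialect statement."
    const conn = Sqlite.openDefault();
    const result = conn.execute("SELECT * FROM character;");

    // Implementation: whatever the platform wired up behind openDefault()
    // (a local SQLite file during spin up, a managed database in the cloud).
    return {
        status: 200,
        body: encoder.encode(JSON.stringify(result.rows)).buffer
    }
}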

Once more, in such an environment, the operational aspects of these services are beyond the developer’s purview, and hence what we are describing is a NoOps developer experience.

What’s the New Way?

I’ve written before about how the future of serverless is Wasm. And that is indeed a huge part of the equation. We’ve begun realizing this vision with Fermyon Spin and with Fermyon Cloud.

Spin is the framework (and tooling) for building serverless applications. It is powered by Wasm, which is an excellent portable binary format. The main Spin program (aptly named spin) streamlines the developer experience, making it easy to create new applications from Spin templates, build them into Wasm binaries, test them locally, and deploy them upstream. The spin program can also act as a runtime and run in smaller environments like Raspberry Pi or large and robust environments like Kubernetes, Nomad, and Docker Desktop.

Spin includes built-in NoOps services such as key-value storage and SQLite storage. How NoOps-friendly are these?

SQLite Storage (A JavaScript Example)

To answer the question, let’s go ahead and whip up a quick application. If you haven’t already, please go ahead and install Spin.

Upgrading Spin: If you have Spin installed and are interested in checking your version and possibly upgrading, please see the Spin upgrade page of the developer documentation.

Please visit our Building Spin Components in JavaScript documentation to ensure your system can compile JavaScript programs to Spin components.

We create our application (in this case using the http-js template) via the spin new subcommand:

$ spin new http-js
Enter a name for your new application: sqlite-example-application
Description: A JS/TS application to test SQLite storage
HTTP base: /
HTTP path: /...

The above command has automatically scaffolded the following application structure for us:

$ tree .
.
└── sqlite-example-application
    ├── README.md
    ├── package.json
    ├── spin.toml
    ├── src
    │   └── index.js
    └── webpack.config.js

Next, we move into our newly scaffolded application’s directory and use npm to take care of any package and dependency tasks (i.e. install/update webpack etc.):

$ cd sqlite-example-application 
$ npm install

To tell Spin that we want to use SQLite storage, we only need to grant the component permission to use SQLite in the application’s manifest (the spin.toml file):

$ vi spin.toml

We simply need to add the following line inside the [[component]] section of our application’s spin.toml file:

sqlite_databases = ["default"]

After the above manifest change, the spin.toml file will look similar to the following:

spin_manifest_version = "1"
authors = ["Fermyon Engineering <engineering@fermyon.com>"]
description = "A JS/TS application to test SQLite storage"
name = "sqlite-example-application"
trigger = { type = "http", base = "/" }
version = "0.1.0"

[[component]]
id = "sqlite-example-application"
source = "target/sqlite-example-application.wasm"
exclude_files = ["**/node_modules"]
sqlite_databases = ["default"]
[component.trigger]
route = "/..."
[component.build]
command = "npm run build"

Please note that you will need to run spin build for the above configuration change to take effect:

$ spin build

Now that Spin is aware of the SQLite database, we can use the spin up command with the --sqlite option to pass SQL statements (to create and populate a table) directly into the database. The --sqlite option that spin up provides is a great way to bootstrap your application’s database. Let’s create a table and add one row of data:

spin up --sqlite "CREATE TABLE IF NOT EXISTS character (id INTEGER PRIMARY KEY AUTOINCREMENT, firstname TEXT NOT NULL, lastname TEXT NOT NULL)" --sqlite "INSERT INTO character VALUES (NULL,'Lois','Lane')"

The above command will produce output similar to the following:

Storing default SQLite data to ".spin/sqlite_db.db"
Serving http://127.0.0.1:3000
Available Routes:
  sqlite-example-application: http://127.0.0.1:3000 (wildcard)

The above spin up command will run your Spin application. For now, press Ctrl + C to exit spin up and stop your Spin application.

Once the application is stopped, we will add more data (using spin up and the --sqlite option again), this time by passing in a whole .sql file. After our data is added, we will update our application’s source code so that the application can read the data, and finally we will run and test the application.

A quick note on the --sqlite flag: you can pass --sqlite more than once per spin up command; the statements are run in the order you provide them, and Spin waits for each statement to complete before running the next. This is what we did above (created the table and then added a row of data). If required, you can even pass in whole .sql files using the --sqlite flag. (When passing in a whole file, you must prefix the filename with @, e.g. spin up --sqlite @migration.sql.) For more information, please see the developer documentation.

Let’s pass in a whole .sql file (add another row):

$ spin up --sqlite @migration.sql
Storing default SQLite data to ".spin/sqlite_db.db"
Serving http://127.0.0.1:3000
Available Routes:
  sqlite-example-application: http://127.0.0.1:3000 (wildcard)

The .sql file in this case simply contains text that will add one more row to our existing table:

INSERT INTO character VALUES (NULL,'James','Bond');

With our database table created and populated (albeit with only two rows), the following snippet of code in our application’s src/index.js file allows us to read the data from the character table:

import {Sqlite} from "@fermyon/spin-sdk"
const encoder = new TextEncoder()

export async function handleRequest(request) {
    // Open the "default" database declared in spin.toml; no connection
    // string or credentials are needed.
    const conn = Sqlite.openDefault();
    // Query the table and serialize the resulting rows as JSON.
    const result = conn.execute("SELECT * FROM character;");
    const json = JSON.stringify(result.rows);

    return {
        status: 200,
        headers: { "foo": "bar" },
        body: encoder.encode(json).buffer
    }
}

With the source code in place, we can build (and run using the --up option):

$ spin build --up

Lastly, we test the application (you can make a request in your web browser, or use curl as we do below):

$ curl -i localhost:3000
HTTP/1.1 200 OK
foo: bar
content-length: 38
date: Tue, 15 Aug 2023 06:21:38 GMT

[[1,"Lois","Lane"],[2,"James","Bond"]]

What’s important to notice about this example is what is missing: there are no steps to install additional servers, provision a database, create users or credentials, or even configure a connection string with details about where the database lives. Those are all pieces of operational information. This is a NoOps solution.

You are currently working with a local database running inside Spin.

Deploying to Cloud, NoOps-Style

When you deploy your Spin application to Fermyon Cloud, the cloud also implements these features, but in a very different way.

While spin creates a local SQLite instance to store your data (.spin/sqlite_db.db), a Fermyon Cloud deployment provisions a fully managed, in-cloud Turso database. Whether you are running your application on localhost (via spin up) or on Fermyon Cloud (via spin cloud deploy), not a line of your code needs to change. There are still no usernames to create and no connection strings to manage.

Thanks to the Wasm component model, which is what Spin uses, it is easy to provide platform-specific implementations of each of these services without requiring the developer to do even an iota more work.

Fermyon NoOps vs “Big Cloud Provider” Operations

To drive the point home, let’s roughly compare how provisioning a serverless application works between Spin + Fermyon Cloud and what we’ll just call Big Cloud Provider (BCP) functions. We’ll compare the operational tasks for an application using inbound HTTP, key/value storage, a custom domain (assuming you have already registered the domain), an SSL certificate, and a database.

Here is the comparison, task by task, with rough time estimates:

Inbound HTTP
  • Spin (< 1 min): Declare in spin.toml
  • BCP (15 min): Declare in application; set up a load balancer or API gateway service

Key/Value
  • Spin (< 1 min): Declare in spin.toml
  • BCP (60 min): Declare in app; provision a key/value storage service (like Redis); create account(s) and credentials; configure permissions; supply SSL/TLS certificates; get the endpoint; make sure credentials and endpoint are securely stored somewhere and can be securely injected into the application

SQLite DB
  • Spin (< 1 min): Declare in spin.toml
  • BCP (60 min): Declare in application; provision a database-as-a-service; supply SSL/TLS certificates; create users; configure permissions; get the endpoint; securely store and pass credentials

Custom Domain
  • Spin (10 min): After the first deploy, add the domain in the Fermyon Cloud dashboard; verify the domain
  • BCP (40 min): After deploy, get the IP of the deployed application; provision DNS records in the domain management service; create records (A, CNAME, etc.) linking the IP to the address

SSL/TLS
  • Spin (0 min): Automatic, with or without a custom domain
  • BCP (75 min): Create or sign up to use a certificate authority; generate a new certificate pair; have the certificate authority sign the new key; add the key material to the appropriate service (load balancer or application); make sure you have secured the secret key

The comparison above suggests that a Spin application takes around 15 minutes to completely configure in Fermyon Cloud, because almost no operational knowledge is required.

In contrast, setting up the BCP version might take as much as four hours — half of the average workday — to get that first deployment working.

And this is all without considering the ongoing operational time. Things like rotating keys, upgrading services, re-issuing SSL certificates and so on are all time-consuming. But NoOps systems like Fermyon Cloud and Spin mean that you don’t have to perform these service tasks at all. It’s part of the platform.

Conclusion

The first iteration of serverless introduced us to a new and wonderful way of writing applications. But the hidden cost came in portability, performance, and operational complexity.

At Fermyon, we’re dedicated to making this better. In Spin and Fermyon Cloud, we have implemented NoOps services that make it trivially easy for you to get straight to building your application without tons of local configuration and server operations. We believe that NoOps and Serverless are the perfect pair, and we intend to create the most frictionless experience: one where you spend less time performing service and maintenance tasks and more time developing your applications. If you want to try this, head to the Spin QuickStart Guide. Or, if you’ve done that already, a wealth of examples can be found at the Spin Hub.

Be the First on Your Street to Get Your Hands on That NoOps SQL Goodness

Visit our NoOps SQLite Database page to get early access to Fermyon Cloud NoOps databases. You’ll need to complete a short sign-up form and follow the instructions; then Fermyon Cloud will provision and manage the cloud database for you. You could even deploy the JavaScript example from above. This is a great opportunity to experience our NoOps SQLite database in Fermyon Cloud.
