February 27, 2025

Stream Data from Linode Object Storage using the AWS S3 SDK

Thorsten Hans

spin s3 storage blobs linode


This tutorial walks you through building a Spin application in TypeScript that streams data from Linode Object Storage, an S3-compatible cloud storage service. A similar configuration should work for other S3-compatible object storage services, including AWS S3 itself. This is useful for applications that need to serve large files efficiently, such as media streaming, log processing, or real-time data transformations at the edge. We will explore how to configure the Spin variables required for accessing Linode Object Storage, set up routes to list and stream files using the @aws-sdk/client-s3 package, and create an additional route that applies a real-time transformation while streaming a file. Once we’ve built and tested the app locally, we’ll deploy it to Fermyon Cloud.

Prerequisites

To follow along with this tutorial, ensure you have the following installed on your machine:

  • The latest version of the spin CLI and its cloud plugin
  • A Fermyon Cloud account (We offer a free tier, allowing you to run up to five apps at no cost)
  • Node.js (version 22 or later)

For the sake of this tutorial, we’ll use an existing Linode Object Storage bucket. If you want to use your own instance, you’ll need a Linode account with permission to deploy a new Object Storage instance. To help kick things off, here’s a tutorial from Akamai on how to get started with Linode Object Storage.

Introduction

Linode Object Storage is an S3-compatible cloud storage service designed for storing and serving large amounts of unstructured data. We will use the @aws-sdk/client-s3 NPM package to interact with Linode’s Object Storage from within our Spin application.
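
To illustrate the core idea, here is a minimal sketch (with placeholder values) of pointing the AWS S3 client at a Linode Object Storage endpoint instead of AWS. In the actual application, these values will be loaded from Spin application variables rather than hard-coded:

import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

// Minimal sketch with placeholder credentials - the tutorial loads the real
// values from Spin application variables instead of hard-coding them.
const s3 = new S3Client({
    region: 'se',
    endpoint: 'https://se-sto-1.linodeobjects.com',
    credentials: {
        accessKeyId: '<your-access-key-id>',
        secretAccessKey: '<your-secret-access-key>',
    },
});

// Any S3 API call now targets Linode Object Storage, e.g. listing the objects in a bucket:
const { Contents } = await s3.send(new ListObjectsV2Command({ Bucket: '<your-bucket>' }));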

The Spin application will expose three routes:

  • GET /files - Lists all files in a Linode Object Storage bucket.
  • GET /files/:name - Streams the contents of a specified file.
  • GET /transformed-files/:name - Streams the file’s contents while transforming text to uppercase.

Streaming Data from Linode Object Storage

Step 1: Set Up the Spin Application

Run the following command to create a new Spin application using the http-ts template and move into the application directory:

$ spin new -t http-ts -a linode-streaming-app

$ cd linode-streaming-app

Step 2: Install AWS S3 Client SDK

Install the @aws-sdk/client-s3 dependency using npm:

$ npm install @aws-sdk/client-s3

Step 3: Configure Spin Application Variables

First, let’s edit the application manifest (spin.toml) and introduce application variables to allow the application’s behavior to be modified without changing the actual source code. These variables define key settings for our S3 bucket, including the region, endpoint, name, access key, and more.

[variables]
region = { required = true }
endpoint = { required = true }
bucket_name = { required = true }
access_key_id = { required = true }
secret_access_key = { required = true, secret = true }

With the application variables defined, we must update the component configuration to grant the component access to them. To do so, add a new table to spin.toml:

[component.linode-streaming-app.variables]
region = "{{ region }}"
endpoint = "https://{{ endpoint }}"
bucket_name = "{{ bucket_name }}"
access_key_id = "{{ access_key_id }}"
secret_access_key = "{{ secret_access_key }}"

Additionally, we must allow the linode-streaming-app component to send outbound network requests to our S3 bucket. Update the component configuration and set the allowed_outbound_hosts property as shown in the following snippet:

[component.linode-streaming-app]
# ...
allowed_outbound_hosts = ['https://{{ bucket_name }}.{{ endpoint }}']
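
For reference, here’s a sketch of how the relevant parts of spin.toml fit together once all three edits are in place (the component’s source and build settings come from the http-ts template and are omitted here):

[variables]
region = { required = true }
endpoint = { required = true }
bucket_name = { required = true }
access_key_id = { required = true }
secret_access_key = { required = true, secret = true }

[component.linode-streaming-app]
# ... source and build settings generated by the template ...
allowed_outbound_hosts = ['https://{{ bucket_name }}.{{ endpoint }}']

[component.linode-streaming-app.variables]
region = "{{ region }}"
endpoint = "https://{{ endpoint }}"
bucket_name = "{{ bucket_name }}"
access_key_id = "{{ access_key_id }}"
secret_access_key = "{{ secret_access_key }}"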

With the updated application manifest in place, we can move on and start implementing the business logic of our Spin application.

Step 4: Implement the Spin Application

Our Spin application provides endpoints to list all files, stream individual files as-is, and apply a simple real-time transformation (converting file contents to uppercase) before streaming them back to the client. To follow along, replace the contents of src/index.ts with the TypeScript code shown in this section’s snippets.

import { AutoRouter, json } from 'itty-router';
import { S3Client, GetObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3';
import { Variables } from '@fermyon/spin-sdk';

const dec = new TextDecoder();
const enc = new TextEncoder();

let router = AutoRouter();

// a custom config interface holding all configuration data
interface Config {
    region: string,
    endpoint: string,
    accessKeyId: string,
    secretAccessKey: string,
    bucketName: string
}

router
    .get("/files", (_, {config}) => listFiles(config))
    .get('/files/:name', ({ name }, {config}) => streamFile(name, config))
    .get("/transformed-files/:name", ({ name }, {config}) => streamAndTransformFile(name, config));

//@ts-ignore
addEventListener('fetch', async (event: FetchEvent) => {

    // load application variables
    const endpoint = Variables.get("endpoint");
    const accessKeyId = Variables.get("access_key_id");
    const secretAccessKey = Variables.get("secret_access_key");
    const bucketName = Variables.get("bucket_name");
    const region = Variables.get("region");

    // if any variable is not specified or empty, terminate and send a HTTP 500
    if (!endpoint || !accessKeyId || !secretAccessKey || !bucketName || !region) {
        return new Response("Application not configured correctly", { status: 500 });
    }

    // Pass the Configuration to the Router
    event.respondWith(router.fetch(event.request, {
        config: {
            endpoint,
            accessKeyId,
            secretAccessKey,
            bucketName,
            region
        } as Config
    }));
});

The listFiles function is responsible for loading a list of all files stored in the S3 bucket and returning them as a JSON array:

const listFiles = async (config: Config): Promise<Response> => {
    // construct a new S3 client using configuration data
    const s3 = new S3Client({
        region: config.region,
        endpoint: config.endpoint,
        credentials: {
            accessKeyId: config.accessKeyId,
            secretAccessKey: config.secretAccessKey,
        }
    });
    try {
        const input = { Bucket: config.bucketName };
        // load metadata of all files in our S3 bucket
        const { Contents } = await s3.send(new ListObjectsV2Command(input));
        // grab all file names, fallback to an empty array
        const files = Contents?.map((file) => file.Key) || [];
        // return list of files as JSON
        return json({ files });
    } catch (error) {
        console.log(error);
        return new Response(JSON.stringify(error), { status: 500 })
    }
}

Next, let’s add the streamFile function, which is responsible for streaming a particular file from the S3 bucket as-is:

const streamFile = async (name: string, config: Config): Promise<Response> => {
    // create a S3 client instance
    const s3 = new S3Client({
        region: config.region,
        endpoint: config.endpoint,
        credentials: {
            accessKeyId: config.accessKeyId,
            secretAccessKey: config.secretAccessKey,
        }
    });

    try {
        // construct command input for receiving the desired file
        const input = { Bucket: config.bucketName, Key: name };
        // request the desired file
        const { Body } = await s3.send(new GetObjectCommand(input));
        // pipe the file contents to the response
        return new Response(Body as ReadableStream, {
            status: 200,
        });

    } catch (error: any) {
        return new Response(`error : ${error.message}`, { status: 500 });
    }
}
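
If you also want to forward the original content type of the file to the client, the response of GetObjectCommand exposes a ContentType property. Here’s a hypothetical variant of streamFile that passes it along (it is not wired up as a route in this tutorial and reuses the Config interface defined above):

const streamFileWithContentType = async (name: string, config: Config): Promise<Response> => {
    const s3 = new S3Client({
        region: config.region,
        endpoint: config.endpoint,
        credentials: {
            accessKeyId: config.accessKeyId,
            secretAccessKey: config.secretAccessKey,
        }
    });

    try {
        // request the desired file and grab its content type alongside the body
        const { Body, ContentType } = await s3.send(new GetObjectCommand({ Bucket: config.bucketName, Key: name }));
        // pipe the file contents to the response, forwarding the content type if present
        return new Response(Body as ReadableStream, {
            status: 200,
            headers: ContentType ? { 'content-type': ContentType } : {},
        });
    } catch (error: any) {
        return new Response(`error : ${error.message}`, { status: 500 });
    }
}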

Finally, let’s add the streamAndTransformFile function, which is the last handler of our application. In contrast to streamFile, this handler defines and applies a TransformStream that converts the file’s contents to uppercase while streaming them back to the client:

const streamAndTransformFile = async (name: string, config: Config): Promise<Response> => {

    // define the transform operation
    const upperCaseTransform = new TransformStream({
        transform(chunk, controller) {
            // decode the byte array using TextDecoder
            const txt = dec.decode(chunk, { stream: true });
            // apply transformation and encode the transformed chunk again
            controller.enqueue(enc.encode(txt.toUpperCase()));
        }
    });

    const s3 = new S3Client({
        region: config.region,
        endpoint: config.endpoint,
        credentials: {
            accessKeyId: config.accessKeyId,
            secretAccessKey: config.secretAccessKey,
        }
    });

    try {
        const input = { Bucket: config.bucketName, Key: name };
        const { Body } = await s3.send(new GetObjectCommand(input));
        // pipe the file contents through the custom transformation
        const transformed = (Body as ReadableStream).pipeThrough(upperCaseTransform);
        // pipe the transformed stream to the response
        return new Response(transformed, {
            status: 200,
        });
    } catch (error: any) {
        return new Response(`error : ${error.message}`, { status: 500 });
    }
}
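
To get a feel for the TransformStream in isolation, here’s a small standalone sketch (not part of the application) that pipes a couple of in-memory chunks through the same uppercase transform:

const dec = new TextDecoder();
const enc = new TextEncoder();

// an in-memory source stream with two encoded text chunks
const source = new ReadableStream<Uint8Array>({
    start(controller) {
        controller.enqueue(enc.encode('hello '));
        controller.enqueue(enc.encode('world'));
        controller.close();
    }
});

// the same uppercase transform used by streamAndTransformFile
const upperCase = new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
        controller.enqueue(enc.encode(dec.decode(chunk, { stream: true }).toUpperCase()));
    }
});

// reading the transformed stream back yields 'HELLO WORLD'
const result = await new Response(source.pipeThrough(upperCase)).text();
console.log(result);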

Step 5: Compile and Run the Spin Application

With the implementation finished, we can use spin build to compile our source code to WebAssembly and spin up to run the application on our local machine.

As we’ve marked all our variables as required, we must specify them before running the application. Although there are different ways of achieving this, we’ll simply export them as environment variables with the SPIN_VARIABLE_ prefix before invoking spin up (for example, bucket_name is provided via SPIN_VARIABLE_BUCKET_NAME):

NOTE: As mentioned at the beginning of this tutorial, we’ll use a preexisting S3 bucket. (The access key generated for this tutorial has ReadOnly permissions). If you want to use your own instance of Linode Object Storage, provide your individual values when setting the application variables in the upcoming snippet.

$ spin build

$ export SPIN_VARIABLE_REGION=se
$ export SPIN_VARIABLE_ENDPOINT=se-sto-1.linodeobjects.com
$ export SPIN_VARIABLE_ACCESS_KEY_ID=XV0M33AAM5KXADAXHUAA
$ export SPIN_VARIABLE_SECRET_ACCESS_KEY=ujdC5v7f6TqfFMnjXiUpoY9uWTCYBZaJ12YV4eeX
$ export SPIN_VARIABLE_BUCKET_NAME=fermyon-blog-bucket

$ spin up

Executing spin up should generate output similar to the following, indicating that our application is now served on http://localhost:3000:

Logging component stdio to ".spin/logs/"

Serving http://127.0.0.1:3000
Available Routes:
  linode-streaming-app: http://127.0.0.1:3000 (wildcard)

Step 6: Test the Endpoints

Let’s use curl to test the different endpoints exposed by our Spin application:

List files:

curl http://127.0.0.1:3000/files
{"files":["large.txt","large2.txt","small.txt","tiny.txt"]}

Get a specific file:

curl http://127.0.0.1:3000/files/tiny.txt
lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy eirmod 
tempor invidunt ut labore et dolore magna aliquyam erat, sed diam voluptua.

Get a transformed file (uppercase text):

curl http://127.0.0.1:3000/transformed-files/tiny.txt
LOREM IPSUM DOLOR SIT AMET, CONSETETUR SADIPSCING ELITR, SED DIAM NONUMY EIRMOD
TEMPOR INVIDUNT UT LABORE ET DOLORE MAGNA ALIQUYAM ERAT, SED DIAM VOLUPTUA.

Step 7: Deploy to Fermyon Cloud

Having successfully tested the application on our local machine, we’ll use the spin cloud deploy command to deploy it to Fermyon Cloud.

NOTE: As mentioned at the beginning of this tutorial, we’ll use a preexisting S3 bucket. (The access key generated for this tutorial has ReadOnly permissions). If you want to use your own instance of Linode Object Storage, provide your individual values when setting the application variables in the upcoming snippet. Note that variables are provided in a different format when deploying to Fermyon Cloud - learn how with this short tutorial.

$ spin cloud deploy \
  --variable region=se \
  --variable endpoint=se-sto-1.linodeobjects.com \
  --variable access_key_id=XV0M33AAM5KXADAXHUAA \
  --variable secret_access_key=ujdC5v7f6TqfFMnjXiUpoY9uWTCYBZaJ12YV4eeX \
  --variable bucket_name=fermyon-blog-bucket

Deployment to Fermyon Cloud takes a couple of seconds. Once the deployment is finished, you should be presented with output similar to this:

Uploading linode-streaming-app version 0.1.0 to Fermyon Cloud...
Deploying...
Waiting for application to become ready......... ready

View application:   https://linode-streaming-app-hedpawu5.fermyon.app/
Manage application: https://cloud.fermyon.com/app/linode-streaming-app

Grab the application URL from the deployment output and send corresponding HTTP requests to the endpoints now exposed by the Spin application running on Fermyon Cloud.
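
For example, listing the files of the deployed application works just like it did locally (your application URL will differ):

curl https://linode-streaming-app-hedpawu5.fermyon.app/files
{"files":["large.txt","large2.txt","small.txt","tiny.txt"]}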

Conclusion

In this tutorial, we’ve successfully walked through creating a Spin application to stream data from Linode Object Storage using the AWS S3 SDK. By setting up the required configurations, implementing routes to list and stream files, and applying real-time transformations, you can easily manage and transform files in the cloud.

After testing the app locally, we also explored how to deploy it on Fermyon Cloud, bringing your application to production. With these steps, you’re equipped to build and run serverless applications that efficiently use S3-compatible storage services like Linode Object Storage.
