Building a social app with Spin (4/4): Key-Value storage and Fermyon Cloud
Justin Pflueger
Hello Fermyon friends! Justin Pflueger here with part 4 of our blog series ‘Building a social app with Spin’. In this post we’re going to use the new Key-Value storage feature and deploy our application to Fermyon Cloud using a GitHub workflow.
If you haven't yet, make sure to check out the previous posts in this series, where we cover the setup, authentication, and persistence of data for this application.
Today we'll be taking a look at integrating the new Key-Value feature into our application for two use cases. The first is caching the authentication signing key: currently our application makes an outbound HTTP request for the JWKS used to verify the signature of an authentication token. The second is providing configuration for our application that works in Fermyon Cloud. While Spin supports dynamic configuration when you run applications locally, the engineering team is still working on bringing dynamic configuration to Fermyon Cloud. Once we integrate Key-Value, we'll be able to deploy our application using a GitHub action and see everything working end-to-end. Exciting stuff!
Caching the Token signing keys
Before we go about caching the JSON Web Key Set (a.k.a. JWKS), we need to understand the trade-offs of doing so. Any time we can eliminate an outbound HTTP request is a performance improvement for our API endpoints. During the authorization flow, Auth0 verifies that the user is authenticated through GitHub and then uses the private key of an RS256 key-pair to sign the claims of the authenticated user. This is asymmetric cryptography, a perfect fit for our use case where the client application is public and therefore can't safely hold a secret. As you might have guessed from the name, the JWKS is a set of public keys whose private-key counterparts were used to sign the JSON Web Tokens (a.k.a. JWTs) issued by Auth0 during the OAuth2 authorization flow. These keys are periodically rotated for security hygiene, which means we can't just naively cache them forever: they might change, and our API would start refusing all requests with invalid token signatures. We can work with this by adding a time-to-live (or TTL) to our cached copy of the signing keys and checking it before we use the cached JWKS.
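To make that concrete, here's a minimal sketch of verifying a token's signature with a public key from the JWKS. It assumes the jsonwebtoken crate (v8) and a hypothetical Claims struct; the real component's types and validation rules will differ.

use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

// Hypothetical claims struct; real tokens carry more fields.
#[derive(Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

fn verify_token(token: &str, modulus: &str, exponent: &str) -> anyhow::Result<Claims> {
    // Build a verification key from the RSA components ("n" and "e") of the
    // JWKS entry whose "kid" matches the token header.
    let key = DecodingKey::from_rsa_components(modulus, exponent)?;
    // Only the public key is needed to check the RS256 signature; the
    // private counterpart never leaves Auth0.
    let token_data = decode::<Claims>(token, &key, &Validation::new(Algorithm::RS256))?;
    Ok(token_data.claims)
}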
Let's start with the profile component, which is written in Rust, by adding a function that populates the cache; from there it should be easy to see how to read from the cache. Since this caching operation is not a critical function of our application, we can be pretty loose with our error handling here, as long as we log enough information to debug when the cache isn't working as expected.
fn set_cached_jwks(store: &spin_sdk::key_value::Store, jwks: bytes::Bytes) -> Result<()> {
    // Expire the cached keys five minutes from now.
    let expiry = Utc::now() + chrono::Duration::minutes(5);
    // Store the expiry as little-endian bytes alongside the JWKS itself.
    let expiry = expiry.timestamp_millis().to_le_bytes();
    store.set("jwks_ttl", expiry)?;
    store.set("jwks", jwks)?;
    Ok(())
}
The first thing we need to do is generate our TTL. The longer we wait before re-populating the cache, the higher the chance that our application encounters an error while trying to verify a token. Let's start with 5 minutes and adjust it later. After we convert the timestamp to bytes, it's a pretty simple API to save both the TTL and the JWKS in the Key-Value store. Let's move on to retrieving from the cache. We'll add a complementary function that takes a Key-Value Store as a parameter and returns a Result that encapsulates either a successful cache hit or an error if no valid cached key is found.
fn get_cached_jwks(store: &spin_sdk::key_value::Store) -> Result<bytes::Bytes> {
    // Read the stored expiry; a miss means we never cached the JWKS.
    let expiry = match store.get("jwks_ttl") {
        Ok(expiry) => expiry,
        Err(_) => {
            return Err(anyhow::anyhow!("No cached JWKS found."));
        }
    };
    // Convert the raw bytes back into a fixed-size array.
    let expiry = match expiry.try_into() {
        Ok(expiry) => expiry,
        Err(_) => {
            return Err(anyhow::anyhow!("Cached JWKS has invalid expiry."));
        }
    };
    // Decode the little-endian millisecond timestamp.
    let expiry = match Utc.timestamp_millis_opt(i64::from_le_bytes(expiry)) {
        LocalResult::Single(expiry) => expiry,
        _ => {
            return Err(anyhow::anyhow!("Cached JWKS has invalid expiry."));
        }
    };
    // Treat an expired cache entry the same as a miss.
    if expiry <= Utc::now() {
        return Err(anyhow::anyhow!("Cached JWKS has expired."));
    }
    match store.get("jwks") {
        Ok(jwks) => Ok(bytes::Bytes::from(jwks)),
        Err(_) => Err(anyhow::anyhow!("No cached JWKS found.")),
    }
}
This function has a few more steps in decoding the TTL from bytes into a timestamp, so I'm a little more verbose with the error handling here in case we need to debug it later. Once the TTL is decoded, all that's left is to check whether it's still valid and, if so, return the cached JWKS value. Now we just need to fix the plumbing: pass a Key-Value store through to these functions and insert calls to them where we're already using outbound HTTP to fetch the JWKS.
impl JsonWebKeySet {
    pub fn get(url: String, store: &spin_sdk::key_value::Store) -> Result<Self> {
        // Prefer the cached JWKS; fall back to an outbound HTTP request.
        let jwks_bytes = match get_cached_jwks(store) {
            Ok(jwks) => jwks,
            Err(cache_err) => {
                println!("Error getting cached JWKS: {}", cache_err);
                let req_body = http::Request::builder()
                    .method("GET")
                    .uri(&url)
                    .body(None)?;
                let res = match outbound_http::send_request(req_body) {
                    Ok(res) => res,
                    Err(e) => {
                        println!("Error getting JWKS from url {}: {}", &url, e);
                        return Err(e.into());
                    }
                };
                let res_body = match res.body().as_ref() {
                    Some(bytes) => bytes.slice(..),
                    None => {
                        return Err(anyhow::anyhow!(format!(
                            "Error getting JWKS from url {}: no body",
                            &url
                        )));
                    }
                };
                // Cache the fresh response; a failure here is non-fatal.
                if let Err(e) = set_cached_jwks(store, res_body.clone()) {
                    println!("Error caching JWKS: {}", e);
                }
                res_body
            }
        };
        Ok(serde_json::from_slice::<JsonWebKeySet>(&jwks_bytes)?)
    }
    ...
This part might be easier to visualize as a diff in the pull request, but I've added the code here for clarity as well. It's a pretty simple cached operation: we check for the existence of the cached value and use it if we find it. If there is no cached value, or it has expired, we make the outbound HTTP request and cache the response. Don't forget to add the following to your spin.toml if you haven't already; it's how Spin knows that a component is allowed to access a key-value store.
[[component]]
id = "profile-api"
...
+ key_value_stores = ["default"]
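One bit of plumbing the snippets above take for granted is where the store handle comes from. Here's a minimal sketch, assuming the Spin SDK's Store::open_default and the JsonWebKeySet::get we just wrote; the load_jwks wrapper is hypothetical.

use anyhow::Result;
use spin_sdk::key_value::Store;

fn load_jwks(jwks_url: String) -> Result<JsonWebKeySet> {
    // Open the "default" store we granted access to in spin.toml.
    let store = Store::open_default()?;
    // The cache-aware getter falls back to outbound HTTP on a miss.
    JsonWebKeySet::get(jwks_url, &store)
}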
I also implemented similar caching functions in Go. Since we've already covered the basics of what we're trying to accomplish, I'll gloss over this part for brevity.
func getCachedJwks() (*keyfunc.JWKS, error) {
	// Read the stored expiry; a miss means we never cached the JWKS.
	data, err := key_value.Get(defStore, "jwks_ttl")
	if err != nil {
		return nil, fmt.Errorf("failed to get jwks_ttl from store: %v", err)
	}
	// Decode the little-endian timestamp; treat an expired entry as a miss.
	jwksTTL := int64(binary.LittleEndian.Uint64(data))
	if jwksTTL <= time.Now().UTC().Unix() {
		return nil, fmt.Errorf("jwks is expired")
	}
	data, err = key_value.Get(defStore, "jwks")
	if err != nil {
		return nil, fmt.Errorf("failed to get jwks from store: %v", err)
	}
	jwks, err := keyfunc.NewJSON(data)
	if err != nil {
		return nil, fmt.Errorf("failed to parse jwks: %v", err)
	}
	return jwks, nil
}
func setCachedJwks(jwks *keyfunc.JWKS) {
	data, err := json.Marshal(jwks)
	if err != nil {
		fmt.Println("Failed to marshal jwks: ", err)
		return
	}
	if err := key_value.Set(defStore, "jwks", data); err != nil {
		fmt.Println("Failed to set jwks in store: ", err)
		return
	}
	// Set the TTL for the jwks key: expire 24 hours from now.
	jwksTTL := uint64(time.Now().UTC().Add(24 * time.Hour).Unix())
	jwksData := make([]byte, 8)
	binary.LittleEndian.PutUint64(jwksData, jwksTTL)
	if err := key_value.Set(defStore, "jwks_ttl", jwksData); err != nil {
		fmt.Println("Failed to set jwks_ttl in store: ", err)
	}
}
The only real difference here is that I decided to let the cached key "live" longer: I'm really the only person using this application, so I'm okay with refreshing the cache once every 24 hours. It's up to you to decide how long this cache should live, as long as you understand that the longer the cached key lives, the higher the chance that a user hits an invalid token signature. Luckily my tolerance for errors is pretty high, at least when it's code that I write 😁.
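If you do expect to tune that number, one small refactor (sketched here against the Rust version from earlier; the constant and function names are mine) is to pull the lifetime out into a single place:

use chrono::Utc;

// Hypothetical tunable: longer means fewer outbound requests, but a higher
// chance that a rotated key rejects a freshly issued token.
const JWKS_CACHE_TTL_MINUTES: i64 = 5;

fn cache_expiry_millis() -> i64 {
    (Utc::now() + chrono::Duration::minutes(JWKS_CACHE_TTL_MINUTES)).timestamp_millis()
}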
Configuring the App
Let's explore our second use case: using key-value as a way to configure our application. This may seem odd at first, so let me explain. While Spin has support for dynamic runtime configuration, our engineering team is still working to bring that feature to Fermyon Cloud. As part of the new key-value feature, though, you might notice the new argument to spin deploy named --key-value, where you can supply one or more key-value pairs that are set as part of the deployment process. Until Fermyon Cloud supports dynamic runtime configuration, we'll use this as a stop-gap solution for configuring our app during deployment.
All we need to do to support this is adjust how we read our configuration values. I still want the ability to change values during local development, so if we can't find a configuration value in key-value storage, we'll fall back to how we were already reading configuration.
impl Config {
    fn try_get_value(key: &str, store: &key_value::Store) -> Result<String> {
        // first try to get the value from the key-value store
        store
            .get(key)
            .map(|b| String::from_utf8(b).unwrap())
            // then try to get the value from dynamic runtime configuration
            .or_else(|_| config::get(key))
            // then try to get the value from the environment
            .or_else(|_| std::env::var(key))
            .context(format!(
                "Failed to get configuration value for key '{}'",
                key
            ))
    }
    ...
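For a sense of how this helper gets called, here's a hypothetical constructor; the struct fields are illustrative, though auth_domain and auth_audience do appear in the deploy step later on.

impl Config {
    // Hypothetical constructor: field and key names are illustrative.
    pub fn load(store: &key_value::Store) -> Result<Self> {
        Ok(Self {
            auth_domain: Self::try_get_value("auth_domain", store)?,
            auth_audience: Self::try_get_value("auth_audience", store)?,
        })
    }
}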
Hopefully it's pretty plain to see that we first try to read from key-value, then dynamic runtime configuration, and finally the environment. While I don't currently use environment variables in my application, I thought it would be nice to support them in case I want to switch in the future. The implementation is pretty much the same in our Go component.
func configGetRequired(store key_value.Store, key string) string {
	// First try the key-value store...
	if val, err := key_value.Get(store, key); err == nil {
		return string(val)
	}
	// ...then dynamic runtime configuration...
	if val, err := config.Get(key); err == nil {
		return val
	}
	// ...and finally the environment.
	if val, ok := os.LookupEnv(key); ok {
		return val
	}
	panic(fmt.Sprintf("Missing required config value: %v", key))
}
Deploying the App to Fermyon Cloud
Finally, the moment of truth! We've reached enough functionality that I'm ready to deploy the application to Fermyon Cloud 🎉. We have a few dependencies for our GitHub workflow, so we'll need actions to set up Rust, Go, TinyGo, and Spin. I'll skip over most of the setup actions because they're pretty standard, except for two: setting up TinyGo and setting up Spin.
In a recent merge to TinyGo, someone implemented the reflect package for TinyGo. If you recall from blog post #3, we had to do some funky workarounds to implement JSON marshalling, but with this change we can finally use the built-in encoding/json package. However, this does mean that I've changed my local environment to use a development branch of TinyGo, and our GitHub workflow will need to reflect that dependency upgrade. I was able to get this working in my GitHub workflow by pulling a dev container from the GitHub container registry and copying the source tree to the GitHub action runner (with caching, of course). Once TinyGo releases version 0.28.0, we'll be able to replace this workaround with one of the TinyGo setup actions that already exist. Until that day, let's take a look at the workaround:
...
- name: Cache tinygo
  id: cache-tinygo
  uses: actions/cache@v3
  env:
    cache-name: cache-tinygo
  with:
    path: ${{ github.workspace }}/tinygo
    key: tinygo-dev-${{ env.tinygo-dev-image-tag }}
# download tinygo from dev docker container to use new 'reflect' and 'encoding/json' features
- name: Download tinygo
  if: ${{ steps.cache-tinygo.outputs.cache-hit != 'true' }}
  env:
    IMAGE_TAG: ghcr.io/tinygo-org/tinygo-dev:${{ env.tinygo-dev-image-tag }}
  run: |
    # download tinygo from 'dev' branch docker container
    docker pull ${IMAGE_TAG}
    CONTAINER=$(docker create --platform=linux/amd64 ${IMAGE_TAG})
    docker cp ${CONTAINER}:/tinygo/ $GITHUB_WORKSPACE
    docker rm -v ${CONTAINER}
- name: Setup tinygo
  run: |
    # set the tinygo root path
    echo "TINYGOROOT=$GITHUB_WORKSPACE/tinygo" >> $GITHUB_ENV
    # copy the tinygo binaries into GOPATH/bin
    GOPATH=$(go env GOPATH)
    cp $GITHUB_WORKSPACE/tinygo/build/* ${GOPATH}/bin
    # debug home paths
    ls -al $GITHUB_WORKSPACE
    env
...
Luckily, setting up Spin has been made much easier! Our very own Rajat Jindal has built GitHub actions that make using Spin in GitHub workflows exceptionally easy.
- name: Setup spin
  uses: fermyon/actions/spin/setup@v1
  with:
    version: canary
Just like that, we have Spin available in our GitHub workflow. Let's take a look at how we deploy to Fermyon Cloud, using key-value to configure our application:
- name: Build & Deploy
  run: |
    spin login --token ${{ secrets.FERMYON_CLOUD_TOKEN }}
    spin build
    spin deploy \
      --key-value "db_url=${{ secrets.PGCONNSTR }}" \
      --key-value "auth_domain=${{ vars.AUTH_DOMAIN }}" \
      --key-value "auth_audience=${{ vars.AUTH_AUDIENCE }}"
While there is a Spin action to perform the build and deploy for us, it doesn't (yet) support supplying key-values as parameters, though that work is currently in a PR (see below). No worries: we can still run the deployment from the command line. The only requirement is that we generate a token from the Fermyon Cloud UI and set it as a repository secret for our workflow to use.
Just like that, code-things is live at https://code-things-irrml4js.fermyon.app!
Summary
If you're still reading, then congratulations: you've made it all the way to the end of our journey! I hope you've found this information as useful to read as I've found it to write. I'll continue to add to this application so that you can refer back to it as a way to accomplish portions of whatever it is you're building. Thanks so much for sticking with me on this journey! As part of the Customer Success team at Fermyon, Chris Matteson and I are always here to talk with you about your use case and how we can help you bring it to fruition. Feel free to book time with us on Calendly, reach out via Discord, or submit an issue to the code-things repository on GitHub.