Porting InstaBeach to Rust: A 10-Year-Old Play App Goes Serverless
The Japanese yen’s exchange rate took a nosedive this year, and my monthly cloud costs for running a tiny beach recommendation site ballooned from “I can ignore this” to “this is getting silly”. instabeach.io[1], a site I built a decade ago to help my wife and me find beach destinations based on weather and travel dates, was running on a Kubernetes cluster that started to cost more and more per month. Time for a change.
Overview
- The origin story
- The deployment journey
- Why Rust and Lambda?
- Engineering decisions and tradeoffs
- Results and reflections
The origin story
Before having kids and the pandemic, my wife and I loved going on beach holidays every year. The problem was finding the perfect beach for us given a specific timeframe, our preferred temperature range, and some rough understanding of wet and dry seasons. Searching through travel blogs and weather sites was tedious, so I did what any reasonable software engineer would do: I spent far more time building a solution than I would have spent doing the manual research.
I wrote InstaBeach in Scala using the Play Framework because I genuinely love the language. Play made it easy to build a clean MVC application with reasonable performance. The site worked well, and I was happy with it.
The deployment journey
The app’s deployment history mirrors the evolution of cloud platforms and my own curiosity:
- Heroku (2015-2018): Initially deployed to Heroku because it was free and dead simple. Git push, and you’re done. This was perfect for a side project that maybe five people used (mostly me).
- Docker Swarm on AWS (2018-2020): Around when Heroku killed their free tier, I moved to a 3-node Docker Swarm cluster on AWS. This didn’t save me any money, but I got more compute and control for the same-ish cost, though it required more maintenance. Still, it was cheap enough that I didn’t think about it much, especially since I was already running several other side projects on it at the time.
- GCP Kubernetes (2020-2025): I wanted to learn k8s properly, so I migrated to GKE, which was cheap-ish at the time. It also meant I could stop thinking about how to maintain my Docker Swarm cluster (which, I found out over time, had its share of issues…).
Then the ¥ 📉
Then 2022 happened. The Japanese yen went from around ¥110 to the dollar to ¥150+[2]. For a side project that generates zero revenue and gets minimal traffic, this got absurd. I waited a while to see if it would recover, but today it still sits at around ¥147.
On top of that, I wasn’t maintaining the k8s cluster well anyway. Security patches were piling up, my Scala dependencies were outdated, and the whole thing felt like technical debt that was actively costing me money. Plus, this year I finally allowed myself to upgrade from my iPhone X, since it was no longer getting OS updates or proper app updates, so I decided to rethink the architecture and try to save some $$.
Why Rust and Lambda?
I wanted to move to AWS Lambda for one simple reason: the generous free tier[3]. Lambda gives you 1 million free requests per month and 400,000 GB-seconds of compute time. For a low-traffic site like InstaBeach, this meant potentially zero ongoing costs.
But Lambda has a problem: cold starts. When a function hasn’t been invoked recently, AWS has to spin up a new environment, load your code, and initialise the runtime. This can add hundreds of milliseconds or even seconds to your response time[4].
Rust solves this. Compiled Rust binaries have the fastest cold start times of any Lambda runtime[5], typically 20-50ms. Combined with excellent runtime performance, Rust lets you maximise the free tier while keeping response times snappy. Plus, it doesn’t hurt that I’ve been a fan for years, and Rust is web.
Engineering decisions and tradeoffs
Porting InstaBeach to Rust wasn’t just a matter of translation. I made several architectural decisions to optimise for Lambda’s constraints and the free tier.
Cargo Lambda: one web app, not many functions
Instead of splitting the application into multiple Lambda functions (one per endpoint), I used cargo lambda[6] to deploy a single Axum web application. This lets me write normal web application code with routing, middleware, and all the conveniences of a web framework, while still deploying to Lambda.
Here’s how the main entry point looks:
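Something along these lines (a sketch of the usual cargo lambda + Axum wiring; the routes and handlers here are placeholders rather than the real InstaBeach endpoints):

```rust
use axum::{routing::get, Router};
use lambda_http::{run, Error};

// Build the Axum router with all endpoints configured.
// (Placeholder routes; the real app wires up the beach search pages.)
fn create_app() -> Router {
    Router::new()
        .route("/", get(index))
        .route("/health", get(|| async { "ok" }))
}

async fn index() -> &'static str {
    "InstaBeach"
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // lambda_http adapts the Tower service (the Axum router) to the Lambda
    // runtime, so the same router also runs locally under `cargo lambda watch`.
    run(create_app()).await
}
```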
The `create_app()` function returns an Axum router with all the endpoints configured. Cargo Lambda handles the integration with the Lambda runtime, and I can test the whole thing locally with `cargo lambda watch` on port 9000.
No database: CSV-driven codegen
Databases add complexity and cost. Even serverless options like DynamoDB have charges for read/write capacity. For InstaBeach, the data is essentially static: beach destinations, their locations, temperature ranges, and wet/dry seasons.
Instead of a database, I use a CSV file as the source of truth. At build time, a code generation script reads the CSV and generates Rust structs with all the destination data embedded directly into the binary. This gives me an in-memory “database” with zero runtime I/O and zero ongoing costs.
From a CSV like this (the columns shown below are illustrative, not the exact schema):
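```csv
name,country,lat,lon,avg_high_c,avg_low_c,dry_months
Boracay,Philippines,11.97,121.92,31,25,"12,1,2,3,4,5"
```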
Here’s a snippet from the generated destination data:
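In spirit it looks like this, though the actual generated struct, field names, and values differ:

```rust
// Generated at build time from the destinations CSV -- do not edit by hand.
// (Illustrative sketch; the real generated code in the repo looks different.)

/// One destination, i.e. one row of the CSV.
#[derive(Debug, Clone, Copy)]
pub struct Destination {
    pub name: &'static str,
    pub country: &'static str,
    pub lat: f64,
    pub lon: f64,
    pub avg_high_c: f32,
    pub avg_low_c: f32,
    /// Months (1-12) that fall in the dry season.
    pub dry_months: &'static [u8],
}

/// The whole dataset lives in a static slice baked into the binary.
pub static DESTINATIONS: &[Destination] = &[
    Destination {
        name: "Boracay",
        country: "Philippines",
        lat: 11.97,
        lon: 121.92,
        avg_high_c: 31.0,
        avg_low_c: 25.0,
        dry_months: &[12, 1, 2, 3, 4, 5],
    },
    // ...one entry per CSV row
];
```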
The entire dataset compiles into the binary, and lookups are just array iterations or hash map lookups in memory. Fast, simple, and free.
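Continuing the sketch above, a lookup is nothing more than a filter over that slice (again, not the real query logic):

```rust
/// Find destinations that are in their dry season for the given month and at
/// least as warm as requested. (Built on the illustrative structs above.)
pub fn matching_destinations(month: u8, min_high_c: f32) -> Vec<&'static Destination> {
    DESTINATIONS
        .iter()
        .filter(|d| d.dry_months.contains(&month) && d.avg_high_c >= min_high_c)
        .collect()
}
```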
rust-embed: no S3, no problems
Most serverless applications store static assets (CSS, JavaScript, images) in S3 and serve them through some CDN. This makes sense for large applications, but adds complexity and cost.
Instead, I use rust-embed to embed all static assets directly into the binary at compile time. Here’s how the asset handler looks:
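Roughly like this (a sketch: the folder name, the `/static/` route prefix, and the error handling are assumptions):

```rust
use axum::{
    http::{header, StatusCode, Uri},
    response::{IntoResponse, Response},
};
use rust_embed::RustEmbed;

// Embed everything under ./static into the binary at compile time.
// In debug builds rust-embed reads from the filesystem instead, which keeps
// local development pleasant.
#[derive(RustEmbed)]
#[folder = "static/"]
struct Assets;

// Serve an embedded asset, or 404 if it doesn't exist.
async fn static_handler(uri: Uri) -> Response {
    let path = uri.path().trim_start_matches("/static/");

    match Assets::get(path) {
        Some(file) => {
            let mime = mime_guess::from_path(path).first_or_octet_stream();
            ([(header::CONTENT_TYPE, mime.to_string())], file.data).into_response()
        }
        None => (StatusCode::NOT_FOUND, "not found").into_response(),
    }
}
```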
In release builds, `Assets::get()` returns the file contents from the embedded binary. In development, it reads from the filesystem. This saves S3 costs and reduces the number of moving parts in production.
The tradeoff is a larger binary size. The InstaBeach binary is about 93MB, which exceeds Lambda’s 50MB direct upload limit. But that’s fine—we can upload via S3 during deployment, and the binary still fits well within Lambda’s 250MB limit.
Heavy caching and rate limiting at the CDN
To minimise Lambda invocations and prevent abuse, I rely heavily on Cloudflare caching and rate limiting (also free). Static assets are cached for a year (they’re immutable anyway), and API responses are cached for 5 minutes.
This means most requests never hit Lambda at all. They’re served directly from Cloudflare’s edge locations, which is both faster for users and cheaper for me.
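One way to make that work is to have the origin send cache-friendly headers for the embedded assets. This is a sketch using tower-http; the same effect could just as well come from Cloudflare cache rules instead:

```rust
use axum::{http::{header, HeaderValue}, Router};
use tower_http::set_header::SetResponseHeaderLayer;

// Illustrative only: mark embedded assets as cacheable for a year so the CDN
// can keep serving them from the edge without ever hitting Lambda.
fn with_asset_caching(assets: Router) -> Router {
    assets.layer(SetResponseHeaderLayer::overriding(
        header::CACHE_CONTROL,
        HeaderValue::from_static("public, max-age=31536000, immutable"),
    ))
}
```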
Terraform and terraform-local for deployment
Infrastructure is managed with Terraform. The Terraform configuration defines the Lambda function, IAM roles, Cloudflare configuration, and DNS records.
For local testing, I use tflocal with LocalStack. This lets me deploy and test the entire infrastructure locally without touching AWS. It’s brilliant for fast iteration:
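The day-to-day loop is just the standard Terraform workflow pointed at LocalStack through the wrapper, something like:

```sh
tflocal init
tflocal apply
```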
The Terraform module for the Lambda function handles building the Rust binary, uploading to S3, and creating the function using aws_lambda_function:
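A trimmed-down sketch of the shape of it (resource names, paths, and sizing are illustrative, and the step that actually runs `cargo lambda build` is omitted):

```hcl
# Sketch: upload the cargo-lambda-built zip to S3, then point the function at it.
# The IAM role and the artifact bucket are defined elsewhere in the module.
resource "aws_s3_object" "lambda_zip" {
  bucket = var.artifact_bucket
  key    = "instabeach/bootstrap.zip"
  source = "${path.module}/../target/lambda/instabeach/bootstrap.zip"
  etag   = filemd5("${path.module}/../target/lambda/instabeach/bootstrap.zip")
}

resource "aws_lambda_function" "instabeach" {
  function_name = "instabeach"
  role          = aws_iam_role.lambda.arn

  # Rust ships as a custom-runtime `bootstrap` binary.
  runtime       = "provided.al2023"
  handler       = "bootstrap"
  architectures = ["arm64"]

  s3_bucket        = aws_s3_object.lambda_zip.bucket
  s3_key           = aws_s3_object.lambda_zip.key
  source_code_hash = filebase64sha256("${path.module}/../target/lambda/instabeach/bootstrap.zip")

  memory_size = 128
  timeout     = 10
}
```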
Nothing crazy, and most of the code is visible at miniatu-rs.
The compilation time tradeoff
The big disadvantage of this approach is compilation time. A release build of the Rust binary takes over 30 minutes on my machine[7]. This is painful during development, but the tradeoff is worth it:
- Zero ongoing costs (within the free tier)
- Fast cold starts (<50 ms)
- No database or S3 to manage
- Simple deployment and rollback
During development, I use `cargo lambda watch` for fast iteration with debug builds, which compile in under a minute. Release builds are only for testing in LocalStack and actual production deployments.
Results and reflections
The migration was successful. InstaBeach now runs entirely within AWS Lambda’s free tier, costing me nothing per month. Cold starts are fast enough that I highly doubt anyone can notice, and it feels snappier to me™.
The Rust ecosystem for web development has matured significantly. Axum is a joy to work with, cargo lambda makes Lambda deployment trivial, and crates like rust-embed and Askama (for compile-time templates) provide excellent ergonomics.
Would I recommend this approach for every application? Probably not. If you have a database, frequent deployments, or need sub-X ms cold starts, you might need a different architecture. But for small, static-ish applications with low traffic, the combination of Rust and Lambda’s free tier is hard to beat.
And my monthly cloud bill? Zero yen, zero pounds, zero dollars. Just the way I like it.
Footnotes
1. It’s really just a very simple calculator for finding beach locations.
2. And yes, I know that other countries have had it worse. But just look at this USD to JPY Exchange Rate History.
3. AWS says Lambda will always have a free tier (“This always free service is on the Free and Paid plan.”): AWS Lambda.
4. Quite a lot goes into AWS Lambda cold starts, but a high-level understanding of the lifecycle can be found at Understanding AWS Lambda Cold Starts.
5. Rust is number 1 according to AWS Lambda Cold Start Performance Comparison.
6. I already used it and loved it from a previous project, miniatu-rs.
7. There might be multiple reasons why the compile is slow, including: me cross-compiling for ARM on my old x86 Mac, using rust-embed, and cargo lambda’s default release build settings like `codegen-units = 1`.