The case for compiling your web app — HTML, CSS, API, and all — into a single executable with zero runtime dependencies.
Written by Alinus Dumitrana
Feb 10, 2026 • 3 min read
There's a trend in modern web development toward more moving parts. A typical deployment might involve a Node.js API server, a separate React build, an Nginx config to serve static files, a process manager, and a Docker container to bundle it all together. Each piece works fine individually. Together, they create a surface area for things to go wrong.
I've been experimenting with the opposite approach: compile everything — HTML templates, CSS, JavaScript, the API server — into a single binary. Drop it on a machine, run it, done.
For Snippy, my snippet manager, the entire application is one Go binary. The web UI, the API, the static assets, and the SQLite database driver are all compiled in. You download a ~10MB file and run it. No npm install, no config files, no runtime dependencies.
For this website (Atlas), it's a Rust binary. Templates are compiled by Askama at build time, content is loaded from Markdown files at startup, and the CSS is pre-built and included in the Docker image. The running application is a single process serving everything.
Both languages compile to native binaries with no runtime. But they approach embedding differently:
Go has embed.FS in the standard library since Go 1.16:
//go:embed templates/* static/*
var content embed.FS
Everything in those directories becomes part of the binary. At runtime, you read from content the same way you'd read from the filesystem. The API is identical — your code doesn't know or care whether files are embedded or on disk.
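As a minimal sketch of that identical API: the function below walks an arbitrary fs.FS, so it works unchanged whether it's handed an embed.FS or os.DirFS. (This example embeds *.go as a stand-in, since a real templates/ or static/ directory isn't present here.)

```go
package main

import (
	"embed"
	"fmt"
	"io/fs"
)

// Embed every .go file in this directory; a real app would embed
// templates/* and static/* instead.
//
//go:embed *.go
var content embed.FS

// listFiles walks any fs.FS and returns the regular files it contains.
// The same code works against embed.FS or os.DirFS — callers can't tell
// whether the files are compiled in or on disk.
func listFiles(fsys fs.FS) []string {
	var files []string
	fs.WalkDir(fsys, ".", func(path string, d fs.DirEntry, err error) error {
		if err == nil && !d.IsDir() {
			files = append(files, path)
		}
		return nil
	})
	return files
}

func main() {
	for _, name := range listFiles(content) {
		data, _ := content.ReadFile(name)
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}
```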
Rust with Askama takes it further — templates aren't just embedded, they're compiled into Rust code. A typo in a template variable is a compile error, not a runtime 500.
Deployment is trivial. Copy one file, run it. No dependency conflicts, no missing runtime versions, no "works on my machine." CI builds a binary, pushes it somewhere, the server pulls and runs it.
Fewer failure modes. There's no web server to misconfigure, no static file path to get wrong, no process manager to crash separately from your app. The binary either starts or it doesn't.
Fast startup. A Go or Rust binary starts in milliseconds. There's no interpreter to load, no JIT to warm up, no node_modules to parse. This matters for containers that scale to zero, for CLI tools that run frequently, and for development iteration speed.
Reproducibility. The binary you test locally is the same binary that runs in production. Not "the same code with different dependencies installed" — the same bytes.
It's not free:
Compile times. Rust in particular can be slow to compile. My site takes a few minutes for a release build. Go is much faster, but still slower than "save and refresh" with an interpreted language.
Development workflow. You lose the instant feedback of hot module replacement. For this site, I use cargo watch to recompile on changes, and content reloads on each request in development mode. It's good enough, but not as fast as Vite.
Not everything should be embedded. User-uploaded content, frequently changing configuration, and large media files don't belong in a binary. The pattern works best when you can draw a clear line between "application code" (embedded) and "user data" (external).
This approach works well for personal projects, small internal tools, CLI-style utilities, and single-purpose services — anywhere one or two people own both the code and the deployment.
It doesn't make sense for large applications with many developers, where the compile-time cost outweighs the deployment simplicity. A Next.js app with 200 routes and a team of 15 developers shouldn't be fighting with compile times to save a few lines of Dockerfile.
The single-binary approach isn't about dogma. It's about asking: how simple can this deployment be? If the answer is "one file," and the tradeoffs are acceptable, that's fewer things to break and fewer things to maintain. For personal projects and small tools, that tradeoff almost always wins.