Building the New Tribot Website
If you’ve been around Tribot for a while, you probably remember a post exactly like this one on our forums. A few years ago, we created a new website for Tribot to replace the one from 2013. And now we’ve replaced it again.
To be fair, the new website isn’t nearly as dramatic a change as the 2023 replacement. But I wanted to share some of the process behind the decision to rebuild the site again and how we approached it.
The Rise of AI
Love it or hate it, it’s no secret that AI is becoming an essential development tool. As models improve, tooling improves, and we as developers improve our ability to use AI without producing slop, the economics of technical decisions are changing.
I mostly use a combination of Claude Code (Opus models) and JetBrains AI Assistant.
Rust on the backend
I’ve always liked the idea of developing in Rust. It has modern features, a great build system, compiles natively, and has no garbage collector, making it incredibly flexible and performant.
But there are always the typical complaints:
- Slow build times
- Rough IDE support
- Unintuitive syntax rules
And those are legitimate issues. It doesn’t matter how good your backend could be if it never compiles because you can’t figure out why the borrow checker is complaining about your code.
But AI makes up for these issues quite well. Nowadays, it’s pretty rare for AI to finish with code that doesn’t at least compile, and since Rust tends to heavily favor compiler errors over runtime errors, the feedback loop to AI is solid. I think Rust is a huge winner in the AI world; if you’re not manually writing it, why choose a language that prioritizes developer comfort over performance and safety?
Tribot’s backend code is now all Rust. It handles all the heavy lifting for the website, all the server communication from the Tribot client, and runs all the background jobs. By switching to Rust, we were able to cut costs on the backend significantly and improve stability.
The Rust Web Tech Stack
- Axum - Async HTTP request handling and routing
- Tokio - The async runtime
- Serde - For JSON serialization
- SeaORM - Type-safe ORM for our postgres database
These libraries made development a breeze. Rust isn’t the most straightforward language, but the surrounding community is excellent, and these libraries give the backend code pretty much exactly the structure you’d expect from a modern web app.
One pattern I like in our backend is how we handle errors. Every API error has two messages: a friendly_message that gets sent to the client (“Something went wrong with your order”) and a source_err_msg that gets written to server logs with the full context (the actual SQL constraint violation, the upstream API timeout, whatever). Rust’s type system enforces this separation at compile time: you literally can’t return an error without deciding what the user should see versus what stays internal. It’s a small thing, but it means we never accidentally leak database errors to the frontend, which is the kind of mistake that’s easy to make in more dynamic languages and tedious to enforce through code review.
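A minimal sketch of that two-message pattern. The names here (ApiError, client_body, the constructor) are illustrative assumptions, not Tribot’s actual types; the point is that constructing the error forces both messages to exist:

```rust
use std::fmt;

// Hypothetical sketch of the two-message error pattern. Field names
// mirror the ones described in the text; everything else is made up.
#[derive(Debug)]
struct ApiError {
    friendly_message: String, // what the client is allowed to see
    source_err_msg: String,   // full internal context, logs only
}

impl ApiError {
    // Constructing an ApiError forces the caller to decide both messages,
    // so an internal error can never reach the client by accident.
    fn new(friendly: impl Into<String>, source: impl Into<String>) -> Self {
        Self {
            friendly_message: friendly.into(),
            source_err_msg: source.into(),
        }
    }

    // Only this method is ever rendered into the HTTP response body.
    fn client_body(&self) -> &str {
        &self.friendly_message
    }
}

impl fmt::Display for ApiError {
    // Logging uses Display, which includes the internal context.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} (source: {})", self.friendly_message, self.source_err_msg)
    }
}

fn main() {
    let err = ApiError::new(
        "Something went wrong with your order",
        "unique constraint violation on orders.id",
    );
    println!("client sees: {}", err.client_body());
    println!("log line: {err}");
}
```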
A new frontend
AI is shockingly good at frontend development. Yes, I know, some of the website still has that signature “made by AI” touch (purple, gradients, etc.). However, AI is improving, and so is my ability to inject design decisions, so I plan on doing another pass to make the website less generic and more “Tribot”.
But even so, you can’t deny that this is an improvement. Dark mode, better controls, better SEO, and a solid base that actually lets me add features.
Let’s talk through some of the decisions I made for our frontend.
SvelteKit
SvelteKit is probably the first frontend framework that has actually clicked for me (ironic under a header about AI use). It’s straightforward and doesn’t try to hide the complexity of the web platform.
I combined this with Tailwind CSS for a frontend tech stack that I could easily learn and jump in and out of.
It has all the features of our old Next.js setup, so I didn’t have to radically change the way we go about page loading, server side code, etc.
Serverless Hosting on Cloudflare
Over the past few years, Cloudflare has released some unique platform offerings. They can natively host SvelteKit apps, supporting all the features we need.
By running the frontend code on Cloudflare, our site loads faster and is overall more reliable. And since the actual backend is in Rust on a server doing the heavy lifting, having the frontend on Cloudflare isn’t racking up the extra cost that typically comes from serverless architectures.
Kubernetes Out
Hot take: I still like k8s. It provides out-of-the-box zero-downtime deployments, rolling updates, secrets/config management, and a straightforward way to deploy multiple apps.
If you have a solid technical baseline and aren’t scared off by the k8s complexity horror stories, you might find that k8s is actually a time saver for small but critical projects.
However, the benefits I got from k8s were largely wiped out by some other decisions:
- Agentic programming is great for setting up automatic zero-downtime deployments on regular servers
- The Rust backend is less prone to crashes and to needing emergency scaling
- The frontend has been moved to a serverless architecture
Additionally, there are real benefits to running apps on a single machine:
- Easier filesystem access (sqlite, rocksdb, etc for non-essential data)
- In-memory caching, which is faster and has fewer points of failure than redis/memcached/similar
- Single source of logs and metrics, no need to aggregate across multiple services
To give a concrete example of the caching point: our store’s product data refreshes every 60 seconds in a background task. It grabs everything from postgres, pre-computes things like minimum prices and collection memberships, and holds it all in memory behind a read-write lock so requests never block each other. That’s one struct doing the job of what used to be a Redis instance plus cache invalidation logic. No network hops, no serialization overhead, no “is the cache stale?” bugs. It just works, because there’s only one machine to worry about. As a result, the store is incredibly fast, even when there are browser and frontend cache misses.
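A stripped-down sketch of that refresh loop. The real task uses Tokio and queries Postgres; this version uses std threads and fabricated data so it stays self-contained, and every name in it is illustrative:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Hypothetical snapshot of pre-computed store data held in memory.
#[derive(Clone, Default)]
struct StoreSnapshot {
    // Pre-computed minimum price per product, in cents.
    min_price_by_product: HashMap<String, u64>,
}

// Background refresher: rebuilds the snapshot periodically and swaps it in.
fn start_refresher(cache: Arc<RwLock<StoreSnapshot>>) {
    thread::spawn(move || loop {
        // In the real system this would query the database and pre-compute
        // prices and collection memberships; here we fabricate one entry.
        let fresh = StoreSnapshot {
            min_price_by_product: HashMap::from([("vip".to_string(), 750)]),
        };
        // The write lock is held only for the pointer-sized swap, so
        // concurrent readers never block each other and barely block at all.
        *cache.write().unwrap() = fresh;
        thread::sleep(Duration::from_secs(60));
    });
}

fn main() {
    let cache = Arc::new(RwLock::new(StoreSnapshot::default()));
    start_refresher(cache.clone());
    thread::sleep(Duration::from_millis(50)); // let the first refresh land
    let snapshot = cache.read().unwrap().clone();
    println!("{:?}", snapshot.min_price_by_product.get("vip")); // prints Some(750)
}
```

Request handlers only ever take the read lock, which is what keeps the hot path free of network hops and serialization.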
With AI available to help make up for the loss of k8s and take advantage of the single-machine benefits, it was an easy decision.
Auto deployments
The biggest thing I missed from k8s was zero-downtime deployments. With k8s, you push a new image and it handles rolling updates for you. Without it, you need to build that yourself.
Our deploy system is a Rust CLI (cargo xtask deploy) that orchestrates the entire pipeline from my local machine (or a GitHub Action, if needed):
- Build a Docker image locally
- Transfer it to the server over SSH
- Run a blue-green deployment on the remote
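The orchestration side of those three steps can be sketched in a few lines. This is an illustrative reduction, not the actual xtask code: the host, image, and script names are made up, and the real CLI adds retries and logging:

```rust
use std::process::Command;

// Run a command and turn a non-zero exit into an error.
fn run(cmd: &mut Command) -> Result<(), String> {
    let status = cmd.status().map_err(|e| e.to_string())?;
    if status.success() {
        Ok(())
    } else {
        Err(format!("command failed: {status}"))
    }
}

// Hypothetical sketch of the deploy pipeline described above.
fn deploy(host: &str, image: &str) -> Result<(), String> {
    // 1. Build the Docker image locally.
    run(Command::new("docker").args(["build", "-t", image, "."]))?;
    // 2. Stream the image to the server over SSH (no registry involved).
    run(Command::new("sh")
        .arg("-c")
        .arg(format!("docker save {image} | ssh {host} docker load")))?;
    // 3. Kick off the blue-green script on the remote.
    run(Command::new("ssh").args([host, "./blue-green-deploy.sh", image]))
}
```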
The blue-green part is a bash script on the server. The idea is simple: you always have two “slots” (blue and green). One is live, one is idle. When you deploy, you spin up the new version in the idle slot, health-check it, then swap the reverse proxy to point at the new slot. If something goes wrong, the old slot is still sitting there untouched - rollback is just switching the proxy back.
Caddy handles the reverse proxy. It’s refreshingly simple compared to nginx. The config is a template with an {{ACTIVE_UPSTREAM}} placeholder that gets replaced with either app-blue or app-green during deployment. After swapping the config, we reload Caddy (which it handles gracefully, with no dropped connections), wait for a 30-second drain period, then shut down the old slot.
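As a sketch, the template might look something like this. The domain and port are made up; only the placeholder mechanism comes from the deploy flow described above:

```caddyfile
# {{ACTIVE_UPSTREAM}} is rewritten to app-blue or app-green at deploy
# time, then Caddy is gracefully reloaded.
example.com {
    reverse_proxy {{ACTIVE_UPSTREAM}}:8080
}
```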
The whole thing is roughly 400 lines of bash and a Rust wrapper that handles the SSH session, file transfer, and retry logic. It’s not as polished as a k8s rolling update, but it’s ours, it’s simple, and it works.
The blog you’re reading right now
I want to call out how the content system works on this site because it’s one of my favorite parts of the rebuild, and it’s a good example of how modern frontend tooling makes things easy that used to be complicated.
Every blog post and documentation page is a markdown file sitting in the git repo. This post, for example, is just a .md file in src/routes/blog/_content/2026/02/. It has some YAML metadata at the top (title, date, category, tags) and the rest is standard markdown.
At build time, SvelteKit picks up all the markdown files using Vite’s import.meta.glob(), which statically embeds them into the app. There’s no database, no CMS, no runtime file reading: the entire blog is baked into the build. A library called MDsveX compiles the markdown to HTML, and gray-matter parses the metadata.
From there, everything else is derived automatically. Reading time? Count the words, divide by 200. Categories and tag counts? Aggregate the metadata. Related posts? Same category, different slug. The sidebar navigation for the docs section is generated from the directory structure itself - create a folder, drop a markdown file in it, and it shows up in the sidebar.
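Those derivations are all one-liners. The reading-time rule, for example, sketched here in Rust purely for illustration (the real logic lives in the SvelteKit frontend):

```rust
// Illustrative reading-time estimate: word count divided by 200 words
// per minute, rounded up, with a floor of one minute.
fn reading_time_minutes(text: &str) -> u32 {
    let words = text.split_whitespace().count() as u32;
    ((words + 199) / 200).max(1)
}

fn main() {
    println!("{}", reading_time_minutes("a very short post")); // prints 1
    println!("{}", reading_time_minutes(&"word ".repeat(450))); // prints 3
}
```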
One small but satisfying detail: during development, editing a markdown file triggers a live reload in the browser. This doesn’t happen automatically, because import.meta.glob() runs at build time, so Vite doesn’t know to watch those files. A tiny custom Vite plugin watches the content directories and sends a reload signal when anything changes. It’s maybe 15 lines of code, but it makes writing content feel instant.
Compare this to the old setup where documentation was scattered across forum threads. Now it’s version-controlled, searchable, and organized. If someone finds a typo, it’s a one-line fix in a markdown file. If we need to restructure the docs, we move files around and the navigation updates itself.
Wrapping Up
The new site isn’t perfect. There are still pages that scream “AI generated this layout” and features I haven’t gotten to yet. But the foundation is solid. Adding a new page, a new blog post, or a new API endpoint is straightforward and that’s a huge win for us.
The real takeaway from this project, for me, is how much the development landscape has changed in just a couple of years. Decisions that used to be constrained by team size and expertise (Rust backend, custom deployment, file-based content) are now practical for a small team. AI doesn’t write everything for you, nor does it compensate for a lack of understanding, but it removes enough friction to make ambitious choices viable.
If you have thoughts on the new site, things that are broken, or features you want to see, come find us in Discord. We’re always listening.
Thanks for reading.