Why I Chose Boring
Building an events platform with the standard tools everybody loves to hate
Last Updated 2025-10-06
Getting my feet wet in full-stack development has been an ... interesting journey. There's a world full of new and shiny frameworks that all claim to reduce friction and streamline development and deployment for the modern developer. My experience, however, has been anything but streamlined.
This post explains why it's better to choose the boring tools and "outdated" patterns that everybody loves to hate. These are the tools that shine in the context of AI-assisted development: they facilitate rapid prototyping through patterns the model can actually predict, and they're terse enough to fit inside the average LLM context window.
Before we dive into it, I need to get the definitions out of the way...
Definitions
the stack /stæk/ noun, informal
A collection of tools chosen with optimism and maintained begrudgingly. If you're still using it at MVP, it's in the stack.
See also: resume-driven development, "I should rewrite this"
the stuck /stʌk/ noun, informal
Tools and patterns that seduce you during prototyping but get thrown out by MVP. Characterized by "one more fix" sessions, infrastructure rabbit holes, and the realization that you're deleting weeks of work.
See also: premature optimization, yak shaving
the boring /ˈbɔːrɪŋ/ noun, informal
Tools and patterns that get out of your way and let you ship. Characterized by extensive documentation, predictable behavior, and the comforting feeling that you're building features instead of debugging infrastructure.
See also: battle-tested, "it just works"
Introducing The Project
Convocare is an events platform for young adult Catholics in Melbourne. Users discover events through a map, calendar, or card grid, and can save favorites for later. Admins manage everything through a dashboard that handles event creation/editing, scheduling, and approvals.
The application is fully responsive across desktop and mobile, and includes features like recurring event support, image uploads, and location mapping. While building the site, I learned just how important choosing the right stack is in the context of AI-assisted development.
"The Stuck": Convocare Edition
Version 0
My first attempt at getting Convocare up was a mess. Although the site technically worked, it was held together with spaghetti code, and its backend was so tightly coupled to the rest of the application that a full rewrite became the easiest option...
Let's take a look at the three main issues with my first attempt.
LLMs And DRY; A Love Story
In the AI-assisted development discourse I see online these days, there is one crucial detail I think is being overlooked:
DRY violations are self-perpetuating when developing with AI.
It's easy to be enticed by the quick prototyping you get when working with AI; however, each DRY violation you accept will ironically create a pattern the LLM will use to violate DRY again. This emergent behaviour is unique to LLMs: a human can always choose to write the next feature more cleanly, but with agentic tools contextualising the patterns of the entire codebase, each mistake self-perpetuates.
In my case, I slipped into bad habits while building this prototype, letting repeated code litter the codebase rather than creating clean abstractions. By the time I tried to steer the AI towards a cleaner way of doing things, it was already too late: the model was clearly recognising the repetition as a pattern to be followed.
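To make this concrete, here's a hypothetical sketch of the failure mode in TypeScript (the endpoints and helpers are illustrative, not Convocare's actual code). Once two near-identical helpers have been accepted, the duplication itself starts to look like the house style:

// Two near-duplicate helpers the model has already seen accepted;
// a third feature will likely get a third copy of this shape.
async function loadEvents() {
  const res = await fetch('/api/events')
  if (!res.ok) throw new Error('Failed to load events')
  return res.json()
}

async function loadFavourites() {
  const res = await fetch('/api/favourites')
  if (!res.ok) throw new Error('Failed to load favourites')
  return res.json()
}

// The abstraction you wanted the model to reach for instead:
async function getJson<T>(url: string): Promise<T> {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`Failed to load ${url}`)
  return res.json() as Promise<T>
}

Once the duplicates outnumber the abstraction, the model treats copying as the idiomatic move.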
This is garbage in, garbage out for the agentic era, and should be avoided for anything bigger than a toy project.
Svelte
My first iteration of Convocare was built with Svelte and SvelteKit using Claude Sonnet 3.5/4. This was a big mistake. I think the lack of quality training data for new and clever frameworks and patterns, like those Svelte offers, means LLMs write poorer code with them.
The code it wrote was, again, functional, but extremely verbose. I couldn't get the model to stick to a single pattern, and even when it did, the patterns weren't idiomatic Svelte.
I should've just used React from the start, as it has tonnes of great training data covering all kinds of problems.
Monolithic Architecture
This one's on me... I was enticed by the idea of simplifying the infrastructure down to a single box, which is great for cost savings on a certain class of project, but certainly overkill for Convocare. Still, I forged ahead, until I got bogged down in infrastructure hell. I spent more time debugging why an Nginx reverse proxy was reading its configuration from an unexpected file than implementing features users would actually want.
Even worse, I actually considered hosting my own map tiles instead of just defaulting to using Mapbox (how likely is it I'll ever leave the free tier?).
AI thrives when you can isolate issues to a small area. A monolithic architecture doing ten things at once is the nightmare scenario for an AI agent: it will struggle to pinpoint where something has broken. Add to that the fact that agentic tools can't SSH into remote servers to debug config issues (probably a good thing, security-wise), and you're back to manual debugging.
Version 1
Coming back to the project with a fresh set of eyes, I was able to recognise the mess I had gotten myself into. That's not to say I'd learned my lesson, though: yet again I was tempted by technologies and approaches that didn't lend themselves to AI-assisted development, and I quickly got bogged down in technical debt and problems!
Rolling My Own API
Realising the mistake of a monolithic architecture, I think I swung the pendulum too far the other way. I decided to roll my own REST API to maximise deployment flexibility, using a combination of Drizzle, Zod, and Better Auth (all excellent tools in the right context).
I got about 80% of the way through writing the API routes before I realised the code was unmaintainable. The issue was consistency: each generated route had slightly different patterns for validation, error handling, auth, and so on.
In hindsight, I think this had a lot to do with the context window. The conversation would compact before a single route was finished, meaning Claude would lose the context of earlier patterns. Unless I manually fed it examples from previous routes (and I rarely did), each new endpoint reinvented the wheel, never thinking to check whether other routes already existed to use as a reference.
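If I were doing it again, I'd pin the pattern down in code before generating any routes. Here's a minimal sketch of the idea, assuming Next.js route handlers with Zod (withValidation and the event schema are hypothetical, not from the actual codebase):

import { z } from 'zod'
import { NextRequest, NextResponse } from 'next/server'

// One wrapper so every route shares the same validation and
// error-handling shape, instead of each route reinventing it.
function withValidation<T extends z.ZodTypeAny>(
  schema: T,
  handler: (data: z.infer<T>) => Promise<NextResponse>
) {
  return async (request: NextRequest) => {
    const parsed = schema.safeParse(await request.json())
    if (!parsed.success) {
      return NextResponse.json({ error: parsed.error.flatten() }, { status: 400 })
    }
    try {
      return await handler(parsed.data)
    } catch {
      return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
    }
  }
}

// Every POST route now has one shape for the model to imitate:
export const POST = withValidation(
  z.object({ title: z.string(), startsAt: z.string() }),
  async (event) => NextResponse.json({ created: event }, { status: 201 })
)

The point isn't this particular wrapper; it's that once a single boring shape exists in context, each generated route reinforces it instead of drifting.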
Supabase CLI And Declarative Schema
Supabase is legitimately incredible, but for now, Claude can't leverage it effectively. I had two major issues when trying to get Claude to work with Supabase: use of the CLI, and working with migrations and declarative schema.
Normally, my go-to fix when Claude struggles with a CLI tool is either to find the relevant documentation and provide it as a file to be read, or to paste the entire documentation into the chat window. That approach had worked for every tool I'd had issues with, until the Supabase CLI.
Claude would provide incorrect information on how to use the CLI, edit migration files that had already been run against the database, and repeatedly misinterpret how to work with declarative schema. Despite being given the context it needed to solve the problem, it was still unable to find a solution.
There are two reasons I think this occurred:
- The Supabase CLI is a local-first tool, but the AI kept treating it as if it were applying changes directly to the remote database.
- The declarative schema approach I was excited to use (coming from Terraform) isn't feature complete. The tooling encourages you to think in terms of migrations, but the declarative layer creates confusion about what's already applied versus what still needs to change.
In the end, I did most of the Supabase schema work manually through the dashboard and SQL editor. Not ideal, but not the end of the world.
"The Boring": Convocare Edition
After two failed attempts, here's the stack that Convocare was actually built with:
- Frontend:
- Next.js, React, TypeScript
- Tailwind CSS
- Vercel (hosting & deployment)
- Backend/Data:
- Supabase (Postgres + Auth + Storage)
- Google OAuth (via Supabase)
- Next.js API routes (middleware layer)
- Sharp (image processing)
- External Services:
- Mapbox (map tiles)
- Route 53 (DNS)
- Validation & Type Safety:
- TypeScript (compile-time checks)
- Zod (runtime validation)
- Helpful AI Tools:
- v0 (frontend component designer)
- Claude Code (Agentic Tool)
- Sonnet 4.5 (Model)
Benefits Of The Boring
"The Developer Experience"
Before working on Convocare, I thought "the developer experience" was only ever used as the butt of a joke about frontend devs who were scared of touching hardware. I'm ashamed to report, however, that the developer experience is REAL.
The Next.js w/ Vercel and Supabase stack is incredible. The following three commands essentially allowed me to forget about infrastructure and just work on implementing features:
vercel --prod
npx supabase db push
npx tsx supabase-seed.ts

Both Vercel and Supabase have sane defaults, hard billing limits, and reasonable free tiers, which really give peace of mind for small projects. Moreover, both are flexible in terms of deployment, so in the unlikely case Convocare gets to a point where cost becomes a real issue, there are options for where to run things without too much rigmarole.
In particular, using Supabase's RLS, together with views rather than joins for public queries, makes it easier to separate concerns and creates a mental model of "Database stuff? That's handled by the database."
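As a rough sketch of what that looks like from the application side (the public_events view and its columns are hypothetical stand-ins for the real schema):

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// The app reads from a view, so the join logic and the definition of
// "public data" live in the database rather than in application code.
export async function fetchPublicEvents() {
  const { data, error } = await supabase
    .from('public_events')
    .select('id, title, starts_at, location')
    .order('starts_at', { ascending: true })
  if (error) throw error
  return data
}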
LLM Friendly API Routes
Simplifying your API routes lets you leverage AI debugging far more effectively, thanks to the unidirectional, linear patterns that AI seems to thrive on.
In my case, I used URL parameters instead of React state management (think ?filter=upcoming&sort=date). This meant issues could be identified quickly, as the LLM doesn't need to contextualise several processes at once, and doesn't need to mock anything before it can start debugging.
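For illustration, here's a sketch of what such a route might look like with Next.js and Zod (the query schema and the getEvents helper are assumptions, not the real implementation):

import { z } from 'zod'
import { NextRequest, NextResponse } from 'next/server'

// Assumed data-access helper; the real one would query the database.
declare function getEvents(q: {
  filter: 'upcoming' | 'past'
  sort: 'date' | 'title'
}): Promise<unknown[]>

const querySchema = z.object({
  filter: z.enum(['upcoming', 'past']).default('upcoming'),
  sort: z.enum(['date', 'title']).default('date'),
})

// One linear pass: read the URL, validate, fetch, respond.
export async function GET(request: NextRequest) {
  const parsed = querySchema.safeParse(
    Object.fromEntries(request.nextUrl.searchParams)
  )
  if (!parsed.success) {
    return NextResponse.json({ error: parsed.error.flatten() }, { status: 400 })
  }
  return NextResponse.json(await getEvents(parsed.data))
}

Everything the model needs to reason about the request is visible in one file, flowing in one direction.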
In short:
- A route an AI can debug easily: fetch, validate, display
- An AI's nightmare route: query, cache, invalidate, retry, error, retry
Context Is King
Boring tools are terse and clear. The more tersely you can convey information to the AI while keeping it intelligible, the more context you have left for actual solutions to the problem you're working on.
A great example of this is Tailwind, which plays extremely well with Sonnet. Because most of the CSS lives in the context of your HTML, the LLM doesn't need to track file references and style state across your codebase, so the changes it makes actually work.
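As a trivial example (a made-up card component, not Convocare's markup), the model can change the structure and the styling in a single edit, with no separate stylesheet to keep in sync:

// EventCard.tsx: markup and styling travel together.
export function EventCard({ title }: { title: string }) {
  return (
    <div className="rounded-lg border p-4 shadow-sm hover:shadow-md">
      <h3 className="text-lg font-semibold">{title}</h3>
    </div>
  )
}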
Conclusion
Let's have another look at those definitions. In short:
- the stack: tools used to reach MVP
- the stuck: tools you ditch by MVP
- the boring: tools that make reaching MVP a breeze
By committing to the boring, you'll be in a much better position to leverage AI-assisted development and deliver a functional platform in a fraction of the time it would take if you chased the new and shiny, like I first did...