llms like starting over

2025-06-07

I used v0 to stand up a bespoke photo uploading site for the guests of my friend’s wedding.

After 17 messages, I had deployed a full-stack app that matched the aesthetic and colors of the wedding. Guests could easily upload photos to @vercel/blob, and my friend could see and download everything.

I just got back from the wedding, and it all went off without a hitch. What a time to be alive!

But in higher-stakes apps, I find myself disagreeing with LLMs often – mostly when their contributions seem to miss the theory of the system. Saying the right incantations helps, but it’s still a daily ritual of explaining the gaps in a model’s demonstrated understanding of the system.

I work closely with a very successful vibe coder – someone who ships with insane feature velocity. The code is a bit ad hoc and feature implementations are more disparate than I’d choose, but it works.

I realized that this is actually the key to success in coding with LLMs…


Context windows, statelessness, and token limits conspire against LLMs’ effectiveness as principal engineers of high-stakes and complex systems.

But if you make each feature its own independent, narrow “vertical slice” of the system, you can lower the stakes and make LLMs more effective.

A highly successful document-based app I work on is built this way. The chat feature is backed by an API that doesn’t even connect to the database. It’s the client’s job to provide context, user data, etc.
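
A minimal sketch of what that looks like – with hypothetical names and a stubbed model call; the real app’s routes and shapes will differ:

```ts
// Hypothetical stateless chat endpoint: no database connection.
// The client ships all the context the feature needs in the request body.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface ChatRequest {
  userId: string; // common identifier
  documentId: string; // common identifier
  documentText: string; // client-provided context
  messages: ChatMessage[];
}

export async function POST(req: Request): Promise<Response> {
  const body = (await req.json()) as ChatRequest;

  // No lookups: everything needed to answer arrived with the request.
  const reply = await generateReply(body.documentText, body.messages);

  return Response.json({ documentId: body.documentId, reply });
}

// Stand-in for whatever model call the real feature makes.
async function generateReply(
  context: string,
  messages: ChatMessage[]
): Promise<string> {
  const last = messages.at(-1)?.content ?? "";
  return `Replying to "${last}" with ${context.length} chars of context.`;
}
```

Because the endpoint owns no state, an agent can regenerate it wholesale without touching any other feature.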

Each feature is separable and tightly vertically integrated, connecting to the other features through one of two common identifiers: user IDs or document IDs.

This effectively means that the “theory of the system” is about the common identifiers and the types they identify, rather than a more delicate composition of features.
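
Concretely – and hypothetically, since the real types are richer – that shared vocabulary can be as small as a couple of branded IDs:

```ts
// Hypothetical shared vocabulary: roughly all an agent needs to know
// before writing a brand-new feature slice.
type UserId = string & { readonly __brand: "UserId" };
type DocumentId = string & { readonly __brand: "DocumentId" };

interface User {
  id: UserId;
  displayName: string;
}

interface Doc {
  id: DocumentId;
  ownerId: UserId; // features join on these IDs, nothing deeper
  title: string;
}
```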

Coding agents are free to start over with each new feature. They just need to know about a tiny set of IDs and types.
