OP here. I built this repository to demonstrate a specific concept: LLMs are probabilistic tools that can be harnessed within a deterministic architecture to do much more than "vibe code". Think probabilistic core, deterministic shell.
The linked project, Terminal Value, is a sandbox with working e2e examples that demonstrate this. It takes in user context from a mock e-commerce site (CRM notes, client and device data, etc.), then passes it to the Gemini Batch API along with file context. The result is a fully functioning web view that is tailored to the user and dynamically served from the e-commerce site when they visit. The same approach can also generate a full user vertical of personalized multimodal content to complement it.
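To make the "probabilistic core, deterministic shell" idea concrete, here's a minimal sketch of the pattern. All names here (`UserContext`, `build_prompt`, `render_component`, `llm_call`) are hypothetical stand-ins, not the repo's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class UserContext:
    crm_notes: str
    device: str
    purchase_history: list[str]

def build_prompt(ctx: UserContext, template: str) -> str:
    """Deterministic: assemble the rendering prompt from user context."""
    return template.format(
        notes=ctx.crm_notes,
        device=ctx.device,
        history=", ".join(ctx.purchase_history),
    )

def render_component(ctx: UserContext, template: str,
                     llm_call: Callable[[str], str]) -> str:
    """Probabilistic core (llm_call, e.g. a Gemini Batch API job) wrapped
    in a deterministic shell that validates output and falls back."""
    html = llm_call(build_prompt(ctx, template))
    if "<html" not in html.lower():  # stand-in for real output validation
        return "<html><body>Default storefront</body></html>"
    return html
```

The shell owns context assembly, validation, and fallback; the only probabilistic step is the single `llm_call`.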
Click through to see images of results and more. This is meant to provoke thought and start discussion. Your feedback, criticism and contributions are encouraged!
Notes:
- Each web component takes ~10k tokens to render with Gemini 3 Pro (via the Batch API) in one shot.
- The latest home page rendering prompt has a 100% functional success rate, though admittedly on a limited sample size.
- There is a lengthy blog post, titled "Approaching LLMs Like An Engineer", embedded in the repo that describes the methodology behind this, along with bite-sized examples. It's linked at the top of the README.
- I wrote all the code in the repository with one-shot prompts, using a methodology similar to that of the programmatic prompts. This gave me confidence that invoking an LLM programmatically to render dynamic yet functional components would work well. My Gemini chats are linked within the blog post.
- All mock data generation and handling methods are pure. The results of any external side effects, like LLM API calls, are also hard-coded in the repo, to make it easier for you (or an LLM) to summarize what the app does.
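In miniature, that last note looks something like the toy sketch below; the names are mine for illustration, not the repo's:

```python
# Pure: same input always produces the same output, no I/O or hidden state.
def summarize_client(client: dict) -> str:
    return f"{client['name']} browses on {client['device']}"

# The result of an external side effect (an LLM API call), hard-coded so the
# app can be read and replayed without network access. Illustrative only.
CANNED_LLM_RESPONSE = "<section>Welcome back, Ada!</section>"

def serve_home_page(client: dict) -> str:
    context = summarize_client(client)  # deterministic context building
    assert context                      # shell-side sanity check
    return CANNED_LLM_RESPONSE          # replayed, previously generated output
```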
Long-time listener, first-time caller… hopefully I did this right, thanks for reading :)