For sure. This is a straight copy-paste of my prompt, which references my architecture, codebase, and references folder (God, that's so golden -- .gitignored, but saving the tokens of googling or cloning by just keeping the codebases we depend on locally is killer), so it's not ready to be copy-pasted as-is. However, with the context that this is an interface for a peer-to-peer encrypted radio mesh client (for MeshCore; my code is https://github.com/jkingsman/Remote-Terminal-for-MeshCore), that can maybe give you a mote of context around things that are obviously key (e.g. sending/receiving) or important but not a topline acceptance criterion (message ordering, async radio operations, etc.) if you want to port this and try it out on your own codebase.
Also, I say "prior to public release" and obviously this codebase is already very publicly released, but that doesn't matter -- what I'm doing in the prompt is priming my agents with a "this matters" tone. I have no opinions I'd state publicly on the consciousness argument, but I generally dislike deception; in this case, I find that declaring this to be our last-ditch review before public release puts the agent in a context state that's more detail-oriented than other primers I tried ("this is our final release" led to too many interoperability/future-proofing finds; "this codebase has subtle bugs introduced by an adversarial engineer" had too many security false positives or true-but-absurd positives; "please be detailed and carefully dig deep" just wasn't as good). Plus, the "public release" paradigm helped it do an innate classification of "gotta fix before release" vs. "fix soon after" vs. "fix when able," which maps pretty well to my personal taste in the severity of the bugs it's found that I've evaluated, so I've kept it.
Thank you for sharing this! I love how writing prompts like this forces us to clarify our own values and sense of engineering taste. I’m really curious to see what an agent would find in our code base with this.
You’re missing the group of high performers who love coding, who just want to bring more stuff into the world than their limited human brains have the energy or time to build.
I love coding. I taught myself from a book (no internet yet) when I was 10, and haven’t stopped for 30 years. Turned down becoming a manager several times. I loved it so much that I went through an existential crisis in February as I had to let go of that part of my identity. I seriously thought about quitting.
But for years, it has been so frustrating that the time it took me to imagine roughly how to build something (10-30 minutes depending on complexity) was always dwarfed by the amount of time it took to grind it out (days or sometimes weeks). That’s no longer true, and that’s incredibly freeing.
So the game now is to learn to use this stuff in a way that I enjoy, while going faster and maintaining quality where it matters. There are some gray beards out there who I trust who say it’s possible, so I’m gonna try.
Good point, and I’m at exactly the same point as you with this. At the moment I'm working on letting go of the idea (and, to be honest, just the habit) that it’s somehow ‘cheating’.
Not a troll. I’ve been doing a lot of self reflection on this topic lately. Some people seem to enjoy software for the act & craft, where the outcome / artifact is secondary or irrelevant. I don’t. Some people enjoy the artifacts it produces, for their utility or economic value. Not really me either. Often people frame it as this dichotomy, but I’ve realized my enjoyment and self-fulfillment comes from creating an artifact that is genuinely good and that I can be proud of creating. Too much AI robs me of this. I’ve created cool stuff with AI that leaves me feeling nothing because I didn’t really create it.
This is all valid. Your original comment came across as a troll because it implied that nobody could ever feel good about stuff they built with AI. Asserting that you know more about the emotional state of strangers on the internet than they know themselves is arrogant.
Well, it’s a genuine question. Like, if I have a machine in my house where I give it a recipe and it spits out the food, should I feel good about having “cooked” that food? Or what if someone prompts an AI for some art, should they feel proud of “creating” that art? I think not. And it’s the same with code. Depending on how much of the work you actually did should influence how you talk and feel about a creation. So many people lazily prompt an AI and then come here to post about something they “made” and I think that’s wrong.
I’m thinking there’s probably degrees to it. Like there is some stuff I absolutely want to hand craft, but then other stuff I don’t mind so much.
One of the interesting discussions at work (I’m in gamedev) has been about tooling and where AI fits in there.
Previously you’d spend sometimes significant time writing a tool, then polishing it up and giving it to the team (think things like editor extensions that make your workflow easier).
But AI can make this kind of bespoke tool dev so cheap now that it’s possible for every single dev to have their own tool that matches the way they work exactly. At that point, do you really need to spend the long 80% effort of polishing and getting it ready for mass consumption?
Stuff like that is interesting. I still can’t imagine never looking at the AI-generated code, but I’ve seen people take the approach of “I’m not interested in the code, only in what the thing does. If it’s wrong, I ask the agent to fix it”.
Yes, I'm exactly the same. I've been coding for 30+ years, and I still love coding and system building, but sometimes the level of frustration involved in finding the information and then getting something working is simply too high.
Over a weekend, I used ChatGPT to set up Prometheus and Grafana and added node exporters to everything I could think of. I even told ChatGPT to create NOC-style dashboards for me, given the metrics I gave it. This is something that would have painstakingly taken several weeks if not more to figure out, and it's something I've been wanting to do, but the cognitive load and anticipatory frustration were too high for me to start. I love how it enables me to just do things.
My next step is to integrate some programs that I wrote that I still use every day to collect data and then show it on the dashboards as well.
On a side note, I don't know why Grafana hasn't more deeply integrated with AI. Having to sift through all the ridiculous metrics that different node exporters advertise, with no hint of a naming convention, makes using Grafana so much harder. I cut and pasted all the metrics and dumped them into ChatGPT and told it to make the panels I wanted (e.g. "Give me a dashboard that shows the status of all my servers," and it's able to pick and choose the correct metrics across my Windows server, Macbooks and studio, my Linux machines, etc.), but Grafana should have this integrated directly into the product itself.
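For anyone who wants to try the same setup, a minimal Prometheus scrape config covering mixed exporters might look something like this (a sketch only; the hostnames are placeholders, 9100 is node_exporter's default port and 9182 is windows_exporter's):

```yaml
# prometheus.yml -- minimal sketch; targets are illustrative placeholders
global:
  scrape_interval: 15s

scrape_configs:
  # Linux / macOS machines running node_exporter (default port 9100)
  - job_name: node
    static_configs:
      - targets:
          - linuxbox:9100
          - macbook:9100

  # Windows server running windows_exporter (default port 9182)
  - job_name: windows
    static_configs:
      - targets:
          - winserver:9182
```

A "status of all my servers" panel can then be built on Prometheus's built-in `up` metric (1 if the last scrape of a target succeeded, 0 otherwise), which is uniform across targets even when the exporters' own metric names aren't.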
100% agree. Velocity at level 8 or even 7 is a whole order of magnitude faster than even level 5. Like you said, identifying the core and letting everything else move fast is most of the game. The other part is finding ways to up the level at which you’re building the core, which is a harder problem.
Disagree, I don't particularly want to up the level at which I'm building the core. Core is where I want to prioritize quality over speed, and (at least with today's models) what I build by hand is much, much higher quality.
I’ve had a couple wins with AI in the design phase, where it helped me reach a conclusion that would’ve taken days of exploration, if I ever got there. Both were very long conversations explicitly about design with lots of back and forth, like whiteboarding. Both involved SQL in ClickHouse, which I’m ok but not amazing at — for example I often write queries with window functions, but my mental model of GROUP BY is still incomplete.
In one of the cases, I was searching for a way to extract a bunch of code that 5-6 queries had in common. Whatever this thing was, its parameters would have to include an array/tuple of IDs, and a parameter that would alter the table being selected from, neither of which is allowed in a clickhouse parameterized view. I could write a normal view for this, but performance would’ve been atrocious given ClickHouse’s ok-but-not-great query optimizer.
I asked AI for alternatives, and to discuss the pros and cons of each. I brought up specific scenarios and asked it how it thought the code would work. I asked it to bring what it knew about SQL’s relational algebra to find an elegant solution.
It finally suggested a template (we’re using Go) to include another sql file, where the parameter is a _named relation_. It can be a CTE or a table, but it doesn’t matter as long as it has the right columns. Aside from poor tooling that doesn’t find things like typos, it’s been a huge win, much better than the duplication. And we have lots of tests that run against the real database to catch those typos.
Maybe this kind of thing exists out there already (if it does, tell me!) but I probably wouldn’t have found it.
I was just trying to find a blog post that I read years ago where someone wrote about storing their furniture at IKEA. Couldn't find the post, but the idea helped me downsize during a recent move.