Hacker News | mrkiouak's comments

A site I put together while volunteering for my Vermont town; it made heavy, heavy use of Claude Code (and occasionally Gemini, both the UI and Vertex, for the initial Annual Town Report summaries and highlights).

Really pleased with how much time LLMs saved me. I followed a typical workflow: "Comparative budget info is usually between pages 14-40; please parse the budget info out into this JSON structure", then I eyeball-review the output, tweak where there are issues, and repeat until I have the raw data.
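The eyeball-review step can be partly automated with a small validator over the extracted JSON; this is just a sketch, and the field names (`line_item`, `prior_year`, `current_year`) are hypothetical, not the structure I actually used:

```python
# Sketch of the review step: flag suspect rows in LLM-extracted budget
# data so a human only has to eyeball the problems. Field names are
# illustrative assumptions, not the real schema.

def check_budget_rows(rows):
    """Return a list of human-readable problems to review by hand."""
    problems = []
    for i, row in enumerate(rows):
        for field in ("line_item", "prior_year", "current_year"):
            if field not in row:
                problems.append(f"row {i}: missing {field!r}")
        for field in ("prior_year", "current_year"):
            value = row.get(field)
            if value is not None and not isinstance(value, (int, float)):
                problems.append(f"row {i}: {field!r} is not numeric: {value!r}")
    return problems

rows = [
    {"line_item": "Highway Dept", "prior_year": 412000, "current_year": 435000},
    {"line_item": "Library", "prior_year": "n/a", "current_year": 98000},
]
for problem in check_budget_rows(rows):
    print(problem)  # flags the "n/a" that the LLM failed to parse as a number
```

Anything the checker flags goes back into the "tweak and re-prompt" loop.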

Then the site was super, super quick to get set up and live (deployed via Cloud Run): literally less than an hour. Then a couple of hours over a few days to add content, restructure, etc.

Still more of a rough draft, but this is absolutely not something I'd have been able to do, or remotely have considered doing, without LLMs plus the past 1.5 years of improvements.

N.B. Vermont state statutes have unique rules about how municipalities pass budgets; there's an annual town meeting day. See e.g. https://www.vermontpublic.org/local-news/2025-02-28/vermont-... and https://en.wikipedia.org/wiki/Town_meeting#Vermont


Yeah, not only are people doing this, but this is possibly one of the most common problems in the real world with real people. The blog post may have some helpful suggestions, but these descriptions seem to signal a really large "understanding people" blind spot. The author's circle of friends may all be high-EQ, well-adjusted people, but that just isn't representative of the real world. (Which it's fine to ignore, but don't pretend that's not the case!)


As someone who worked as a software engineer at Google on a service that heavily depended on FFmpeg, it's absurd that Google posts security bugs (which have the obvious potential outcome of driving more free work) versus just paying an engineer to fix the bug.

I promise they are spending more on extra compute for resiliency and redundancy around FFmpeg issues than it would cost for a single SWE to just write a fix and then shepherd it through the FFmpeg approval process.


As someone who was on a project that stalled for a year because our patchset wasn't accepted by a different open source project (not Linux either), I can tell you from experience that it's not as easy as folks here make it out to be. Some maintainers (and Googlers) really want you to study at their mountaintop monastery before your code is worthy, and scrutiny is even higher now due to AI, as we can see from the complaints about this bug report. Now, I've merged enough open source patches on my personal time to know that most projects aren't like that, but based on this interaction, I seriously wonder if Google's patch would've been accepted without incident.

Maybe AmaGoogSoft deserves this, but then what's the threshold? If I'm in charge of Zoom or Discord and one of my engineers finds a bug, should I let them report it and risk a public blow-up? Or does my company's revenue need to be below $1B? $100M? This just poisons the well for everyone.


Bonus comment: I was present for conversations about how Google should just write an internal version because of all the stability issues, but that work would never get prioritized or be considered valuable because it wouldn't get anyone promoted (to be fair, given how widely FFmpeg is used, it would have gotten an L4 or L5 promoted, but it would have been a near-Sisyphean task over years to get to the point where you could demonstrate the ridiculously high XXm-XXXm returns that would come from just helping to improve FFmpeg).


The key thing I'm confident in is that 2-3 years from now there will be a model (or models) and workflow with comparable accuracy, and perhaps noticeably (but tolerably) higher latency, that can be run locally. There's just no reason to believe this isn't achievable.

Hard to understand how this won't make all of the solutions for existing use cases commodities. I'm sure 2-3 years from now there'll be stuff that seems like magic to us now, but it will be more meta: "here's a hypothesis of a strategically valuable outcome, and here's a solution (with market research and user testing done)".

I think current performance and leading models will turn out to have been terrible indicators of the future market leaders (and my money will remain on the incumbents with the largest cash reserves, namely Google, that have invested in fundamental research and scaling).


I have been around a little while, so I reach for what I think of as boring tech for hosting my own stuff.

I'd love to hear about 1) the bulletproof stuff people use for their own stack, 2) the "cool new thing" that you've found makes life a lot easier that a semi-oldie like me may not know about.


These days I'm a little afraid to do anything more complex than buckets/ETL serverside.


I would love to see comments about other fun/entertainment-oriented sites using GenAI that people have seen. I think I've read a lot about what the big foundation-model companies are developing, but I'm really interested to hear what more "indie" folks are doing. https://blog.google/technology/google-deepmind/ancestra-behi... is interesting; curious to see more.


Uses the Gemini 2.5 Flash preview version from today, but still uses Imagen 3 for image generation (I was getting a "publisher endpoint does not exist" error when trying to use the Imagen 4 preview model name).

Requires you to go through a sign-up process, but the data just goes to me; I don't use the email for anything other than confirming it can be read for the sign-up link, and then rate-limiting the account (5 requests per day).
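The per-account limit is just a counter keyed by account and day. A minimal sketch of that idea (in-memory only, so it would reset on redeploy; the real storage and these function names are assumptions, not my actual code):

```python
from collections import defaultdict
from datetime import date

DAILY_LIMIT = 5  # requests per account per day

# (account, day) -> request count; a real service would persist this
_counts = defaultdict(int)

def allow_request(account_email, today=None):
    """Consume one unit of the account's daily quota; False once exhausted."""
    key = (account_email, today or date.today())
    if _counts[key] >= DAILY_LIMIT:
        return False
    _counts[key] += 1
    return True
```

Keying on the day means the quota resets naturally at midnight without any cleanup job.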

The story-generation prompt still needs some improvement, but I've had some success in getting the generator to follow more of a "Hero's Journey" type structure, so it prompts moving the narrative forward; I just need to tighten up how it resolves conflict if the user doesn't.
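That kind of stage-driven prompting can be sketched as a template keyed to where the story is in the journey; the stage list and wording below are an illustrative guess, not my actual prompt:

```python
# Illustrative sketch of a stage-driven story prompt. The stage names
# and instructions are hypothetical, not the real generator prompt.
HERO_STAGES = [
    "ordinary world", "call to adventure", "crossing the threshold",
    "trials", "ordeal", "reward", "the road back", "resolution",
]

def story_prompt(stage_index, user_turn):
    """Build the next-turn prompt, clamped to the final stage."""
    stage = HERO_STAGES[min(stage_index, len(HERO_STAGES) - 1)]
    return (
        f"We are in the '{stage}' stage of a Hero's Journey.\n"
        f"Continue the story from the reader's input: {user_turn}\n"
        "Always move the narrative toward the next stage; if the reader "
        "stalls, resolve the current conflict yourself before advancing."
    )
```

The last instruction is the "tighten up conflict resolution" part: the model gets explicit permission to close out a beat the user is ignoring.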

I think it's interesting seeing what works well/gets generated well versus what's wonky (images can be great, but the model also seems to get confused and start doing fairly unnatural things, even when it's getting fairly tame, normal input).


Talks about a problem I hadn't foreseen when I started working on the experiment. Curious if there are other blogs or research that address the problem of "staying consistent across different mediums where it isn't feasible to keep everything in context (and where separate models are involved)".


I screwed up an edit, which generated a new post ID and updated the URL in my hastily thrown-together blog code (note to self: make the URL path a stable string based on the article content :)). Updated URL: https://musings-mr.net/post/WZFBlctl9mzKSPaNvRiy
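The note-to-self amounts to deriving the path deterministically instead of minting a fresh ID on every save. A minimal sketch: slug plus a short hash of the title, so edits to the body don't move the URL (this helper and its naming are hypothetical, and keying on the title versus the first-saved content is a design choice):

```python
import hashlib
import re

def stable_post_path(title):
    """Derive a URL path that stays fixed across edits to the body."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    digest = hashlib.sha256(title.encode("utf-8")).hexdigest()[:8]
    return f"/post/{slug}-{digest}"
```

Same title in, same path out, so re-saving an edited draft can no longer break old links.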

