Hacker News | alansaber's comments

Because writing is really hard.

I'm curious whether these TUI agent systems have any unique features. They all seem to be shipping the standard agent-swarm / background-agent approach.

"This guy is coding everything in the terminal, he must be really good!"

Not terrible if they proactively deprecate slop features.

PhD students are levy infantry at best with Postdocs being the armoured levies.

Is this Gondor or Mordor?

I find the Mistral "middle" between small LMs and 1T LMs compelling. Models that are big enough to be performant but specialised for domains and tasks - this is what I assumed we'd always head towards.

It's a balancing act, no? Generally you want to optimise to minimise unhappiness, but not to the point of avoiding all conflict and difficulty.

Reminds me of Chade and The Skill from the Robin Hobb books

Kid knows how to advertise

Yes to three-letter agencies.

Indeed for once data volume >>> other concerns

Not sure I agree with this. MD files need to be constantly synced with the state of the code - why not just grep the code files? This is just more unstructured indexing.

Yeah, my teammates seem to enjoy checking in endless walls of MD "documentation" generated by LLMs after they're done adding a feature. Even if that's an extreme case and your documentation is more thoughtful, there is still the problem of:

* redundancy with the code: if code samples can be generated from the code, why bother duplicating them? What do they add? Can they not be LLM-generated later, and kept somewhere out of the way (like a website) so as not to clutter the codebase with redundancy?

* if you do go for this duplication, you are on the hook for keeping it up to date; otherwise it becomes worse than redundant: misleading.

So my preference is: when adding something to the repo, think very hard about whether the information is redundant. Handcrafted docs, notes, and comments that add context - like why something was built that way after a ton of deliberation - yes. Anything that is trivially derived from the code itself - no.


I've been trying to push people to use hitchstory or similar tools to generate docs from specification tests precisely to avoid that redundancy, but most people just look at it blankly and go "why don't you just do that with AI?"
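The docs-from-tests idea can be sketched in a few lines. This is an illustration of the approach, not hitchstory's actual API: assume a convention where each `test_*` function carries a user-facing docstring, and a generator turns the suite into a markdown page.

```python
import inspect
import types

def generate_docs(module) -> str:
    """Render a markdown page from the docstrings of test_* functions.

    Each specification test doubles as a documented example, so the
    docs cannot drift from the behaviour the suite actually enforces.
    """
    sections = []
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("test_") and fn.__doc__:
            title = name.removeprefix("test_").replace("_", " ").title()
            sections.append(f"## {title}\n\n{inspect.cleandoc(fn.__doc__)}")
    return "\n\n".join(sections)

# A toy specification module standing in for a real test file.
spec = types.ModuleType("spec")

def test_login():
    """Sign in with `client.login(user, password)`; it returns a session token."""

spec.test_login = test_login

print(generate_docs(spec))
```

If a test is renamed or deleted, the corresponding doc section disappears on the next generation run, which is exactly the redundancy guarantee hand-written MD files lack.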

The code doesn't always say "why".

Grepping works when you wrote the code. Not so much when someone else installs your package and has no idea which export is public API. We added a one-page markdown saying "use these, ignore the rest" and the wrong-import issues mostly stopped.

