Hacker News | nicklo's comments

The animation of the model-name text when opening the detail view is so smooth and delightful.


Congrats on the launch! Since the agent CLIs and SDKs were built for local use, there's a ton of infra work needed to run these agents in production. Genuinely excited for this space to mature.

I have been building an OSS self-hostable agent infra suite at https://ash-cloud.ai

Happy to trade notes sometime!


Yeah, with sandbox pre-warming and disk co-location it's fast enough to avoid the cold-start UX penalty.
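The pre-warming pattern can be sketched roughly as a background-filled pool of booted sandboxes; the `Sandbox` class and function names here are hypothetical stand-ins, not the project's actual API:

```python
import queue
import threading

class Sandbox:
    """Hypothetical stand-in for a microVM/container sandbox."""
    def __init__(self):
        self.booted = True  # boot cost is paid here, off the request path

POOL_SIZE = 4
warm_pool: "queue.Queue[Sandbox]" = queue.Queue(maxsize=POOL_SIZE)

def refill():
    # Background thread keeps the pool topped up so acquiring a
    # sandbox is a dequeue, not a cold boot. put() blocks when full.
    while True:
        warm_pool.put(Sandbox())

threading.Thread(target=refill, daemon=True).start()

def acquire_sandbox() -> Sandbox:
    # Request path: pull a pre-booted sandbox; cold-start latency
    # was already absorbed by the refill thread.
    return warm_pool.get()
```

The request path never pays boot time as long as the refill thread keeps up with demand.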

On write amplification: we persist at the message level, not per SSE chunk. The sandbox's workspace filesystem (Claude Code's native JSONL files) is the source of truth for resume, and the DB is for queryability, tracing, etc., so fire-and-forget works fine here.
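A minimal sketch of that split, assuming a per-session JSONL file and some DB writer; the function names and the `persist_to_db` hook are hypothetical, not the project's real code:

```python
import json
import threading

def append_message(jsonl_path: str, message: dict) -> None:
    """Source of truth: append the complete message to the session's
    JSONL on the sandbox filesystem. Resume reads from here."""
    with open(jsonl_path, "a") as f:
        f.write(json.dumps(message) + "\n")

def persist_to_db(message: dict) -> None:
    """Hypothetical DB write used only for querying/tracing."""
    ...  # e.g. INSERT into a messages table

def on_message(jsonl_path: str, message: dict) -> None:
    # Persist per complete message, not per SSE chunk.
    append_message(jsonl_path, message)
    # Fire-and-forget: the DB copy is off the critical path,
    # so we don't block on (or retry) this write.
    threading.Thread(target=persist_to_db, args=(message,), daemon=True).start()
```

Because resume never reads from the DB, a lost DB write degrades only tracing, not correctness.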


I’m building a self-hostable, open source agent sandbox orchestrator here: https://github.com/ash-ai-org/ash-ai


Directionally correct, but it's important to note that the water used to sustain the insufferable human is much higher than the water used to produce the tokens.


I've always wondered (for this, Portkey, etc.): why not have a parallel option that fires an extra request instead of MITM-ing the LLM call?


You can fire them in parallel for simple cases. The issue is when you have multi-agent setups. If context isn't persisted before a sub-agent reads it, you get stale state. Single source of truth matters when agents are reading and writing to the same context.

For single-agent flows, parallel works fine.
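The stale-state hazard can be shown with a toy shared context store; in production this would be the persisted source of truth that sub-agents read from (all names here are hypothetical):

```python
# Toy shared context store standing in for the persisted context.
context: dict = {}

def persist(key: str, value: str) -> None:
    context[key] = value

def spawn_subagent(key: str) -> str:
    # The sub-agent only sees what has already been persisted.
    return context.get(key, "<stale: parent output never landed>")

# Serial (MITM-style) ordering: the persist completes before the read.
persist("plan", "step 1: gather requirements")
result = spawn_subagent("plan")  # sees the parent's plan

# With a parallel fire-and-forget write, persist() may still be in
# flight when spawn_subagent() runs, and the sub-agent would hit the
# stale-state branch above instead.
```

The serial ordering is exactly what sitting in the request path buys you: a write barrier between the parent's output and the sub-agent's read.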


I think it’s the reverse: people were too lazy to read the docs, so nobody was motivated to write them.

With an agent, I know that if I write once to CLAUDE.md, it will be read by thousands of agents in a week.


I like this insight. We kind of always knew that we wanted good docs, but they're demotivating to maintain if people aren't reading them. LLMs by their nature won't be onboarded to the codebase with meetings and conversations, so if we want them to have a proper onboarding then we're forced to be less lazy with our docs, and we get the validation of knowing they're being used.


Have you considered making an MCP for this? Would be great for use in vibe-coding


The bitter lesson strikes again… now for graphics rendering. NeRFs had a ray-tracing prior, and Gaussian splats had a rasterization prior. This just… throws it all away. No priors, no domain knowledge, just data and attention. This is the way.


OP: please don't poison your MIT license with Surya's GPL license.


It should be possible to call the GPL library in a separate process (Surya can batch-process from the CLI) and avoid the GPL obligations; OCRmyPDF does this with Ghostscript.
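The pattern is simply to invoke the GPL tool as a child process and exchange data only through CLI arguments and files; a minimal sketch, where the Surya command line shown in the comment is an assumption, not its documented CLI:

```python
import subprocess

def run_gpl_tool(argv: list) -> int:
    """Invoke a GPL-licensed tool as a separate process. Only CLI
    arguments and files cross the boundary, so the calling code never
    links against the GPL library -- the same boundary OCRmyPDF keeps
    with Ghostscript."""
    return subprocess.run(argv, check=True).returncode

# Hypothetical usage -- the exact Surya command name/flags here are
# assumptions for illustration:
# run_gpl_tool(["surya_ocr", "input.pdf"])
```

Whether process separation is enough to avoid GPL copyleft is a legal judgment, not a technical one, but it is the approach OCRmyPDF takes.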

