Hacker News | ani17's comments

Author here. A bit more context: by day I'm a systems engineer building AI networking infrastructure, so I kept ending up in conversations where I couldn't quite wrap my head around the latest inference magic trick.

Like when someone mentioned vLLM's PagedAttention: I knew virtual-memory paging, but had no idea someone had applied the same idea to KV-cache allocation on GPUs.
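To make the analogy concrete, here's a minimal toy sketch of paged KV-cache allocation in the spirit of vLLM's PagedAttention. The class, method names, and block size are all illustrative (not vLLM's actual API): the cache is split into fixed-size blocks, and a per-sequence "block table" maps logical token positions to physical blocks, just like a page table in virtual memory.

```python
BLOCK_SIZE = 4  # tokens per block (illustrative; real systems use larger blocks)

class PagedKVCache:
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}  # seq_id -> list of physical block ids

    def append_token(self, seq_id, num_tokens_so_far):
        table = self.block_tables.setdefault(seq_id, [])
        # Allocate a new physical block only when the last one fills up,
        # instead of reserving a max-length contiguous buffer up front.
        if num_tokens_so_far % BLOCK_SIZE == 0:
            table.append(self.free_blocks.pop())
        return table

    def free(self, seq_id):
        # On sequence completion, blocks return to the pool with no
        # external fragmentation, since every block is the same size.
        self.free_blocks.extend(self.block_tables.pop(seq_id))

cache = PagedKVCache(num_blocks=8)
for i in range(6):  # a 6-token sequence needs ceil(6/4) = 2 blocks
    table = cache.append_token("seq0", i)
print(len(table))   # -> 2 physical blocks for 6 logical tokens
cache.free("seq0")
```

The payoff is the same as in OS paging: memory is allocated on demand in small units, so a short sequence never strands the memory a worst-case-length buffer would have reserved.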

GitHub link to the project: https://github.com/Anirudh171202/WhiteLotus


The blog walks through why your first token is always the slowest, why output tokens cost 5x more than input tokens, and how techniques like speculative decoding and chunked prefill actually work, all from a systems engineer's perspective!
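As a taste of one of those techniques, here's a toy sketch of the greedy variant of speculative decoding. The two "models" are made-up arithmetic stand-ins, and `speculative_step` is my own naming; the point is the control flow: a cheap draft model proposes k tokens, the expensive target model checks all k positions (in one batched forward pass in practice), and the longest agreeing prefix plus one corrected token is emitted per target-model step.

```python
def draft_model(ctx):
    # Stand-in for a cheap draft model: usually agrees with the target,
    # but drifts by +2 every fifth token.
    last = ctx[-1]
    return last + 2 if last % 5 == 4 else last + 1

def target_model(ctx):
    # Stand-in for the expensive target model: always +1.
    return ctx[-1] + 1

def speculative_step(ctx, k=4):
    # 1. Draft k tokens autoregressively with the cheap model.
    drafted, tmp = [], list(ctx)
    for _ in range(k):
        t = draft_model(tmp)
        drafted.append(t)
        tmp.append(t)
    # 2. Verify with the target model (a loop here for clarity; one
    #    parallel pass in practice). Keep the matching prefix, then
    #    substitute the target's own token at the first mismatch.
    accepted, tmp = [], list(ctx)
    for t in drafted:
        expect = target_model(tmp)
        if t == expect:
            accepted.append(t)
            tmp.append(t)
        else:
            accepted.append(expect)  # target's token replaces the miss
            break
    return ctx + accepted

print(speculative_step([0]))            # -> [0, 1, 2, 3, 4]: 4 tokens, 1 step
print(speculative_step([0, 1, 2, 3, 4]))  # -> [0, 1, 2, 3, 4, 5]: draft missed
```

The output is guaranteed to match what the target model would have produced alone; the draft model only changes how many target-model steps that output costs.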

That's definitely an alternative solution. For the purposes of this script, though, I'd prefer not to go that route.


It's insane if the data is accurate. Only time will tell.


You forgot "Middle Out" by Pied Piper!


Thanks for sharing!

