
A layer is a transformer block (basically the building block of modern LLM architectures) - maybe Gemini can help you:

https://gemini.google.com/share/cc58a7c6089e
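To make "transformer block" concrete, here is a minimal sketch of one decoder layer (pre-norm self-attention plus an MLP, each with a residual connection), written against PyTorch. The dimensions and module names are illustrative defaults, not any particular model's actual configuration:

    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        """One decoder layer - the unit that gets stacked N times in an LLM."""
        def __init__(self, d_model=512, n_heads=8, d_ff=2048):
            super().__init__()
            self.norm1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

        def forward(self, x, attn_mask=None):
            # Pre-norm self-attention with a residual connection.
            h = self.norm1(x)
            attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
            x = x + attn_out
            # Position-wise feed-forward network, also residual.
            return x + self.mlp(self.norm2(x))

    # Usage: a batch of 16 token embeddings of width 512 goes in, same shape comes out.
    y = TransformerBlock()(torch.randn(1, 16, 512))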



I am perfectly aware of that. I don't believe other LLMs have such embeddings per layer, only the usual weights, so these per-layer embeddings seem to be distinguished from weights in some way. Afaik trying to play the same "cache in fast storage and load on demand" trick wouldn't work with ordinary layer weights, since you'd end up with too much back-and-forth (you'd touch every cached byte on every token, assuming no MoE), so I'm guessing these embeddings are structured in a way that breaks them up by concept.
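To illustrate the access-pattern argument (a sketch, not Gemma 3n's actual mechanism): an embedding table is only read at the rows for the tokens actually in the sequence, so streaming it from flash is plausible, whereas a dense layer weight is needed in full on every forward pass. Sizes and the file name below are made up for the example:

    import numpy as np

    vocab, d_layer = 32_000, 256            # illustrative sizes, not the real ones
    table_path = "per_layer_emb_L07.npy"    # hypothetical on-disk (flash) cache for one layer

    # One-time setup: persist the table so it can be memory-mapped later.
    np.save(table_path, np.random.randn(vocab, d_layer).astype(np.float16))

    # At inference time, map the file instead of loading it all into RAM ...
    table = np.load(table_path, mmap_mode="r")

    # ... and only the rows for the current tokens are ever touched:
    # a few hundred bytes per token, regardless of the table's total size.
    token_ids = np.array([101, 2009, 318, 257, 1332])
    per_layer_vecs = np.asarray(table[token_ids])   # shape (5, 256)

    # Contrast: a dense weight matrix (say a d_model x d_ff projection) is multiplied
    # against every token's activations, so every byte of it is needed on every token -
    # paging it in and out of flash would just thrash.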


lmao



