
On Hugging Face I see 4B and 2B versions now:

https://huggingface.co/collections/google/gemma-3n-preview-6...

Gemma 3n Preview

google/gemma-3n-E4B-it-litert-preview

google/gemma-3n-E2B-it-litert-preview

Interesting, hope it comes to LMStudio as MLX or GGUF. Sparse and/or MoE models make a difference when running on localhost. MoE Qwen3-30B-A3B was the most recent game changer for me. Activating only ~3B weights on the GPU cores of sparse Qwen3-30B-A3B, rather than the full ~30B of comparable dense models (Qwen3-32B, Gemma3-27B, GLM-{4,Z1}-32B, older QwQ-32B), is a huge speedup: MoE A3B achieves 20-60 tps on my oldish M2 in LMStudio, versus only 4-5 tps for the dense models.
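
A rough way to see why the A3B speedup is so large: at batch size 1, decoding is memory-bandwidth-bound, so tokens/sec is capped by bandwidth divided by the bytes of weights read per token. A minimal back-of-envelope sketch, assuming ~100 GB/s effective bandwidth for a base M2 and 4-bit quantized weights (both figures are my assumptions, not measurements):

    # Upper bound on decode speed when inference is memory-bandwidth-bound:
    # every active weight must be read from memory once per generated token.
    BANDWIDTH_GBPS = 100      # assumed effective bandwidth of a base M2, GB/s
    BYTES_PER_WEIGHT = 0.5    # assumed Q4 quantization, ~0.5 bytes per weight

    def max_tps(active_params_billions: float) -> float:
        bytes_per_token = active_params_billions * 1e9 * BYTES_PER_WEIGHT
        return BANDWIDTH_GBPS * 1e9 / bytes_per_token

    print(f"Qwen3-30B-A3B (3B active): ~{max_tps(3):.0f} tok/s ceiling")
    print(f"dense ~32B   (32B active): ~{max_tps(32):.0f} tok/s ceiling")

The resulting ceilings (~67 vs ~6 tok/s) line up with the observed 20-60 vs 4-5 tps once real-world overhead is subtracted.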

Looking forward to trying gemma-3n. Kudos to Google for open-sourcing their Gemmas. Would not have predicted that the lab with "open" in the name has yet to release even a v1 open model (currently at 0, disregarding GPT-2), while other, more commercial labs are already at versions 3, 4, etc.



It's a matter of time before we get a limited-activation model for mobile; the main constraint is the raw model size, more than the memory usage. A 4B-A1B should be considerably faster on mobile, though, for an equivalent-size model (~4 GB).
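
To make that concrete: on-device footprint scales with total parameters, while decode speed scales with active parameters, so a 4B-A1B would cost the same storage/RAM as a dense 4B but move far less data per token. A minimal sketch, assuming ~1 byte per weight (e.g. Q8) so a 4B model lands near the ~4 GB figure; the quantization level is an assumption:

    # Footprint tracks total params; per-token memory traffic tracks active params.
    BYTES_PER_WEIGHT = 1.0  # assumed ~Q8 quantization, ~1 byte per weight

    for name, total_b, active_b in [("dense 4B", 4, 4), ("4B-A1B ", 4, 1)]:
        footprint_gb = total_b * BYTES_PER_WEIGHT
        traffic_gb_per_token = active_b * BYTES_PER_WEIGHT
        print(f"{name}: {footprint_gb:.0f} GB on device, "
              f"{traffic_gb_per_token:.0f} GB read per token")

Same ~4 GB on device, but the A1B variant reads a quarter of the weights per token, hence roughly 4x faster decode on the same memory bus.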



