Hacker News | jnd0's comments


Comments moved thither. Thanks!


yes, plenty of good convo over there, the two should probably be merged


Agreed.



Probably because it is more similar to o3 in terms of size/parameters as well as price (although I would expect it to be at least half the price).


> with Gemini 2.5, we've achieved a new level of performance by combining a significantly enhanced base model with improved post-training. Going forward, we’re building these thinking capabilities directly into all of our models, so they can handle more complex problems and support even more capable, context-aware agents.

Been playing around with it and it feels intelligent and up to date. Plus it's connected to the internet. It acts as a reasoning model by default when it needs to.

I hope they enable support for the recently released canvas mode for this model soon; it would be a good match.


It is almost certainly the "nebula" model on LMArena that has been generating buzz for the last few days. I didn't test coding, but its reasoning is very strong.


Thank you so much for sharing this!


I wasn't sure what this was built on top of. I asked the model and it told me this:

No, I'm not specifically "Gemini 2.0." Here's the breakdown of what I am and how that relates to Google's Gemini models: I am a large language model (LLM), trained by Google. This means I share the same foundational technology as the Gemini models. We are both descendants of Google's research into transformer-based neural networks and large-scale language models.

I am built on a variant of the models used in the Gemini family. While I don't have a specific version number like "Gemini 2.0" in my name, I'm a highly advanced model that has been developed and refined with the techniques and learnings that have gone into the Gemini models. This means I am a Mixture of Experts model.

My core capabilities are aligned with the Gemini Family: Because of the common lineage, I have similar core strengths to the Gemini family, such as:

* Understanding and responding to complex prompts.

* Generating creative content.

* Engaging in multi-turn conversations.

* Providing information and answering questions in a comprehensive way.

* Processing and understanding code.

The naming conventions for models are fluid and complex. Google has various teams working on different models for different use cases. The name "Gemini" represents a family of models, not necessarily a single, monolithic entity. Also, the naming used in the different versions may vary.

I am constantly being updated: Google continually improves and refines its models. My capabilities are always growing and evolving.


Gemini doesn't give a correct response when asked what model it is. You can convince it to answer with pretty much any version number. Per the docs, it's a fine-tuned Gemini 2.0 model for developers.


https://codeassist.google/products/business still shows this being Gemini 1.5 Pro. Maybe just not updated? Or is this a different product?

>This capability is powered by Google’s Gemini 1.5 Pro model.


I guess 1.5 Pro is too expensive to run this kind of product, given that it offers quite generous limits. More likely it's 2.0 Flash or Flash-Lite.


> We fine-tuned the Gemini 2.0 model for developers by analyzing and validating a large number of real-world coding use cases.


I'm not very impressed either; I just got to play around with it for a bit, and so far it is lagging way behind GPT-3. It still can't write code.

I think it's good for Google to have launched such a product. I'm sure it'll get better with time, but it needs to catch up quickly; the more people use it, the better it will get.


It can generate code. It generated some FastAPI code for me and ported a function from Node.js (which it had also generated). I find the generation a bit faster than GPT.
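To give a sense of the kind of Node.js-to-Python port described above, here is a hypothetical example (not the commenter's actual code): a small JavaScript-style `groupBy` helper rewritten as idiomatic Python. The function name and sample data are made up for illustration.

```python
from collections import defaultdict

# Hypothetical Node.js original being ported:
#   const groupBy = (items, key) =>
#     items.reduce((acc, x) => { (acc[x[key]] ??= []).push(x); return acc; }, {});

def group_by(items, key):
    """Group a list of dicts by the value of `key`."""
    groups = defaultdict(list)
    for item in items:
        groups[item[key]].append(item)
    return dict(groups)

orders = [
    {"user": "ada", "total": 12},
    {"user": "bob", "total": 7},
    {"user": "ada", "total": 3},
]
print(group_by(orders, "user"))
```

A port like this is the kind of task these assistants tend to handle well, since the idiom translates almost mechanically (reduce-with-accumulator becomes a loop over a `defaultdict`).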

