
Training takes a few months, but I bet that they'll do some testing first before releasing it to the public.

I also believe that they will delay the release of GPT-5 as long as possible, the reason being that it will be underwhelming (at least compared to the GPT-3.5 hype). They'll possibly time the release to coincide with a new release from Google (their main competitor).

They are the main driver of a bubble that has greatly benefited Microsoft, NVidia, and the other hyperscalers. If they release the model and it shows that we're in the "diminishing returns" phase, this will crash a big part of the industry, not to mention NVidia.

Companies are buying H100s and investing in expensive AI talent because they believe progress will continue quickly. If progress stalls for LLMs, there'll be a huge drop in sales and CAPEX in this industry.

There are still many up-and-coming projects that rely on NVidia hardware for training, like Tesla's Autopilot and others, but the bulk of the investment in H100s in recent years has been driven mostly by LLMs.

Also, all the new AI talent will move on to do something new, and hopefully we will have more discoveries and potential uses, but we're definitely at peak LLMs.

(ps: just my opinion)



Your opinion is based on the assumption that GPT-5 will be underwhelming. Do we have any hint as to why you think so?


GPT-4 was underwhelming. Either they need more linear algebra tricks (or "AI"), or incredibly better data, and neither seems to be the case.

I bet it'll be focused on being a better Siri for Apple. That's good for them as a business, but innovation-wise it's pretty meh.

It'll still suck for factual or precise information, and its reasoning will still be -1



