Ask HN: Does better hardware mean OpenAI, Anthropic, etc. are doomed?
5 points by kart23 4 days ago | 10 comments
This is something I don't understand: how will all these AI-as-a-service companies survive in the future when hardware gets better and people are able to run LLMs locally? Of course, right now the rent vs. buy equation is heavily tilted towards rent, but eventually I could see people buying a desktop they keep at home and running all their personal inference on that one machine. Or even forming inference pools to distribute load among many people.

Do you think this is possible, and what are these companies' plans in that event?




Well, most people are no more going to buy computers to run AI at home than they run their own Plex media servers. Besides that, most people's primary computer is a phone.

On the business side, to a first approximation, no one is running their own computers in their office. They are using either a colocation facility or a cloud service.

Beyond that, AI accessed through an API key is not a product that most people buy. They buy products that use AI, like ChatGPT. And OpenAI has no illusions about becoming profitable from $20/month subscriptions, much less from advertising.

The money that AI companies make from selling API access directly (except maybe Anthropic via Claude Code) pales in comparison to what they make selling through cloud providers who then sell to businesses.

> I could see people buying a desktop they keep at home, and having all their personal inference running on that one machine. Or even having inference pools to distribute load among many people

Yes, I'm sure my 80-year-old mother who uses ChatGPT is going to get together with her sisters and buy computers that they can network together over their 30 Mbps uplink cable modem…

This is so much not how normal people operate.


PCs are 1000x faster than they were 20 years ago, yet cloud services make up a relatively larger share of Microsoft's revenue each year.

Your premise makes sense if the benefits of an AI model topped out at something a person's home computer could run. However, scaling laws seem to have no limit yet (perhaps because the general nature of intelligence itself has no "limit"), so the labs will retain a significant advantage through scale: the frontier models they host will keep a distinct comparative advantage over even the best local models.


They won't survive. AI-as-a-service for frontier models will be relegated to military and research, if that. We're already at diminishing returns on model improvements; the latest gains are in the surrounding architecture, harnesses, agent systems, etc. Consumer hardware will be running the equivalent of ChatGPT 5.2, and IMO most interaction with personal computing devices will be done via natural-language LLM personal assistants.

Maybe it takes a bit longer than 5 years, but that's where we're going. Even today, the main reason you're not interacting with everything via a personal assistant isn't really LLM capability but the lack of tooling.


Agree 100% with what you're saying about capabilities, but I want to push back on the conclusion that these trillion-dollar companies will just let themselves go away.

They are already moving into enterprise, government, and product software. I think they'll find a way to make money even if access to their models is no longer a huge moat.


The frontier of how good models are also keeps shifting and will remain ahead of local models unless we hit some dead-end limitation in the algorithms themselves, a ceiling, so to speak, on how good LLMs can get before diminishing returns kick in.

I don't understand. All models are local models; they're just not running on your machine.

1. Is it cheaper for me to buy hardware and electricity than to call an API? (doesn't seem like it right now)

2. The best models are still worth paying for; it's unclear when this changes.

3. The average person doesn't have the skill to do this. They're afraid to run even simpler things.
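Point 1 is an arithmetic question, so it can be sketched as a break-even calculation. All the numbers below (hardware price, wattage, electricity rate, usage, API price) are hypothetical assumptions for illustration, not real quotes from any vendor.

```python
# Rough break-even sketch: buy hardware vs. call an API.
# Every number here is a hypothetical assumption, not a real price.

def breakeven_months(hw_cost, power_watts, kwh_price,
                     tokens_per_month, api_price_per_mtok):
    """Months until owned hardware beats API calls, or None if it never does."""
    api_monthly = tokens_per_month / 1e6 * api_price_per_mtok
    # Assume the box runs 24/7; electricity is the main recurring cost.
    power_monthly = power_watts / 1000 * 24 * 30 * kwh_price
    saved = api_monthly - power_monthly
    if saved <= 0:
        return None  # electricity alone costs more than the API calls
    return hw_cost / saved

# Assumed: $3000 box, 400 W draw, $0.15/kWh, 50M tokens/month, $3 per 1M tokens.
months = breakeven_months(3000, 400, 0.15, 50e6, 3.0)  # ≈ 28 months
```

Under these made-up numbers the box pays for itself in a bit over two years, but a light user (say 1M tokens/month) never breaks even because electricity alone exceeds the API bill, which is roughly the commenter's point.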


Definitely not right now. But I believe at some point model progress will plateau while hardware continues to get better, and then it might be cheaper, especially if you have solar.

3. This is like saying the average person doesn't have the skill to run GTA over Wine on their Linux box. Gaming consoles exist.


Young people have had even the concept of a filesystem conditioned out of them; files just live in a "folder" inside an app.

Local sovereignty isn't a pressing need for most users.


I do believe this is going to get commoditized like the internet was. Hardware keeps getting better and cheaper as time goes by, and the software in this case is already free/open-weights.

The moats these companies might end up having in the near future:

1. Government and enterprise contracts;

2. Even better private models, not released to the public and only accessible through long-term/exclusive contracts;

3. Gatekeeping access to millions of their users, especially the non-technical ones, and charging a premium for it;

4. Becoming more and more like full-stack OSes to build on top of, by providing ready-made foundational layers like knowledge, memory, search/research, sandboxes, deployments, etc.;

5. Data/network effects from large-scale usage and feedback loops.

...




