> No, the crypto mining industry collapsed because the largest GPU-mined cryptocurrency (Ethereum) removed the ability for people to mine it with GPUs. Crypto obviously does not "need parallel processing" in the way that Nvidia GPUs provide it, because Ethereum removed that requirement entirely.
Absolutely correct.
At least cryptocurrencies like Ethereum found an alternative consensus method that is efficient and environmentally friendly. I have seen nothing comparable in AI and deep neural networks for decades: models keep getting larger, demand ever more planet-incinerating power, and already cost millions of dollars to train, soon billions. As long as they stay inefficient, Nvidia will keep smiling.
The end result is a pseudo-intelligent black box that confidently hallucinates and can be thrown off by a single bad input, rendering it useless. Once it goes wrong, it gets re-trained or fine-tuned again, wasting more energy. Inference that requires racks of GPUs is just as wasteful and inefficient.
The moment we switch to a better, more efficient alternative for training deep neural networks at scale is the moment I would call a 'breakthrough' in AI, rather than throwing more data and more GPUs at the problem.
> I have seen nothing comparable in AI and deep neural networks for decades: models keep getting larger, demand ever more planet-incinerating power, and already cost millions of dollars to train, soon billions. As long as they stay inefficient, Nvidia will keep smiling.
What are you talking about? The inaccessibility of the models and the heavy gatekeeping by the big players are driving more and more open-source models that can run on an average gaming GPU or a Mac M1. It seems like every day on HN there is another post about a super-optimized version of something that was once accessible only to those with huge server farms or cloud budgets. Announcements like Stanford's Alpaca specifically mention that it can run on low-budget commodity hardware or for a few hundred dollars in the cloud.
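To make that concrete, a minimal sketch of what those posts typically boil down to: loading a model with quantized weights so it fits in a consumer GPU's memory. The model name and settings here are illustrative, not a recommendation (assumes the Hugging Face transformers and bitsandbytes libraries):

```python
# Hedged sketch: loading an open LLM with 4-bit weights so it fits on a
# consumer GPU. Model name is illustrative; assumes transformers + bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

name = "facebook/opt-6.7b"  # stand-in for any open checkpoint

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # do the matmuls in fp16
)

tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, quantization_config=bnb, device_map="auto"
)

prompt = tok("Open source LLMs are", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))
```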
I'd say efficiency and optimization are among the hottest areas in large language models right now, although the motivation is less about climate and more about 'I want to do this on my own hardware with no safety censors'.
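The Alpaca-style fine-tunes are a good example of that optimization push: instead of re-training billions of weights, parameter-efficient methods like LoRA train a few million adapter parameters on top of a frozen base model. A hedged sketch using the peft library (base model and hyperparameters are illustrative):

```python
# Hedged sketch of parameter-efficient fine-tuning with LoRA via the peft
# library. Base model and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

lora = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling applied to the updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters
# From here you run a normal training loop; only the tiny adapters get gradients.
```

That is a big part of why a fine-tune that once needed a server farm can now be done on a single card.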
I'm convinced that with the efforts you speak of we're seeing history repeat itself (as it often does).
In the late '90s and early 2000s, when the internet/web was really picking up steam, powerful and entrenched players like Microsoft were throwing their weight around and trying to dominate the space the way they had dominated personal computers. Ditto Sun, etc.
Then the open source community got together and ate their lunch. Who remembers Netcraft statistics for Windows + IIS vs Linux + Apache?
OpenAI, Meta, Google, and to some extent Nvidia are doing their best to go the closed/commercial/proprietary/gated/costly route (again), and open source is already hitting back hard against all of them and, as has been shown before, starting to catch up with and outpace them at incredible speed.