But closed models are clearly slowing. It seems reasonable to expect that as open-weight models reach the sizes of closed-weight models, they'll see the same slowdown.
Well, it's in academia, in traditional universities, anyway. I think corporate research is still thriving. Speaking from an academic point of view, I knew four PhD students who started in 2018/2019; all four got depressed and left the field.
Their research was obsolete before they were halfway through.
Usually some PhD students get depressed, but these four had awful timing. Their professors were locked into 3-10 year grants doing things like BERT finetuning, convolutions, or basic AI work - stuff that was clearly obsolete as soon as GPT-3 came out, but nobody could admit that and lose the grants. In other cases, their work had value but drew less attention than it deserved, because all the attention went to GPT-3 or people assumed their work was just some wrapper technology.
The nature of academia and its incentive system caused this; academia is a cruise ship that is hard to turn. If the lighthouse beam of attention moves off your ship onto another, fancier ship, your only options are the lifeboats (industry) or hoping the light and your ship intersect again.
The professors have largely decided either to steer right into generative AI, using the larger models (which they could never feasibly train themselves) for research, or to go even deeper into basic AI.
The problem? The research grants are all about LLMs, not basic AI.
So basically a slew of researchers willing and able to take on basic AI research are leaving the field now. As many are entering as usual, of course, but largely on the LLM bandwagon.
That may be fine. But the history of AI winters suggests that putting all the chips on one game like this is folly.
I recall journals from the 90s and 2000s (my time at university came after they were published, but I read them): the distribution of AI research was broad. Some GOFAI, some neural nets, many papers about filters or visual scene detection, etc. Today it's largely LLM or LM papers. There is no real "counterweight underdog" like the role neural networks served in the 90s/00s.
At the same time, for people working in the fields you mention, double-check the proportion of research money going to companies versus institutions. While it is true that things like TortoiseTTS[1] were individual efforts, that kind of thing is now a massive exception. Instead, companies like OpenAI and Google literally have 1000+ researchers each developing the cutting edge across about five fields. Universities barely stand a chance.
This is how the DARPA AI winter went, to my understanding (I heard it from one of the few people who "survived via hibernation", during my undergraduate years): overpromising - central focus on one technology - company development of projects - government involvement - disappointment - cancellation.
Why care about research grants? It's all about publishing at NeurIPS (or its competitors) or ACL (or its competitors). Let industry pay you 3x what you'd have to fight for in grants, and reap the rewards of lots of citations.
Those same industry companies are GPU-rich too, unlike most of academia (though Christopher Manning claims that Princeton has lots of GPUs even though Stanford doesn't!).