Totally this. But the corporate labs have incentives to keep researching, given their investors and staffing load, so they have to show work.
I guess a nice advantage of backwardness here is that economic opportunities exist for those who can solve pain points in the use of existing intel. Older models often do almost as well at agentic tasks in practice, and could probably be pushed further.
Still, AGI should make a lot of this redundant, and then it will be more about the intel than the tooling. But the opportunity exists now. We may not have widespread AGI for another 8-10 years, so there's plenty of money to be made in the meantime.
Ya definitely, that makes total sense. It feels to me that the labs currently have great researchers who only care about making models perform better on raw intel, and then incompetent applied AI engineers / FDEs who can only suggest better prompting to remove bad habits and make agents more usable.