Sure, but the low-hanging fruit is mostly picked, so what else is driving the idea of _job replacement_ if the next branch up the tree is 3-5 years out? I've seen very little beyond tooling empowering existing employees: a major jump in productivity, but nothing close to job replacement (for technical roles). Often it's still accruing various forms of technical debt, other debts, or complexity. Unless these cuts are the 1% of nontechnical roles, it doesn't make much sense as anything other than their own internal projection for this year in terms of the broader economy. Maybe because they have such a large ship to turn, they need to plan 2-3 years out? I don't get it; I still see people hiring technical writers on a daily basis, even. So what's getting cut there?
What exactly would that evidence look like, for you?
It definitely increases some types of productivity (Opus one-shot a visualization for work that would likely have taken me at least a day to write before), although I would never have written this visualization before LLMs, because the effort wasn't worth it. So I guess it's Jevons Paradox in action, somewhat.
To observe the productivity increases, you need a scale at which the productivity would really matter (the same way that when a benchmark is saturated, like the AIME, it stops telling us anything useful about model improvement).
Productivity is by definition real output (usually inflation-adjusted dollars) per unit of input. That could be per hour worked, or per representative unit of a capital + labor mix.
I would accept an increase in the slope of either of these lines as evidence of a net productivity increase due to artificial intelligence (unless there were some other plausible cause of a productivity growth speed-up, which at present there is not).
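For concreteness, here's a minimal sketch of that slope test (the numbers are made up; a real check would use something like the BLS output-per-hour series): fit a log-linear trend to productivity before and after 2022 and compare the annual growth rates.

    import numpy as np

    # Illustrative output-per-hour index, 2015-2025 (made-up values).
    years = np.arange(2015, 2026)
    output_per_hour = np.array([100.0, 101.2, 102.5, 103.9, 105.6, 108.1,
                                110.4, 112.0, 113.8, 116.7, 119.4])

    def annual_growth(y, p):
        # Slope of log(productivity) vs. year ~= average annual growth rate.
        return np.polyfit(y, np.log(p), 1)[0]

    pre = annual_growth(years[years < 2022], output_per_hour[years < 2022])
    post = annual_growth(years[years >= 2022], output_per_hour[years >= 2022])
    print(f"pre-2022: {pre:.2%}/yr, post-2022: {post:.2%}/yr")

A sustained, otherwise-unexplained jump in the post-2022 slope is the kind of evidence meant here, not anecdotes about individual tasks.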
First, I'd expect the trajectory of any new technology that purports to be the next big revolution in computing to follow a diffusion pattern similar to that of desktop computing and its productivity increases, such as the 1995-2005 period[0]. There has been no indication of such an increase since 2022[1] or 2023[2]. Even the most generous estimate, which Anthropic itself published in 2025, says the following:
>Extrapolating these estimates out suggests current-generation AI models could increase US labor productivity growth by 1.8% annually over the next decade[3]
That not only assumes the best-case scenario, but would also fail to eclipse the peak of computer-adoption productivity gains over a similar period, 1995-2005, at around 2-2.5% annually.
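Compounded over a decade, the gap shows up in a quick back-of-the-envelope (taking 2.25% as the midpoint of the computing-era range; both figures are just the estimates quoted above):

    # Anthropic's best-case estimate vs. the 1995-2005 computing era,
    # compounded over ten years.
    ai_case = 1.018 ** 10          # ~1.195x cumulative
    computing_case = 1.0225 ** 10  # ~1.249x cumulative
    print(f"AI best case: {ai_case:.3f}x, computing era: {computing_case:.3f}x")

So even the most generous AI estimate trails the historical precedent it's being compared to.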
Second is cost. The actual cost of these tools is multiples more expensive than adopting computing en masse was, especially from 1995 on. So whatever increase in productivity they are delivering is not driving overall costs down relative to the gains, in large part because you aren't seeing any substantial YoY productivity growth after adopting these AI tools. Computing had a different trend: not only did it get cheaper over time, its relative cost was outweighed by the YoY increase in productivity.
[1]: The first year mass-market LLM tools started to show up, particularly in the software field (though GitHub Copilot, for instance, launched in 2021)
[2]: The first year GPT-4 showed up and really blew up awareness of LLMs
Well, you would think that if there were increased productivity there would be at least a couple of studies, some clear artifacts, or an increase in the quality of software being shipped.
Except all we have is "trust me bro, I'm 100x more productive" twitter/blog posts, blatant pre-IPO AI company marketing disguised as blog posts, studies that show AI decreases productivity, increased outages, more CVEs, anecdotes without proof, and not a whole lot of shipped software.
If that's the case, I feel like you can't actually be using them or paying attention. I'm a big proponent and use LLMs for code and hardware projects constantly, but Gemini Pro and ChatGPT 5.2 are both in probably the worst state we've seen. Six months ago I was worried, but at this point I have started finding other ways to find answers to things: going back to the stone tablets of googling and looking at Stack Overflow or Reddit.
I still use them, but find that more of the time is spent arguing with them and correcting their problems than actually getting any useful product.
> I still use them, but find that more of the time is spent arguing with them and correcting their problems than actually getting any useful product.
I feel the same. They're better at some things, yes, but also worse at other things. And for me, they're worse at my really important use cases. I could spend a month typing prompts into Codex or AntiGravity and still be left holding the bag. Just yesterday I had a fresh prompt and Gemini bombed super hard on some basic work, insisting the problem was X when it wasn't. I don't know; I was super bullish, but now I'm feeling far from sold on it.
The usual thing is that the market ends up around $0.95 for things like that, if the actors are all solid investors. But it only takes one overly enthusiastic "yes" buyer to break that ceiling, and the smart money won't "correct" it back down to $0.95.
There's another idea, which is to make contracts that pay out in shares of an ETF, but I haven't seen it put into practice.
Yep, totally agreed that it's well defined. I'm only pointing out that the technical execution required will shift, which seems relevant because it's likely to make it take much longer than it would without this effect.
Because before buybacks there were dividends. Did the difference between buybacks and dividends really make the difference between doing basic research and not?
It's likely. Dividends provide higher levels of exponential growth long-term for an otherwise steady-state company, which makes them more compelling than many long-term investments.
Convert X% of a stock's value into a dividend and you pay taxes on it before you can buy more stock, but someone who keeps buying stock sees an exponential return (a higher percentage of the company = larger dividends).
A company buying back X% of its stock functions like a dividend plus a stock purchase, but without the tax on dividends, so you're effectively buying more stock. Adding a tax on stock buybacks could eliminate that bias, but it's unlikely to happen any time soon.
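A rough sketch of that compounding difference, with made-up numbers (4% of market cap returned to shareholders each year, 15% dividend tax, 30-year horizon; the buyback path assumes capital-gains tax is deferred until sale):

    payout, tax, years = 0.04, 0.15, 30

    # Dividend path: the payout is taxed each year, and the after-tax
    # amount is reinvested into more stock.
    dividend_path = (1 + payout * (1 - tax)) ** years

    # Buyback path: the same payout retires shares instead, so your
    # ownership share compounds untaxed until you eventually sell.
    buyback_path = (1 + payout) ** years

    print(f"dividends: {dividend_path:.2f}x, buybacks: {buyback_path:.2f}x")
    # -> roughly 2.73x vs. 3.24x from the same underlying cash flow

Same cash leaves the company either way; the tax timing alone drives the gap.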
Just because trillions are currently spent on employees does not mean that trillions more exist to spend on AI. And if, instead, one's position is that those trillions will be spent on AI instead of employees, then one is envisioning a level of mass unemployment and ensuing violence that will result in heads on pikes.
I do that frequently. I should do it, but forget to not fully proofread after a quick edit. I also regularly leave out the n't when changing where a negation happens (see above).
"In hindsight, the ad slogan 'Sunshine on your privacy' was a little too obvious, even for modern consumers. Let's Dazzle them with the next shiny thing instead."
OP isn't talking about systems at large, but specifically about LLMs and the pervasive idea that they will turn AGI and go rogue. That's pretty clear context given the thread and their comment.