
Why isn't the AI story believable? It seems to me that AI is getting more and more productive


Sure, but the lower-hanging fruit is mostly picked, so what else is driving the idea of _job replacement_ if the next branch up the tree is 3-5 years out? Beyond tooling empowering existing employees, I've seen very little to indicate a major jump in productivity, and nothing close to job replacement (for technical roles). Oftentimes it's still accruing various forms of technical debt, other debts, or complexity. Unless these are the 1% of nontechnical roles, it doesn't make much sense beyond their own internal projection for this year in terms of the broader economy. Maybe because they have such a large ship to turn that they need to actually plan 2-3 years out? I don't get it; I still see people hiring technical writers on a daily basis, even. So what's getting cut there?


Is there any quantitative evidence for AI increasing productivity? Other than AI influencer blog posts and pre-IPO marketing from AI companies?


What exactly would that evidence look like, for you?

It definitely increases some types of productivity (for work, Opus one-shotted a visualization that would likely have taken me at least a day to write before), although I would never have written this visualization before LLMs (because the effort wasn't worth it). So I guess it's Jevons Paradox in action, somewhat.

To observe the productivity increases you need a scale at which the productivity would really matter (the same way that once a benchmark like AIME is saturated, it stops telling us anything useful about model improvement).


"What exactly would that evidence look like, for you?"

https://fred.stlouisfed.org/series/MFPPBS https://fred.stlouisfed.org/series/OPHNFB

Productivity is by definition real output (usually inflation adjusted dollars) per unit of input. That could be per hour worked, or per representative unit of capital + labor mix.

I would accept an increase in the slope of either of these lines as evidence of a net productivity increase due to artificial intelligence (unless there were some other plausible cause of a productivity growth speedup, which at present there is not).
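If you want to eyeball that slope change yourself, here's a minimal sketch (the pandas_datareader FRED access is real, but the start date and the end-of-2022 cutoff are my own illustrative assumptions, and average growth over a short post-period is noisy, so this is not proof either way):

    import pandas_datareader.data as web

    def annualized_growth(s):
        # average annualized growth rate over the span of an index-level series
        years = (s.index[-1] - s.index[0]).days / 365.25
        return (s.iloc[-1] / s.iloc[0]) ** (1 / years) - 1

    for sid in ["OPHNFB", "MFPPBS"]:  # the two FRED series linked above
        s = web.DataReader(sid, "fred", start="2010-01-01").dropna()[sid]
        pre, post = s[:"2022-12-31"], s["2023-01-01":]
        print(sid, f"pre-2023: {annualized_growth(pre):.2%}/yr,",
              f"post-2023: {annualized_growth(post):.2%}/yr")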


There are two sides to this that I see:

First, I'd expect the trajectory of any new technology that purports to be the next big revolution in computing to follow an adoption and productivity pattern similar to that of desktop computing, such as the 1995-2005 period[0]. There has not been any indication of such an increase since 2022[1] or 2023[2]. Even the most generous estimate is the one Anthropic itself published in 2025:

>Extrapolating these estimates out suggests current-generation AI models could increase US labor productivity growth by 1.8% annually over the next decade[3]

That not only assumes the best-case scenario, but would still fail to eclipse the height of the productivity gains from computer adoption over a similar period, 1995-2005, at around 2-2.5% annually.
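To put those rates side by side over a decade (a back-of-the-envelope sketch using only the growth rates quoted above):

    # cumulative effect of each annual productivity growth rate over ten years
    for label, rate in [("Anthropic AI estimate", 0.018),
                        ("1995-2005 computing, low end", 0.020),
                        ("1995-2005 computing, high end", 0.025)]:
        print(f"{label}: {(1 + rate) ** 10 - 1:.1%} cumulative over 10 years")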

Second is cost. These tools are multiples more expensive to adopt than computing was en masse, especially since 1995. So whatever productivity increase they deliver is not driving overall costs down relative to the gains, in large part because you aren't seeing any substantial YoY productivity growth after adopting these AI tools. Computing had a different trend: not only did it get cheaper over time, but its relative cost was outweighed by the YoY increase in productivity.

[0]: https://www.cbo.gov/sites/default/files/110th-congress-2007-...

[1]: First year in which mass-market LLM tools started to show up, particularly in the software field (GitHub Copilot, for instance, launched in 2021)

[2]: First year in which GPT-4 showed up and really blew up awareness of LLMs

[3]: https://www.anthropic.com/research/estimating-productivity-g...


Well, you would think that if there were increased productivity there would be at least a couple of studies, some clear artifacts, or an increase in the quality of software being shipped.

Except all we have is "trust me bro, I'm 100x more productive" Twitter/blog posts, blatant pre-IPO AI company marketing disguised as blog posts, studies that show AI decreases productivity, increased outages, more CVEs, anecdotes without proof, and not a whole lot of shipped software.


If that's the case, I feel like you can't actually be using them or paying attention. I'm a big proponent and use LLMs for code and hardware projects constantly, but Gemini Pro and ChatGPT 5.2 are both probably in the worst state we've seen. Six months ago I was worried, but at this point I have started finding other ways to get answers, going back to the stone tablets of googling and looking at Stack Overflow or Reddit.

I still use them, but find that more of the time is spent arguing with them and correcting their problems than actually getting any useful product.


> I still use them, but find that more of the time is spent arguing with them and correcting their problems than actually getting any useful product.

I feel the same. They're better at some things, yes, but also worse at others. And for me, they're worse at my really important use cases. I could spend a month typing prompts into Codex or AntiGravity and still be left holding the bag. Just yesterday I had a fresh prompt and Gemini bombed super hard on some basic work, insisting the problem was X when it wasn't. I don't know. I was super bullish, but now I'm feeling far from sold on it.


AI is definitely able to sling out more and more lines of code, yes. Whether those LOC are productive...?


Tomorrow's Calc app will have 30mil lines of code and 1000 npm dependencies!


and 2+2 will output 4 almost all the time... just like a human would.


What's your point?


The usual thing is that the market ends up around $0.95 for things like that, if the actors are all solid investors. It only takes one overly enthusiastic "yes" buyer to break that ceiling, and the smart money won't "correct" it back down to $0.95.

There's another idea, which is to make contracts that pay out in shares of an ETF, but I haven't seen it put into practice.


That's correct. Also, Kalshi does pay out interest, and Poly does on a few markets.


Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models?


They're incorrect about the routing claim, but it is not a newly trained model.


I think "blockbuster movie" is a moving target, so it's a bit hard to know


It's a relatively well defined measure of success though: a movie which is popular and high-grossing.


Yep. Totally agreed that it's well defined. Only pointing out that the technical execution required will shift, which seems relevant because it's likely to make it take much longer than it would without this effect.


Because before buybacks there were dividends. Did the difference between buybacks and dividends really make the difference between doing basic research and not?


It’s likely. Dividends provide high levels of exponential growth long term for an otherwise steady-state company, which makes them more compelling than many long-term investments.

Convert X% of a stock’s value into a dividend and you pay taxes on it before you can buy more stock, but someone who keeps buying stock sees an exponential return. (A higher percentage of the company = larger dividends.)

A company buying back X% of its stock functions like a dividend with automatic stock repurchase, but without the tax on dividends, so you’re effectively buying more stock. Adding a tax on stock buybacks could eliminate that bias, but it’s unlikely to happen any time soon.
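A rough sketch of that tax drag (the 2% annual payout, 15% dividend tax rate, and 30-year horizon are illustrative assumptions, not figures from this thread, and it ignores the underlying growth of the business):

    # relative growth of an investor's stake from the payout mechanism alone
    payout, div_tax, years = 0.02, 0.15, 30

    dividend_stake, buyback_stake = 1.0, 1.0
    for _ in range(years):
        dividend_stake *= 1 + payout * (1 - div_tax)  # dividend taxed, then reinvested
        buyback_stake *= 1 + payout                    # buyback compounds untaxed until shares are sold

    print(f"dividends reinvested: {dividend_stake:.3f}x  buybacks: {buyback_stake:.3f}x")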


Indeed. There are trillions of dollars /per year/ paid to workers in the US alone.


Like, there is an argument that can be made here, but "there's just not enough money in the world to justify this" definitely isn't it


Just because trillions are currently spent on employees does not mean that more trillions exist to spend on AI. And if, instead, one's position is that those same trillions will be spent on AI instead of employees, then one is envisioning a level of mass unemployment and ensuing violence that will result in heads on pikes.


"due to privacy concerns about privacy"

This strikes me as a particularly funny typo


Probably wrote "due to concerns about privacy" then realized it should be "due to privacy concerns" and forgot to remove the original bit.

Many such cases.


I often do that frequently. I should do it, but forget to not fully proof read after a quick edit. I also regularly leave out n't a lot when changing where a negation happens (see above).


Definitely not using Apple's epic proofread feature.


"In hindsight, the ad slogan 'Sunshine on your privacy' was a little too obvious, even for modern consumers. Let's Dazzle them with the next shiny thing instead."


"Sometimes" and "sense" are both wrong. I don't think this library is very good


Why do you think systems need to be sentient to be risky?


OP isn’t talking about systems at large, but specifically about LLMs and the pervasive idea that they will turn into AGI and go rogue. Pretty clear context given the thread and their comment.


I understood that from the context, but my question stands. I'm asking why OP thinks that sentience is necessary for risk in AI

