Hacker News | louiereederson's comments

Apple is a two sided market between developers and users. OpenAI has not succeeded in building this so far.

This article is bad. It is mixing up capex and opex. OpenAI is projecting more spending on compute through their income statement now than they were 6 months ago.

Seems like code production is up, but mainly in feature branches. Main-branch changes are down, and MTTR is deteriorating. This holds for all but the highest-performing (top 5th-percentile) teams.

This has to be driven by AI adoption. I'm trying to keep an open mind, but I can't see how this is productive. Maybe the optimistic take is that the top 5% are showing the way and the remainder will follow? That seems a bit too optimistic, though, because the bottom 95% are probably burdened by technical/organizational debt that limits their ability to adapt quickly enough.


I think this was the best write-up on the impact of AI on software engineering I've read yet. By extension it might be the best 'take' on AI I've read, period.

Alas, it was not written by AI and boosted on X (which is owned by an LLM vendor), and therefore will not get 80m views or whatever.


Care to elaborate?

Sure. If you turn on "show dead" you will see half a dozen green-named (i.e., recently established) accounts that are obviously "agents". They're clogging up the pipe with noise. We as a collective are well-positioned to fight back and help protect the commons from the monster we have created.

It's even worse. They're not limited to new accounts. I've seen a lot of bots now from accounts that are literally years old but with zero activity that suddenly start posting a lot of comments within a span of 24 to 48 hours. I have some examples of them if you search my recent comments.

Welp, I just might get flagged by your method then. I lurk extensively on this site. I haven’t figured out how to “fit in”.

You would not. You don't normally post lots of comments. The occasional return after a long period of inactivity is not in itself suspicious.

I've seen this too. What's confusing is they don't seem to be accomplishing anything? They're not pushing products.

What's the point? To prime the account for later?


"Can the bot army push average opinion x% on this innocuous topic?" It could very easily be A/B testing a propaganda system.

I am simultaneously grateful that you told us about this, and also kind of wish I didn't know. There's so much.

Wow thank you, I didn't know about this feature

I know they acknowledge this, but measuring autonomy by looking at the task length of the 99.9th percentile of users is problematic. They should not be using the absolute extreme tail of usage as an indication of autonomy; it seems disingenuous. Does it measure capability, or just how the most extreme users use Claude? It just seems like data mining.

The fact that there is no clear trend in lower percentiles makes this more suspect to me.

If you want to control for user base evolution given the growth they've seen, look at the percentiles by cohort.

I actually come away from this questioning the METR work on autonomy.

You can see the trend for other percentiles at the bottom of this, which they link to in the blog post https://cdn.sanity.io/files/4zrzovbb/website/5b4158dc1afb211...


Referring to my earlier comment, you need to have a model for how to account for training costs. If Anthropic stops training models now, what happens to their revenues and margins in 12 months?

There's a difference between running inference and running a frontier model company.


Training costs are fixed. You spend $X-bn training a model and that single model then benefits all of your customers.

Inference costs grow with your users.

Provided you are making a profit on that inference you can eventually cover your training costs if you sign up enough paying customers.

If you LOSE money on inference every new customer makes your financial position worse.
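As a toy sketch of this fixed-vs-variable argument (all figures are hypothetical illustration, not real vendor numbers):

```python
# Fixed training cost vs. variable inference cost.
# All numbers are made up purely for illustration.

TRAINING_COST = 1_000_000_000   # fixed: one $1B training run
REVENUE_PER_USER = 240          # hypothetical annual revenue per user
INFERENCE_COST_PER_USER = 180   # hypothetical annual inference cost per user

margin_per_user = REVENUE_PER_USER - INFERENCE_COST_PER_USER

if margin_per_user > 0:
    # Profitable inference: enough customers eventually cover the fixed cost.
    breakeven_users = TRAINING_COST / margin_per_user
    print(f"break even at {breakeven_users:,.0f} users")
else:
    # Unprofitable inference: every additional customer deepens the loss.
    print("no break-even; losses scale with users")
```

With a positive per-user margin the fixed training cost amortizes away as users grow; with a negative one, growth only makes things worse.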


Your ability to sign up enough customers is directly related to your ability to sustain training costs. Model runs have a short lifespan. Each run may serve many customers at a given point in time, but in order to keep serving those customers over time you need to continually spend on training.

I think your mental model for an LLM vendor is similar to a foundry (e.g. TSMC). They spend a bunch of R&D on developing leading-edge nodes and build fabs. That, in your mental model, would be similar to training costs.

My point is the correct mental model is more like (but not exactly like) a SaaS company, ironically. SaaS unit economics are a function of gross margin, churn, and acquisition costs, i.e. Revenue x gross margin / churn - CAC. My point is some element (maybe the entirety) of training costs is more like CAC than like TSMC's R&D and capex. The question to ask to test this view is: what happens to OpenAI or Anthropic revenue in 2027 or 2028 if they stop spending on training today? My view is it'll drop precipitously. This implies churn is very high. It is true that training costs can be spread over customers, so the analogy breaks down there, but I think it is a better mental model than the foundry one.
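As a toy sketch of that LTV formula (all inputs hypothetical; the point is how sensitive the result is to churn):

```python
# Simple SaaS unit economics: LTV = ARPU * gross_margin / churn, net of CAC.
# All inputs are hypothetical; the structure, not the values, is the point.

def customer_value(arpu: float, gross_margin: float, churn: float, cac: float) -> float:
    """Lifetime value per customer, net of acquisition cost."""
    ltv = arpu * gross_margin / churn
    return ltv - cac

# Low churn (classic SaaS): value accrues over a long customer lifetime.
print(customer_value(arpu=1200, gross_margin=0.8, churn=0.10, cac=3000))  # 6600.0

# High effective churn (customers defect unless you keep retraining):
print(customer_value(arpu=1200, gross_margin=0.8, churn=0.60, cac=3000))  # -1400.0
```

If continual training spend is what keeps churn low, then treating it as CAC-like rather than one-off R&D flips the unit economics, which is the comment's point.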


Yes, you need to continue spending money on training. But you don't need to spend MORE money on training just because you signed up more customers.

Anthropic reduced their gross margin forecast per external reporting (below) to 40%, and have exceeded internal forecasts on inference costs. This does not take into account amortized training costs, which are substantial (well over 50% of revenue) and accounted for below gross profit. If you view training as a cost of staying in the game, then it is justifiable to view it as an at least partially variable cost that belongs in gross margin, particularly given that the models stay on the leading edge for only a few months. If that's the case, then gross margins are probably minimal, maybe even negative.

https://www.theinformation.com/articles/anthropic-lowers-pro...
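As a rough sketch of the arithmetic above (the ~40% gross margin and "well over 50% of revenue" figures come from the comment; the exact training number is an assumed value for illustration):

```python
# Illustrative margin arithmetic based on the figures cited in the comment:
# ~40% gross margin on inference, training > 50% of revenue below gross profit.

revenue = 100.0
inference_cogs = 60.0      # implies the reported ~40% gross margin
amortized_training = 55.0  # "well over 50% of revenue" (assumed value)

reported_gm = (revenue - inference_cogs) / revenue
fully_loaded_gm = (revenue - inference_cogs - amortized_training) / revenue

print(f"reported gross margin:     {reported_gm:.0%}")      # 40%
print(f"training-inclusive margin: {fully_loaded_gm:.0%}")  # -15%
```

Moving amortized training from below gross profit into COGS is what flips the margin negative under these assumptions.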


Big tech might be ahead of the rest of the economy in this experiment. Microsoft grew headcount by ~3% from June 2022 to June 2025 while revenue grew by >40%. This is admittedly weak anecdata, but my subjective experience is that their products seem to be crumbling (GitHub problems around the Azure migration, for instance), even worse than they were before. We'll see how they handle hiring over the next few years and whether that reveals anything.


Well, Google just raised prices by 30% on GSuite "due to AI value delivered", but you can't even opt out, so even revenue is a bullshit metric.


You say this with such confidence and then ask if smaller chips require smaller wafers.

