Probably yes. Additionally, they (probably more so for AWS) won't be allowed to use it internally either. This will probably apply to all the top SaaS/software companies across the board.
Additionally, every major university will undoubtedly have to terminate the use of Claude. First on the list will be universities that run labs under DOD contracts (e.g. MIT, Princeton, JHU), DOE contracts (Stanford, University of California, UChicago, Texas A&M, etc...), NSF facilities (UIUC, Arizona, CMU/Pitt, Purdue), NASA (Caltech).
Following that it will be just those who accept DOD/DOE/NSF grants.
In the recent Supreme Court hearing over the firing of Lisa Cook from the Federal Reserve, the administration is acting like Truth Social posts are official notices.
>Several justices have noted the unusual nature of the case before it, which began with a post by Trump on his social media platform, Truth Social, that said he would fire Cook.
>Jackson wondered why that would be considered sufficient notice: “How is it that we can assume that she’s on social media?”
In certain professions it wasn't uncommon to spend $3k/year or more (in 2026 dollars) on software licenses - Adobe CS4/CS6 etc., and a handful of products could easily push you over that. Plenty of other professions require people to pay for their own tools as well.
What I get for $150/month I'd easily pay twice or more for, even out of pocket, just for the current functionality - even if it were frozen in time. I'd imagine many, if not most, readers on Hacker News would do the same. Multiplied across the entire population of software developers (and the broader population using AI), I think it's easy to see what AI is worth in a grounded way.
I hadn’t realized Hyperspace Mountain in Disneyland Paris went upside down (and launched up) before I took my 6 year old on it - I was assuming it was just a replica of the Disneyland one, which I thought didn't.
He was a bit intimidated by the enhanced strapping, but he liked it still.
I think validation is already much easier using LLMs. Arguably this is one of the best use cases for coding LLMs right now: you can get claude to throw together a working demo of whatever wild idea you have without needing to write any code or write a spec. You don't even need to be a developer.
I don't know about you, but I'd much rather be shown a demo made by our end users (with claude) than get sent a 100 page spec. Especially since most specs - if you build to them - don't solve anyone's real problems.
Hm, how much real life experience do you have in delivering production SW systems?
A demo for the main flow is easy. The hard part is thinking through all the corner cases and their interactions, so your system works robustly in the real world, interacting with everyday chaos in a non-brittle fashion.
Well, he said anyone can (or soon will) vibe-program their own MS Word - there is no way he is a programmer, sorry. The complexity of these systems is crazy. Unless he meant an HTML text area with a "save" button - then sure, why not.
> The complexity of these systems is crazy. Unless he meant an HTML text area with a "save" button - then sure, why not.
What do you see as the difference between an LLM making an HTML text area with a save button, and an LLM making MS Word? It just sounds like a scaling problem to me. We've been scaling computers since long before I was born. My first computer was a 386 with 4 MB of RAM. You needed a special add-in chip to enable floating point calculations. Now look at what we have.
As far as I can tell, the only difference between opus 4.6 and some future AI model that could code up MS Word is a difference in scale. Are you betting on the entire computing (software and hardware) industry being unable to scale LLMs past their current point? That seems like a really bad bet to me, especially seeing how far they've come in the last few years. Claude code can already do some quite complex tasks. I got it to write a simple web-based email client for me yesterday. It took about an hour in total. It has some bugs, but the email client works.
We scaled hard drives. We scaled down silicon chips. We scaled digital camera sensors. And display resolutions. And networking bandwidth. We went from the palm pilot to the first iphone to modern phones. Do you really think we'll be unable to scale AI models?
>> industry being unable to scale LLMs past their current point
100% bet - no way any "AI" will be able to generate anything close to a complex piece of software like MS Word within a reasonable time and budget. Given infinite time and money - sure, anything is possible, just like a trillion monkeys randomly typing out "War and Peace" once in a trillion years in some remote galaxy. I don't even understand your confidence given how much guidance and hand-holding LLMs need at the moment to produce anything useful.
Yep. Claude today? No way can it achieve this. It can barely write a working C compiler.
I'm looking at the trend line. A few years ago it couldn't make a simple webpage. Now it can make a bad C compiler for thousands of dollars of tokens. What does it look like in another few years? Or another two decades?
Hard disagree, clients/users often don't know what the best/right solution is, simply because they don't know what's possible or they haven't seen any prior art.
I'd much rather have a conversation with them to discuss their current problems and workflow, then offer my ideas and solutions.
> The second part is going to be the hard part for complex software and systems.
Not going to. Is. Actually, it always has been; it isn’t that coding solutions wasn’t hard before, but verification and validation cannot be made arbitrarily cheap. This is the new moat - if your solutions require time-consuming and (in dollar terms) expensive QA, in the widest sense, that becomes the single barrier to entry.
There was a recent discussion about how having AI write the validation for the code is a good approach. If you have formal proofs for your code, your QA needs go down.
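As a toy illustration (my own sketch, not something from the thread): short of full formal proofs, a cheap middle ground is property-based validation that a model could plausibly generate - state the invariants and check them exhaustively over a bounded input domain. The `my_sort` function and `validate` harness here are hypothetical names for the sake of the example:

```python
from itertools import product

def my_sort(xs):
    # Stand-in for the implementation under test.
    return sorted(xs)

def validate(fn, max_len=4, values=range(3)):
    """Exhaustively check two invariants over all small inputs:
    (1) the output is in non-decreasing order, and
    (2) the output is a permutation of the input."""
    for n in range(max_len + 1):
        for xs in product(values, repeat=n):
            out = fn(list(xs))
            assert all(a <= b for a, b in zip(out, out[1:])), xs
            assert sorted(out) == sorted(xs), xs
    return True

print(validate(my_sort))  # → True
```

Exhaustive checking over a small domain isn't a proof, but it's machine-checkable, cheap to regenerate, and catches whole classes of corner-case bugs without a human reading every line.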
There will continue to be new gas plants as long as there are coal plants to be converted, usually around the time a major overhaul would be needed anyway.