> If anything their fast reduction in value makes them less attractive.
Right. And if you buy a secondhand one you are increasing their value on the secondhand market. Reducing the depreciation increases the value of the brand new phone.
Create a vibe-coded demo -> showcase it with a faked/overblown video and an "It's not X It's Y. Read thread!!11!" type engagement bait -> Get LLM comments all riled up and excited in the comments to fake hype -> sell a course/LLM wrapper.
We do sort of have that with the capabilities stuff (although I admit hardly anyone knows how to use it).
But the tricky part is that "reading files" happens all the time in ways you might not think of as "reading files". For example, loading dynamic libraries involves reading files. Making network connections involves reading files (resolv.conf, hosts). Formatting text for a specific locale involves reading files. Working out the timezone involves reading files.
Even just echoing "hello" to the terminal involves reading files:
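You can see this on a typical glibc Linux system (assuming coreutils' /bin/echo and the glibc `ldd` tool are present):

```shell
# Even /bin/echo can't start without reading files: before main() runs, the
# dynamic linker has to open ld.so.cache and every shared library from disk.
ldd /bin/echo

# To watch the reads as they happen, strace (if installed) lists each openat():
#   strace -e trace=openat /bin/echo hello
```

Any sandbox that blocks all file reads outright breaks even this trivial program.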
Capabilities are craaaazy coarse on Linux. Really only a small piece of the sandboxing puzzle. Flatpak, Bubblewrap, and Firejail each provide a fuller picture of what sandboxing can be.
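For a sense of how coarse: the whole model is a handful of fixed-size bitmasks per process, which Linux exposes in /proc (a Linux-only sketch):

```python
# Linux-only: the kernel reports each process's capability sets as hex
# bitmasks in /proc/<pid>/status. Roughly 40 defined bits (CAP_CHOWN,
# CAP_NET_ADMIN, CAP_SYS_ADMIN, ...) have to cover everything privileged
# the kernel can do, which is why single bits like CAP_SYS_ADMIN end up
# as grab-bags.
with open("/proc/self/status") as f:
    for line in f:
        if line.startswith(("CapInh", "CapPrm", "CapEff", "CapBnd", "CapAmb")):
            print(line, end="")
```

There is no bit for "may read this directory but not that one"; that kind of policy needs namespaces, seccomp, or an LSM, which is what the tools above orchestrate.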
He's counting out like 6 at a time. He needs a fast way to pick small quantities precisely, not a fast way to check large quantities. Once they're picked they're easily verified by eye.
Okay, then imagine you overhear it at a bar. Yes, “anyone could have” theoretically, but not actually. In either case, you have material non-public information that your counterparty in the market does not.
You got that piece of non-public information not because you are an insider. As long as the bar is not exclusive to insiders, I don't see any difference.
Isn't it exclusive to people who live in the area of the bar?
What if the bar has a cover charge, so only those who pay get in?
What if the cover charge is $10,000 and the bar is advertised as "the place where public company execs love to come talk to each other about private deals"?
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
Because the people who are consistently right will consistently win money and will make bigger bets which move the price more, in the limit case making the price converge on the true probability of the outcome.
This is the theoretical underpinning of prediction markets.
Equating being "consistently right" with having a sufficiently large stash of capital is ludicrous.
"right" people will wisely take most of their winnings out of a high-variance market. "wrong" people with deep pockets (or lots of wrong people with shallow pockets) will continue to distort the market.
They can only do so as long as they have enough capital to lose. Every time they try to move the betting markets against the truth, they will simply lose that money when the event happens (and it turns out they were wrong).
So any distortion will merely be temporary. Unless they have access to unlimited capital of course - which is not true yet for anyone (but the US gov't).
I have been in dozens of meetings over the past year where directors have told me to use AI to enable us to fire 100% of our contract staff.
I have been in meetings where my director has said that AI will enable us to shrink the team by 50%.
Every single one of my friends who do knowledge work has been told that AI is likely to make their job obsolete in the next few years, often by their bosses.
You don't have to look past this very forum: most people here seem to be very positive about gen AI, at least when it comes to software development specifically.
Lots of folk here will happily tell you about how LLMs made them 10x more productive, and then their custom agent orchestrator made them 20x more productive on top of that (stacking multiplicatively of course, for a total of 200x productivity gain).
I don't know what your bubble is, but I'm a regular programmer and I'm absolutely excited, even if a little uncomfortable. I know a lot of people who feel the same.
I am using AI a lot to do tasks that just would not get done because they would take too long. Also, getting it to iterate on a React web application meant I could think about what I want it to do rather than worry about all the typing I would have to do. It is especially powerful when moving things around: hand-written code has a "mental load" to move that telling an AI to do it does not.
Obviously not everything is 100% but this is the most productive I have felt for a very long time. And I've been in the game for 25 years.
Why do you need to move things around? And how is that difficult?
Surely you have an LSP in your editor and are able to use sed? I've never had moving files take more than fifteen minutes (for really big changes), and even then most of the time is spent thinking about where to move things.
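For the mechanical part, the grep-plus-sed approach described above looks roughly like this (a made-up rename; GNU sed/xargs flags assumed):

```shell
# Hypothetical demo: move a module and fix every import of it in one pass.
dir=$(mktemp -d)
cd "$dir"
mkdir -p src/helpers
printf 'from utils import slugify\n' > src/app.py
printf 'def slugify(s): return s\n'  > src/utils.py

mv src/utils.py src/helpers/strings.py
grep -rl 'from utils import' src \
  | xargs -r sed -i 's/from utils import/from helpers.strings import/'

cat src/app.py   # from helpers.strings import slugify
```

On macOS/BSD, `sed -i` needs an explicit backup suffix (`sed -i ''`); an LSP's rename-symbol does the same job with semantic awareness instead of text matching.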
LLMs have been reported to specifically make you "feel" productive without actually increasing your productivity.
I mean there are two different things. One is whether there are actual productivity boosts right now. And the second is the excitement about the technology.
I am definitely more productive. A lot of this productivity is wasted on stuff I probably shouldn't be writing anyway. But since I started using coding agents, I'm both more productive at my day job and building so many small hobby projects that I would never have found time for otherwise.
But the main topic of discussion in this thread is the excitement about the technology. And I have somewhat mixed feelings, because on the one hand I feel like a turkey being excited for Thanksgiving. On the other hand, I think the future of programming is bright: there will be so much more software built, and for a lot of it you will still need programmers.
My excitement comes from the fact that I can do so many more things that I wouldn't even have considered possible a few months ago.
Just as an example, in the last month I have used agents to add features to the applications I use daily: a text editor, a podcast application, an Android keyboard. The agents were able to fork, build, and implement a feature I asked for in projects whose technology I know nothing about. If I were hired to do those features, I would be happy to have implemented them after two weeks on the job. With an agent, I get tailor-made features in half a morning, spending less than ten minutes prompting.
I am building educational games for my kids.
They learn a new topic at school? Let me quickly vibe-code a game to make learning it fun. A project that wouldn't be worth my weekend, but is worth 15 minutes. https://kuboble.com/math/games/snake/index.html?mode=multipl...
So I'm excited because I think coding agents will be for coding what pencil and paper were for writing.
I don't understand the idea that you "could not think about implementing a feature".
I can think of roughly 0 features of run-of-the-mill software that would be impossible for a semi-competent software developer to implement. Especially for the kinds of applications you mention.
Also it sounds less like you're productive and more like the vibeslop projects are distracting you.
I produce more good (imo) production features despite being distracted.
The features I mention are something I would be able to do, but only with a lot of learning and great effort, so in practical terms I would not.
It is probably a skill issue, but in the past I have often downloaded an open source project and just couldn't build and run it: cryptic build errors, figuring out dependencies. And I see Claude get the same errors, but it just knows how to work around them.
Setting up local development environment (db, dummy auth, dummy data) for a project outside of my competence area is already more work than I'm willing to do for a simple feature. Now it's free.
>I can think of roughly 0 features of run-of-the-mill software that would be impossible for a semi-competent software developer to implement.
Yes. In my area of competence it can do the coding tasks I know exactly how to do, just a bit faster. Right now, for those tasks, I'd say it can one-shot code that would take me a day.
But it enables me to do things in the area where I don't have expertise. And getting this expertise is very expensive.
I have a large C# application. In it I have functionality to convert a group of settings into a tree model (a list of commands to generate the tree). There are a lot of weird settings and special cases.
I asked claude to extract this logic into a separate python module.
It successfully one-shot that, and I would estimate it at 2 days of work for me (and I wrote the original C# code).
This is probably the best possible kind of task for coding agents, given that it's a very well-defined task with already-existing test cases.