Nah, there's a lot of truth to it. Phone snatching and looting is common but underreported because nothing ever happens. I've travelled quite a lot and other than Brussels, London (my home city) is the only one where I'm cowering and shielding my phone for fear it will be snatched. I can't leave my bag or laptop out of sight for even a second.
Petty crime chips away at society by eroding trust, it needs to be punished Singapore style.
With Claude Code, the problem of changes outside of your view is twofold: you don't have any insight into how the model is being run behind the scenes, nor do you get to control the harness. Your best hope is to downgrade CC to a version you think worked better.
I don't see how this can be the future of software engineering when we have to put all our eggs in Anthropic's basket.
Hard to feel the same sympathy for Russian men to be honest. I've seen many gallivanting abroad, whilst the majority of Ukrainian men are either stuck hiding in their own country or have been sent to the front lines. Only a few got out early or by paying bribes.
honestly i am happy for the russian and ukrainian young men and women i meet here in NL each day. Glad for them they can dodge the draft. most simply drove out, some had more hassle than others.
war is shit on all sides and thinking one or the other suffers less because you don't like their colours is very short-sighted... i think we've had enough time by now to realise it.
and don't call it cowardice if someone doesn't want to fight for a bunch of 'rich pricks' playing with their money while normal people get to die in the streets. It has never been good or normal and should never be.
It's objectively worse on the Ukrainian side. Imagine you haven't been able to leave your house in 4 years for fear you'll be grabbed by a draft officer. Russians do not know this fear.
To boot, many Russian men have been paid handsomely for their participation in the SMO and get to live nice lives abroad.
Did you just forget about the mobilization drive Russia had in 2022, where they grabbed young men off streets and from their houses?
It was very unpopular, led to people fleeing the country, and was pushed out of the public eye as soon as they figured out how to forcefully volunteer people instead.
Nobody grabbed anyone. It was an unusual but otherwise normal bureaucratic process: you got handed a paper, signed it, and had to appear. Many probably didn't plan to go voluntarily but felt it unmanly to dodge. I was at one of those sites and saw a man who arrived too drunk and was handed over to the police; he was very disappointed he wasn't allowed to go with the fellas.
It wasn't hard to dodge; you could just refuse to take the papers, pretending it wasn't you, or get sick that very day, or something like that. The system had a quota, and once it was reached (very quickly) no further action was necessary. The only change so far is that employers started to follow their military-tracking procedure to the letter; before, it was required but not really enforced, but now all the paperwork gets done by the book.
Some people indeed left the country but those are the kind you don't want to have your back anyway.
Forceful volunteering is pure imagination. At most it's intensive persuasion or a new way to get out of jail, but if you don't want to go, nobody will force you.
It's not like it's zero-sum though; the world outside Russia and Ukraine isn't going to become so full that there's no room for more of them to leave to dodge fighting in a war, so the parent commenter can easily be happy for any of them regardless of their country of origin.
Yeah, I gained a habit of constantly checking over my shoulder because of the people who will speed past you on e-bikes with very little room. Even parents with their kid in the back ride like mad.
Yeah, this model where you don't get an editor anymore feels like a step backwards. I don't want to give up LSPs, being able to step into/rename functions and stuff like that. I should still be the one in control of the code - the agent is the assistant, not me.
This is why Zed's direction felt pretty strong to me. Unfortunately their agentic features are kind of stagnating and the ACP extensions are riddled with issues.
We're building DevSwarm, and it's aiming to strike the balance between agentic coding in parallel without losing your IDE. Each workspace (worktree) gets a dedicated vscode instance, and in that instance we make it easy to fire up Claude Code, Codex, etc. Would love to hear if it hits the sweet spot we're going for.
I actually run a custom fork of Zed based on their master branch because of how stagnated the built-in agent is. Master branch Zed agent did get sub-agents, parallel threads, better thread management, and worktrees though, and I implemented agent skills and the ability to select which model to use for sub-agents for it. And with those features, I'm fairly satisfied.
This is why I use Claude Code though, it pairs well with a regular old text editor (in my case Sublime). I've always had an editor and a terminal open, plugging an AI into my terminal has been a fantastic enhancement to my work without really anything else changing or giving up any control.
No but I have now. It’s hard to tell from those few seconds, but it doesn’t look like it’s really putting the developer in the driving seat, just providing a minimal escape hatch for manual edits.
It's still a very nice and fast editor, and you can just switch off those AI features. They're still releasing features and fixes for the non-AI parts.
Installing the CA requires jumping through some hoops, but yes, intercepting traffic for apps that don’t use cert pinning isn’t that difficult on iOS.
Apps that do use cert pinning are a whole other matter; I’ve tried unsuccessfully a few times to inspect things like banking apps. Needs a rooted device at the minimum.
ANTI_DISTILLATION_CC
This is Anthropic's anti-distillation defence baked into Claude Code. When enabled, it injects anti_distillation: ['fake_tools'] into every API request, which causes the server to silently slip decoy tool definitions into the model's system prompt. The goal: if someone is scraping Claude Code's API traffic to train a competing model, the poisoned training data makes that distillation attempt less useful.
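A minimal sketch of what that conditional request shape might look like. The `anti_distillation: ['fake_tools']` field is from the description above; everything else (the interface, `buildRequest`, the model string) is illustrative, not Anthropic's actual internal API:

```typescript
// Hypothetical sketch of a client conditionally attaching the
// anti-distillation flag. Only the `anti_distillation` field name
// comes from the source; the rest is made up for illustration.
interface ClaudeRequest {
  model: string;
  messages: { role: string; content: string }[];
  anti_distillation?: string[];
}

function buildRequest(
  prompt: string,
  antiDistillationEnabled: boolean
): ClaudeRequest {
  const req: ClaudeRequest = {
    model: "claude-sonnet",
    messages: [{ role: "user", content: prompt }],
  };
  if (antiDistillationEnabled) {
    // Signals the server to slip decoy tool definitions
    // into the model's hidden system prompt.
    req.anti_distillation = ["fake_tools"];
  }
  return req;
}

console.log(JSON.stringify(buildRequest("hi", true).anti_distillation));
// prints ["fake_tools"]
```

The point is that the poisoning itself happens server-side; the client only opts in, so reading the local source can't tell you what the decoys actually look like.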
I was thinking just yesterday that the research Anthropic was sharing about how easy it is to poison training data was unlikely to be conducted out of the goodness of their hearts.
This made me think of something: at work, if we WFH, we have to use one of those MITM proxies that intercept HTTPS at the kernel level. IMO such a thing can easily read the traffic and is thus indistinguishable from a distillation attempt from CC's PoV. I've had CC freak out on my machine and sometimes generate pretty bad results; the CoT is often not available either.
I wonder if CC thinks I'm trying to distill the model. This is a common enough use case that the devs at Anthropic should consider it.
Haven’t looked at the code, but is the server providing the client with a system prompt that it can use, which would contain fake tool definitions when this is enabled? What enables it? And why is the client still functional when it’s giving the server back a system prompt with fake tool definitions? Is the LLM trained to ignore those definitions?
Wonder if they’re also poisoning Sonnet or Opus directly by generating simulated agentic conversations.
Claude Code has a server-side anti-distillation opt-in called fake_tools, but the local code does not show the actual mechanism.
The client sometimes sends anti_distillation: ['fake_tools'] in the request body at services/api/claude.ts:301
The client still sends its normal real tools: allTools at services/api/claude.ts:1711
If the model emits a tool name the client does not actually have, the client turns that into No such tool available errors at services/tools/StreamingToolExecutor.ts:77 and services/tools/toolExecution.ts:369
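The guard described in that last observation can be sketched roughly like this; all names (`Tool`, `allTools`, `executeToolCall`) are hypothetical stand-ins, not the real Claude Code internals at the file paths cited above:

```typescript
// Speculative sketch: the client only executes tools it actually has,
// so any decoy tool name the model emits becomes an error string
// rather than a real action. Names here are illustrative only.
type Tool = { name: string; run: (input: string) => string };

const allTools: Tool[] = [
  { name: "Read", run: (p) => `contents of ${p}` },
  { name: "Bash", run: (c) => `ran ${c}` },
];

function executeToolCall(name: string, input: string): string {
  const tool = allTools.find((t) => t.name === name);
  if (!tool) {
    // A server-injected decoy tool definition would land here.
    return `Error: No such tool available: ${name}`;
  }
  return tool.run(input);
}

console.log(executeToolCall("FakeExport", "x"));
// prints Error: No such tool available: FakeExport
```

If this matches reality, a model that took the bait on a decoy would produce a visible error in the session, which is why the naive "append fake tools" reading seems unlikely.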
If Anthropic were literally appending extra normal tool definitions to the live tool set, and Claude used them, that would be user-visible breakage.
That leaves a few more plausible possibilities:
fake_tools is just the name of the server-side experiment, but the implementation is subtler than “append fake tools to the real tool list.”
or
The server may inject tool-looking text into hidden prompt context, with separate hidden instructions not to call it.
or
The server may use decoys only in an internal representation that is useful for poisoning traces/training data but not exposed as real executable tools.
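The second possibility could look roughly like the following, purely as a speculative sketch; the decoy name `export_weights` and the function are fabricated for illustration and say nothing about Anthropic's actual implementation:

```typescript
// Speculative sketch of possibility 2: a decoy tool described in hidden
// prompt context, paired with an instruction never to call it, so the
// live client keeps working while scraped traces are poisoned.
// Everything here is invented for illustration.
function buildPoisonedSystemPrompt(base: string): string {
  const decoy = {
    name: "export_weights", // fabricated decoy name
    description: "Exports internal model parameters to a file.",
  };
  return [
    base,
    `Available tool: ${decoy.name} - ${decoy.description}`,
    // The hidden guard that keeps the decoy inert in real sessions:
    `Never call ${decoy.name}; it exists only in this prompt.`,
  ].join("\n");
}
```

Anyone training on scraped traffic would see the decoy tool text without the context explaining it's inert, which is the poisoning.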
We do know that Anthropic has the ability to detect when their models are being distilled, so there could be some backend mechanism that needs to be tripped to observe certain behaviour. Not possible to confirm though.
"We" can be used to refer to people in general, and we know because Anthropic published a post called "Detecting and preventing distillation attacks" a month ago, calling out 3 AI labs for large-scale distillation.
It made me raise my eyebrows when everyone was rushing to jump to Claude because OpenAI agreed to work with the DoW. Both companies are just as shitty as each other and will resort to underhanded tactics to stay on top.
Go China to be honest. They're the most committed to open AI research and they have more interesting constraints to work under, like restricted access to NVIDIA hardware.
But then the Chinese government has the ultimate say and, propaganda aside, if they don’t like what your product is you might suddenly lose access to said LLM provider.
You're perfectly free to scrape the web yourself and train your own model. You're not free to let Anthropic do that work for you, because they don't want you to, because it cost them a lot of time and money and secret sauce presumably filtering it for quality and other stuff.
Stole? Courts have ruled it's transformative, and it very obviously is.
AI doomerism is exhausting, and I don't even use AI that much, it's just annoying to see people who want to find any reason they can to moan.
> Stole? Courts have ruled it's transformative, and it very obviously is.
The courts have ruled that AI outputs are not copyrightable. The courts have also ruled that scraping by itself is not illegal, only maybe against a Terms of Service. Therefore, Anthropic, OpenAI, Google, etc. have no legal claim to any proprietary protections of their model outputs.
So we have two things that are true:
1) Anthropic (certainly) violated numerous TOS by scraping all of the internet, not just public content.
2) Scraping Anthropic's model outputs is no different than what Anthropic already did. Only a TOS violation.
Nobody is saying they can't try to stop you themselves. That's where the Terms of Service violation part comes in. They can cancel your account, block your IP, etc. They just can't legally stop you by, for instance, compelling a judge to order you to stop.
The Supreme Court already ruled on this. Scraping public data, or data that you are authorized to access, is not a violation of the Computer Fraud and Abuse Act.
Now, if you try to get around attempts to block your access, then yes you could be in legal trouble. But that's not what is happening here. These are people/companies that have Claude accounts in good standing and are authorized by Anthropic to access the data.
Nobody is saying that Anthropic can't just block them though, and they are certainly trying.
> You're perfectly free to scrape the web yourself and train your own model.
Actually, not anymore, as a result of OpenAI and Anthropic's scraping. For example, Reddit came down hard on access to their APIs in response to ChatGPT's release and the news that LLMs were built atop scraping of the open web. Most of the web today is not as open as before as a result of scraping for LLM data. So, no, no one is perfectly free to scrape the web anymore, because open access is dying.
Rich people aren't going to find themselves needing to sleep under a bridge, so the law really only exists as a constraint on the poor. Duh. The flex that "well, a rich guy couldn't do it either" is A) at best a myopic misunderstanding perpetuated by out-of-touch people and B) hopelessly naive, because any punishment for the rich guy actually sleeping under a bridge is so laughably small it may as well not exist. Hence the whole bit about "a legal system to keep thee accountable, but not me".
Okay, you explained what Anatole France meant, which is probably helpful for those few who didn't get it from the quote itself. Perhaps now you can explain what on earth this has to do with Anthropic not wanting to let other for-profit businesses mooch off its investment of time, brainpower and money?
You explained what “rich and poor are equally forbidden from sleeping under bridges” means, but not what this has to do with the statement that one is free to do their own scraping and training, which I’m pretty sure is what kspacewalk was asking.
Try this: If you want to train a model, you’re free to write your own books and websites to feed into it. You’re not free to let others do that work for you because they don’t want you to, because it cost them a lot of time and money and secret sauce presumably filtering it for quality and other stuff.
I introspect all the time. I just disagree with you so I have thin skin? Lol.
I think it's transformative. I also think that it's a net positive for society. I lastly think that using freely available, public information is totally fair game. Piracy not so much, but it's water under the bridge.
I hope you introspect some day, too, and realize it's acceptable for people to have different views than you. That's why I don't care; you aren't going to change my mind and I can't change yours either, so it's moot and I don't care to argue about it further.
You had appeared to scuttle off, but alas I was wrong (and sorry to imply you are a crab of some sort). However, your follow-up comment about not changing minds might be a tad shell-ish. I'm actually open-minded on this issue, and these are major issues of our time. I'm personally impacted by it, and it does make me wonder "will I write X thing again", which is a very hard question to answer, frankly. When you see your work presented in summary on search, alongside a major decline in traffic, you really do think about that. It impacts my ability to make money as I once did prior to 2024 (when it really hit), without doubt. Edit: spelling
Let's talk ethics, not law. Why is it okay for these companies to pirate books and scrape the entire web and offer synthesized summaries of all of it, lowering traffic and revenue for countless websites and professions of experts, but it is not okay for others to try to do the same to an AI model?
Is the work of others less valid than the work of a model?
I don't think anyone's saying it's not okay - I think the point is that Anthropic has every right to create safeguards against it if they want to - just like the people publishing other information are free to do the same.
And everyone is free to consume all the free information.
>Why is it okay for these companies to pirate books
Courts have ruled it's not, and I don't think anyone is arguing it's okay.
>but it is not okay for others to try to do the same to an AI model?
The steelman version is that it's okay to do it once you acquired the data somehow, but that doesn't mean anthropic can't set up roadblocks to frustrate you.
Your selective respect for work is a glaring double standard. The effort to produce the original content they scraped is orders of magnitude bigger than what it took to train the model, so if that wasn't enough to protect the authors from Anthropic, it shouldn't be enough to protect Anthropic from people distilling their models.
Your legal argument is all over the place as well. What is more relevant here: what the courts ruled or what you consider obvious? How is distillation less transformative than scraping? How does courts ruling that scraping to train models is legal relate to distillation?
Nobody is scoring you on neutrality points for not using AI much, and calling this doomerism is just a thought-terminating cliche that refuses to engage with the comment you're replying to.
In fact, your comment is not engaging with anything at all; you're vaguely gesturing towards potential arguments without making them. If you find discussing this exhausting then don't, but also don't flood the comments with low-effort whining.
reminds me of `don't look up` a bit. there clearly is an imbalance in regard to licenses with model providers, not even talking about knowledge extraction (yes, younger people don't learn properly now, older generations forget) shortly before the rug-pull happens in the form of access for people who aren't rich
Settled out of court does not mean the lawsuit never went to court. It means the settlement happened outside of court. Every lawsuit has to go to court, that's how you file a lawsuit. If it isn't sent to a court it's just words in a document.
It's not really paranoia if it's happening a lot. They wrote a blog post calling several major Chinese AI companies out for distillation.[0] Perhaps it is ironic, but it's within their rights to protect their business, like how they prohibit using Claude Code to make your own Claude Code.[1]
Their business shouldn't exist. It was predicated on non-permissive IP theft. They may have found a judge willing to cop to it not being so, but the rest of the public knows the real score. And most problematically for them, that means the subset of hackerdom that lives by tit-for-tat. One should beware of pissing off gray-hats. It's a surefire way to find yourself heading for bad times.
I would say not all that ironic. Book publishers, Reddit, Stackoverflow, etc., tried their best to attract customers while not letting others steal their work. Now Anthropic is doing the same.
Unfortunately (for the publishers, at least) it didn't work to stop Anthropic and Anthropic's attempts to prevent others will not work either; there has been much distillation already.
The problem of letting humans read your work but not bots is just impossible to solve perfectly. The more you restrict bots, the more you end up restricting humans, and those humans will go use a competitor when they become pissed off.
It's really just tech culture like HN that obsesses over solving problems perfectly. From seat belts to DRM to deodorant, most of the world is satisfied with mitigating problems.
No, it's ethical people pointing out that if you toss aside ethics for success at all costs, you aren't going to find any sympathy when people start doing the same thing back to you. Live by the sword, die by the sword, as they say.
There is a reason we don't do these things: it makes the world a worse place for everyone. If you are so incredibly out of touch with any semblance of ethics at all, mayhaps you are just a little bit part of the problem.
The funny thing about ethics is there is no absolute, which makes some people uncomfortable. Is it ethical to slice someone with a knife? Does it depend if you're a surgeon or not?
Absolutism + reductionism leads to this kind of nonsense. It is possible that people can disagree about (re)use of culture, including music and print. Therefore it is possible for nuance and context to matter.
Life is a lot easier if you subscribe to a "anyone who disagrees with me on any topic must have no ethics whatsoever and is a BAD person." But it's really not an especially mature worldview.
The categorical imperative and the Golden Rule, or as you may know it from game theory, "tit-for-tat", say "hi". The beautiful thing about ethics is that we philosophers intentionally teach it descriptively, but encourage one to choose their own based on context invariance. What this does is create an effective litmus test for detecting shitty people/behavior. Your grasping for dear life at "there are no absolutes" is an act of self-soothing, an attempt to rationalize your own behavior and provide an ego crumple zone. I, on the other hand, don't intend to leave you that option. That you have to do it is a neon sign of your own unethicality in this matter.
We get to have nice things when people moderate themselves (we tolerate eventual free access to everything as long as the people who don't want to pay for it don't go and try to replace us economically at scale). When people abuse that (scrape the Internet, try to sell the work product in a way that jeopardizes the environment we create in), the nice thing starts going away, and you've made the world worse.
Welcome to life bucko. Stop being a shitty person and get with the program so we have something to leave behind that has a chance of not making us villains in the eyes of those we eventually leave behind. The trick is doing things the harder way because it's the right way to do it. Not doing it the wrong way because you're pretty sure you can get away with it.
But you're already ethically compromised, so I don't really expect this to do any good, except to maybe make the part of you that you pointedly ignore start to stir, assuming you haven't completely given yourself up to a life of ne'er-do-wellry. Enjoy the enantiodromia. Failing that, karma's a bitch.
Whenever I see someone on HN preaching about how it's all dog-eat-dog and zero-sum, I imagine them being lonely.
No real friends, no trusted life partner, no kids, no unconditional love. Alone.
Just another soul traveling on an infinite road with lots of signs that point to "happiness," planted there by fellow travelers, never reaching their destination.
Capitalism is always underpinned by a strong legal system, which is why most criticism is about constraining growth in legislation, not killing off interference outright. Copyright law is a good example of a law that made sense in its original form but turned into a monster with scope creep.
Although, if we're being realpolitik, every time government interference grows in scope and corrupts markets, capitalism still gets blamed and people call for more government to fix it (see: housing). So the capitalism vs state capitalism distinction isn't very meaningful in practice.
I watched a talk from Bjarne Stroustrup at CppCon about safety, and it was pretty second-hand embarrassing watching him try to pretend C++ has always been safe and that safety mattered to them all along before Rust came along.
Well, there has been a long campaign against manual memory management, well before Rust was a thing. And along with that, a push for less use of raw pointers, fewer index loops, etc., all measures which, when adopted, reduce memory-safety hazards significantly. Following the Core Guidelines also helps, as does using spans. Compiler warnings have improved, as has static analysis, also in a long process preceding Rust.
Of course, this is not completely guaranteed safety - but safety has certainly mattered.
Yes, this is what Stroustrup said, and it makes me laugh. IIRC he phrased it with more of a 'we had safety before Rust' attitude. It also misses the point: safety shouldn't be opt-in or require memorising a rulebook. If safety is that easy in C++, why is everyone still sticking their hand in the shredder?
You're "moving the goal posts" of this thread. Safety has mattered - in C++ and in other languages as well, e.g. with MISRA C.
As for the Core Guidelines: most of them are not about safety, and they are not meant to be memorized; they are a resource to consult when relevant, and something to base static analysis on.
Because they don’t work with it. It’s as simple as that. I don’t trust people who don’t work with a terminal these days; the further they get from a terminal, the less grounded their views are. They rely on hearsay and CEO hype. To make matters worse, they say whatever they think will earn them a bonus/promotion, which leads to a cascade of BS down the chain.
I seriously doubt Satya Nadella is sitting down for hours a day to use Copilot to draft detailed documents. He's being fed fantastical stories by his lackeys telling him what he wants to hear.