Hacker News | jstanley's comments

Well then buying them directly from Google would have no effect either.

Except that Google would then get the profits

It's not about Google, it's about OP's personal values


But if you think buying on the secondhand market doesn't impact the market, why do you think buying from the OEM does?

It's one phone's worth of demand either way.


Nobody is buying Pixels specifically to resell them. If anything, their fast reduction in value makes them less attractive.

First hand = money goes directly to Google, including margin.

Second hand = money goes only to a private person, $0 for Google. At best it prevents usable phones from being thrown into landfill.


> If anything, their fast reduction in value makes them less attractive.

Right. And if you buy a secondhand one you are increasing their value on the secondhand market. Reducing the depreciation increases the value of the brand new phone.



Did I miss it or is there no link to try this out?

EDIT: (Also, watching the video... "How do you find out if someone worked at Google?" "Don't worry, they'll let you know")


A lot of AI twitter is just this:

Create a vibe-coded demo -> showcase it with a faked/overblown video and an "It's not X, it's Y. Read thread!!11!" type engagement bait -> get LLM comments all riled up and excited to fake hype -> sell a course/LLM wrapper.


We do sort of have that with the capabilities stuff (although I admit hardly anyone knows how to use it).

But the tricky part is that "reading files" is done all the time in ways you might not think of as "reading files". For example loading dynamic libraries involves reading files. Making network connections involves reading files (resolv.conf, hosts). Formatting text for a specific locale involves reading files. Working out the timezone involves reading files.

Even just echoing "hello" to the terminal involves reading files:

  $ strace echo hello 2>&1 | grep ^open
  openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libselinux.so.1", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libgcc_s.so.1", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libm.so.6", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libpcre2-8.so.0", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/proc/filesystems", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/proc/self/maps", O_RDONLY|O_CLOEXEC) = 3
  openat(AT_FDCWD, "/usr/lib/cargo/bin/coreutils/echo/en-US.ftl", O_RDONLY|O_CLOEXEC) = -1 ENOTDIR (Not a directory)
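The same hidden opens can be observed from inside a process, too. A minimal Python sketch using audit hooks (PEP 578's sys.addaudithook); the choice of colorsys as the example import is arbitrary, and the interpreter's own io.open_code calls during import show up as "open" events alongside explicit open() calls:

```python
import os
import sys

opens = []

def audit(event, args):
    # "open" fires both for builtins.open() and for the interpreter's
    # own io.open_code() calls, e.g. when an import reads a .py/.pyc file
    if event == "open":
        opens.append(args[0])

sys.addaudithook(audit)

import colorsys  # a plain stdlib import: the interpreter opens files for us

with open(os.devnull) as f:  # an explicit open, for comparison
    pass

print(f"{len(opens)} open(s) observed")
```

Note that audit hooks cannot be removed once installed; they're meant for exactly this kind of whole-process observation.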

Capabilities are craaaazy coarse on Linux. Really only a small piece of the sandboxing puzzle. Flatpak, Bubblewrap, and Firejail each provide an overall fuller view of what sandboxing can be.

OP says "restricted access to files". Read access to your home directory is not required for loading dynamic libraries or printing the time.

> every single extension provides 100% access to my websites to whoever controls the extension.

But the browser also has 100% access to all of the websites. The browser is software that works for you. You control the browser.

Who but yourself do you imagine controls your extensions?


> The browser is software that works for you. You control the browser.

Oh really? Then why do my browsers keep moving things?


He's counting out like 6 at a time. He needs a fast way to pick small quantities precisely, not a fast way to check large quantities. Once they're picked they're easily verified by eye.

I don't think your example refutes the fairness heuristic at all.

The first case is completely fair because anybody else could have done the same thing without any special access required.

The second case is unfair because you had to work at the company to get access.


Okay, then imagine you overhear at a bar. Yes “anyone could have” theoretically, but not actually. In either case, you have material non-public information that your counterparty in the market does not.

You got that piece of non-public information not because you are an insider. As long as the bar is not exclusive to insiders, I don't see any difference.

Isn't it exclusive to people who live in the area of the bar?

What if the bar has a cover charge, so only those who pay get in?

What if the cover charge is $10,000 and the bar is advertised as "the place where public company execs love to come talk to each other about private deals"?


How should he have acted instead?

Yeah.

“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”


We don't know how the military intended to use Claude, and neither we nor the military itself know whether Claude without RLHF-imposed safety would have been more useful to them.

Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.

He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.

Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.

Oh, also the stealing. All the stealing. But he is not alone there by any means.

edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.


> to promote his product with the silent implication that LLMs actually ARE a path to AGI

That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.

Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?


His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.

The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.

> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]

... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.


Because the people who are consistently right will consistently win money and will make bigger bets which move the price more, in the limit case making the price converge on the true probability of the outcome.

This is the theoretical underpinning of prediction markets.
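A toy simulation of that claim (everything here is an illustrative assumption: two traders with fixed beliefs, a wealth-weighted price, Kelly-sized stakes): the trader whose belief matches the true probability accumulates capital, and the price converges toward it.

```python
import random

random.seed(1)

TRUE_P = 0.7                      # true probability of the event
beliefs = [0.3, 0.7]              # trader 1 is consistently right
wealth = [100.0, 100.0]

def price():
    # price = wealth-weighted average of the traders' beliefs
    total = sum(wealth)
    return sum(w * b for w, b in zip(wealth, beliefs)) / total

for _ in range(500):
    p = price()
    outcome = random.random() < TRUE_P
    for i, b in enumerate(beliefs):
        if b > p:                                    # back "yes" at price p
            stake = wealth[i] * (b - p) / (1 - p)    # Kelly fraction
            wealth[i] += stake * (1 - p) / p if outcome else -stake
        elif b < p:                                  # back "no" at price 1-p
            stake = wealth[i] * (p - b) / p          # Kelly fraction
            wealth[i] += stake * p / (1 - p) if not outcome else -stake

print(f"final price {price():.2f} (true probability {TRUE_P})")
```

The wrong trader loses money in expectation on every bet while the price is below the true probability, so their share of the total wealth (and their influence on the price) shrinks over time.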


Equating being "consistently right" with having a sufficiently large stash of capital is ludicrous.

"right" people will wisely take most their winnings out of a high-variance market. "wrong" people with deep pockets (or lots of wrong people with shallow pockets) will continue to distort the market.


> will continue to distort the market.

They can only do so as long as they have enough capital to lose, because every time they try to move the betting markets against the truth, they will simply lose that money when the event happens (and it turns out they were wrong).

So any distortion will merely be temporary. Unless they have access to unlimited capital of course - which is not true yet for anyone (but the US gov't).


That only makes sense in a hermetically sealed system, which this is very much not.

Yes, but is this a problem? Haven't most betting markets turned out to offer accurate predictions?

Not particularly so. But even if it were, would that justify the social cost of this kind of gambling?

Well, the more often you're right, the more capital you will be able to accrue to bet with next time.

This is an unnecessarily cynical view.

People are excited about AI because it's new powerful technology. They aren't "pandering" to anyone.


I have been in dozens of meetings over the past year where directors have told me to use AI to enable us to fire 100% of our contract staff.

I have been in meetings where my director has said that AI will enable us to shrink the team by 50%.

Every single one of my friends who do knowledge work has been told that AI is likely to make their job obsolete in the next few years, often by their bosses.

We have mortgages to pay and children to feed.


People are afraid because they need to work to eat. People who don't need to work to eat are less likely to be afraid.

I have yet to meet anyone except managers who is excited about LLMs or generative AI.

And the only people actually excited about the useful kinds of "AI", traditional machine learning, are researchers.


You don't have to look past this very forum: most people here seem to be very positive about gen AI when it comes to software development specifically.

Lots of folk here will happily tell you about how LLMs made them 10x more productive, and then their custom agent orchestrator made them 20x more productive on top of that (stacking multiplicatively of course, for a total of 200x productivity gain).


I assume those people are managers, have a vested interest in AI, or have only just started programming.

How would you find out if you were wrong?

You're presented with hundreds of people that prove you wrong, and your response is "no, I assume I'm right"?


This is obviously a rhetorical statement. I'm not claiming a categorical fact, but a fuzzy one.

Most of these people are managers, investors, or juniors.


I don't know what your bubble is, but I'm a regular programmer and I'm absolutely excited, even if a little uncomfortable. I know a lot of people who are the same.

Interesting, every developer I've spoken to is extremely skeptical and has not found any actual productivity boosts.

Ok that's not true. I know one junior who is very excited, but considering his regular code quality I would not put much weight on his opinion.


I am using AI a lot to do tasks that just would not get done because they would take too long. Also, getting it to iterate on a React web application meant I can think about what I want it to do rather than worry about all the typing I would have to do. Especially powerful when moving things around, hand-written code has a "mental load" to move that telling an AI to do it does not. Obviously not everything is 100% but this is the most productive I have felt for a very long time. And I've been in the game for 25 years.

Why do you need to move things around? And how is that difficult?

Surely you have an LSP in your editor and are able to use sed? I've never had moving files take more than fifteen minutes (for really big changes), and even then most of the time is spent thinking about where to move things.
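For concreteness, the kind of mechanical move being described is just two steps: relocate the file, then rewrite every reference to the old path (the sed step). A self-contained sketch in Python, where every path and module name is hypothetical:

```python
import pathlib
import shutil
import tempfile

# build a throwaway project with one module and one file importing it
root = pathlib.Path(tempfile.mkdtemp())
(root / "src/old").mkdir(parents=True)
(root / "src/new").mkdir(parents=True)
(root / "src/app.py").write_text("from src.old.util import helper\n")
(root / "src/old/util.py").write_text("def helper():\n    pass\n")

# the "move" step
shutil.move(str(root / "src/old/util.py"), str(root / "src/new/util.py"))

# the "sed" step: rewrite every reference to the old location
for path in root.rglob("*.py"):
    path.write_text(path.read_text().replace("src.old.util", "src.new.util"))

print((root / "src/app.py").read_text().strip())
```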

LLMs have been reported to specifically make you "feel" productive without actually increasing your productivity.


I mean there are two different things. One is whether there are actual productivity boosts right now. And the second is the excitement about the technology.

I am definitely more productive. A lot of this productivity is wasted on stuff I probably shouldn't be writing anyway. But since using coding agents, I'm both more productive at my day job and I'm building so many small hobby projects that I would never have found time for otherwise.

But the main topic of discussion in this thread is the excitement about the technology. And I have somewhat mixed feelings, because on the one hand I feel like a turkey being excited for Thanksgiving. On the other hand, I think the programming future is bright: there will be so much more software built, and for a lot of that you will still need programmers.

My excitement comes from the fact that I can do so many more things that I wouldn't even have thought about being able to do a few months ago.

Just as an example, in the last month I have used agents to add features to the applications I use daily: a text editor, a podcast application, an Android keyboard. The agents were able to fork, build, and implement a feature I asked for in a project where I had no idea about the technology. If I were hired to do those features, I would be happy to have implemented them after two weeks on the job. With an agent, I get tailor-made features in half a morning, spending less than ten minutes prompting.

I am building educational games for my kids. They learn a new topic at school? Let me quickly vibe-code a game to make learning it fun. A project that wouldn't be worth my weekend, but is worth 15 minutes. https://kuboble.com/math/games/snake/index.html?mode=multipl...

So I'm excited because I think coding agents will be for coding what pencil and paper were for writing.


I don't understand the idea that you "could not think about implementing a feature".

I can think of roughly 0 features of run-of-the-mill software that would be impossible to implement for a semi-competent software developer. Especially for the kinds of applications you mention.

Also it sounds less like you're productive and more like the vibeslop projects are distracting you.


I'm claiming it's both.

I produce more good (imo) production features despite being distracted.

The features I mention are things I would be able to do, but only with a lot of learning and great effort - so in practical terms I would not.

It is probably a skill issue, but in the past I often downloaded an open source project and just couldn't build and run it: cryptic build errors, figuring out dependencies. And I see Claude gets the same errors, but it just knows how to work around them. Setting up a local development environment (db, dummy auth, dummy data) for a project outside of my competence area is already more work than I'm willing to do for a simple feature. Now it's free.

>I can think of roughly 0 features of run-of-the-mill software that would be impossible to implement for a semi-competent software developer.

Yes. In my area of competence it can do the coding tasks I know exactly how to do, just a bit faster. Right now, for those tasks, I'd say it can one-shot code that would take me a day.

But it enables me to do things in the area where I don't have expertise. And getting this expertise is very expensive.


Out of interest, could you give me an example of a feature that it one-shotted that would have taken you a whole day?

The example from yesterday:

I have a large C# application. In this application there is functionality to convert a group of settings into a tree model (a list of commands to generate this tree). There are a lot of weird settings and special cases.

I asked claude to extract this logic into a separate python module.

It successfully one-shotted that, and I would estimate it as two days' work for me (and I wrote the original C# code).

This is probably the best possible kind of task for coding agents, given that it's a very well-defined task with already existing test cases.


Seems reasonable, but if it's just copy pasting, doesn't seem like that would take you a whole day. Maybe on the order of an hour at most.

Were you exaggerating earlier or do you have more examples?


This is a two-day task for me. If you could do it in one hour, then you're a 10x programmer compared to me.

You can browse the code at <my_username>.com/slop/hn_tb/

I have also slopped together the simple code viewer. So you can judge for yourself whether it looks like a 1-hour task.


This is nothing to do with politicians.

