
The most likely danger with AI is concentrated power, not that sentient AI will develop a dislike for us and use us as "batteries" like in the Matrix.


The reality is that the CEO/executive class already has developed a dislike for us and is trying to use us as “batteries” like in the Matrix.


Do you personally know any CEOs? I know a couple, and they generally seem less empathic than the general population, so I don't think that like/dislike even applies.

On the other hand, trying to do something "new" brings lots of headaches, so emotions are not always a plus. I could draw a parallel to doctors: you don't want a doctor to start crying in the middle of an operation because he feels bad for you, but you also can't let doctors do everything they want; there need to be some checks on them.


I would say that the parallel is not at all accurate because the relationship between a doctor and a patient undergoing surgery is not the same as the one you and I have with CEOs. And a lot of good doctors have emotions and they use them to influence patient outcomes positively.


Even then, a psychopathic doctor at least has their desired outcomes mostly aligned with the patient's.


CEOs (even most VCs) are labor too


Labor competes for compensation, CEOs compete for status (above a certain enterprise size, admittedly). Show me a CEO willingly stepping down to be replaced by generative AI. Jamie Dimon will be so bold as to say AI will bring about a 3-day week (because it grabs headlines [1]), but he isn't going to give up the status of running JPMC; it's all he has besides the wealth, which does not appear to be enough. The feeling of importance and exceptionalism is baked into the identity.

[1] https://fortune.com/article/jamie-dimon-jpmorgan-chase-ceo-a...


Spoiler: there's no reason we couldn't work three days a week now. And 100 might be pushing it, but getting life expectancy to 90 is well within our grasp today as well. We have just decided not to do that.


The reason we don't have 3 day weeks is because the system rewards revenue, not worker satisfaction.


That's the market's job. Once AI CEOs start outperforming human CEOs, investment will flow to the winners. Give it 5-10 years.

(Has anyone tried an LLM on an in-basket test? [1] That's a basic test for managers.)

[1] https://en.wikipedia.org/wiki/In-basket_test
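
A minimal sketch of how one might actually try that, assuming the openai Python client and an API key in the environment; the memo items are invented purely for illustration:

  from openai import OpenAI

  # Rough sketch of an in-basket exercise for an LLM. The scenario and memo
  # items below are made up; a real test would also need human scoring.
  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  inbasket = """You are the incoming manager. It is 8:00 on Monday.
  Triage the items below: for each, say whether you handle it now,
  delegate it, or defer it, in what order, and give a one-line reason.
  1. The payroll run failed overnight; finance wants a decision by 10:00.
  2. A key engineer has emailed a resignation, effective in two weeks.
  3. A customer is threatening to cancel over a missed delivery date.
  4. HR needs your sign-off on next quarter's training budget by Friday."""
  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": inbasket}],
  )
  print(response.choices[0].message.content)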


Not if CEOs use their political power to make it illegal.


Almost everyone is "labor" to some extent. There is always a huge customer or major investor that you are beholden to. If you are independently wealthy then you are the exception.


Bingo


Do they know it?


Until shareholders treat them as such, they will remain in the ruling class


"AI will take over the world".

I hear that. Then I try to use AI for a simple code task: writing unit tests for a class, very similar to other unit tests. It fails miserably. It forgets to add an annotation and enters a death loop of bullshit code generation, producing test classes that test failed test classes that test failed test classes, and so on. Fascinating to watch. I wonder how much CO2 it generated while frying some Nvidia GPU in an overpriced data center.

AI singularity may happen, but the Mother Brain will be a complete moron anyway.


Regularly trying to use LLMs to debug coding issues has convinced me that we're _nowhere_ close to the kind of AGI some are imagining is right around the corner.


At least Mother Brain will praise your prompt to generate yet another image in the style of Studio Ghibli as proof that your mind is a tour de force in creativity, and only a borderline genius would ask for such a thing.


Sure, but the METR study also showed that t doubles every 7 months, where t ~= «the duration of human time needed to complete a task, such that SOTA AI can complete the same task with 50% success»: https://arxiv.org/pdf/2503.14499

I don't know how long that exponential will continue for, and I have my suspicions that it stops before week-long tasks, but that's the trend-line we're on.
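
For a sense of scale, here is a back-of-the-envelope extrapolation of that doubling trend; the one-hour starting horizon is my assumption for illustration, not a figure taken from the paper:

  import math
  # Back-of-the-envelope extrapolation of the METR doubling trend.
  current_horizon_hours = 1.0     # assumed 50%-success task horizon today
  target_horizon_hours = 40.0     # a "week-long" task: one 40-hour work week
  doubling_period_months = 7.0
  doublings = math.log2(target_horizon_hours / current_horizon_hours)
  months = doublings * doubling_period_months
  print(f"{doublings:.1f} doublings, about {months:.0f} months")
  # -> 5.3 doublings, about 37 months: roughly three years, if the trend holds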


Only skimmed the paper, but I'm not sure how to think about "length of task" as a metric here.

The cases I'm thinking about are things that could be solved in a few minutes by someone who knows what the issue is and how to use the tools involved. I spent around two days trying to debug one recent issue. A coworker who was a bit more familiar with the library involved figured it out in an hour or two. But in parallel with that, we also asked the library's author, who immediately identified the issue.

I'm not sure how to fit a problem like that into this "duration of human time needed to complete a task" framework.


This is an excellent example of human “context windows” though, and it could be that the LLM could have solved the easy problem with better context engineering. Despite 1M-token windows, things still start to get progressively worse after 100k. LLMs would overnight be amazingly better with a reliable 1M window.


What does "better context engineering" mean here? How/why are the existing token windows "unreliable"?


Fair comment.

While I think they're trying to cover that by getting experts to solve problems, it is definitely the case that humans learn much faster than current ML approaches, so "expert in one specific library" != "expert in writing software".


But will it actually get better or will it just get faster and more power efficient at failing to pair parentheses/braces/brackets/quotes?


Read the linked METR study please.

Or watch the Computerphile video summary/author interview, if you prefer: https://m.youtube.com/watch?v=evSFeqTZdqs


Most reasonable AI alarmists are not concerned with sentient AI, but with an AI attached to the nukes that gets into one of those repeating death loops and fires all the missiles.


In reality, this isn't a very serious threat. Rather, we're concerned about AI as a tool for strengthening totalitarian regimes.


Given that AI couldn't even speak English 6 years ago, do you really think it's going to struggle with unit tests for the next 20 years?

It's well worth looking at https://progress.openai.com/, here's a snippet:

> human: Are you actually conscious under anesthesia?

> GPT-1 (2018): i did n't . " you 're awake .

> GPT-3 (2021): There is no single answer to this question since anesthesia can be administered [...]


The improvements since 2021 are minor at best. AI thus far has been trained to imitate humans by training it on text written by humans, and it's unlikely that you will make something as smart as a human by training it to imitate one. Imitation is a lossy process: you lose knowledge of the "why" and only imitate the outcome. To get beyond this state, we'll need a new technique. So far we've used gradient descent to teach an AI to reproduce a function; teaching it new behaviours will probably take evolutionary approaches, which will take orders of magnitude more compute to get to the same point. So yes, it could take 20 years.


> Given that AI couldn't even speak English 6 years ago, do you really think it's going to struggle with unit tests for the next 20 years?

Yes.

An LLM is a very interesting technology for getting machines to understand and generate natural language. That is a difficult problem, which it sort of solves.

It does not understand things beyond that. Developing software is not simply a natural language problem.


"Just one more prompt, bro", and your problems will be solved.


To me, the greatest threat is information pollution. Primary sources will be diluted so heavily in an ocean of generated trash that you might as well not even bother to look through any of it.


And it imitates all the unimportant bits perfectly (like spelling, grammar, and word choice) while failing at the hard-to-verify important bits (truth, consistency, novelty).


I see that as the death knell for general search engines built to indiscriminately index the entire web. But where that sort of search fails, opportunities open up for focused search and curated search.

Just as human navigators can find the smallest islands out in the open ocean, human curators can find the best information sources without getting overwhelmed by generated trash. Of course, fully manual curation is always going to struggle to deal with the volumes of information out there. However, I think there is a middle ground for assisted or augmented curation which exploits the idea that a high quality site tends to link to other high quality sites.

One thing I'd love is to be able to easily search all the sites in a folder full of bookmarks I've made. I've looked into it and it's a pretty dire situation. I'm not interested in uploading my bookmarks to a service. Why can't my own computer crawl those sites and index them for me? It's not exactly a huge list; a sketch of the idea is below.
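
For what it's worth, the core of that is not much code. A minimal sketch in Python, assuming a "bookmarks.txt" file with one URL per line, and ignoring caching, robots.txt, and proper HTML parsing:

  # Fetch each bookmarked page, strip the markup, and build an in-memory
  # inverted index you can query locally. The filename and the crude
  # tag-stripping regex are assumptions for illustration only.
  import re
  import urllib.request
  from collections import defaultdict
  def fetch_text(url):
      with urllib.request.urlopen(url, timeout=10) as resp:
          html = resp.read().decode("utf-8", errors="ignore")
      return re.sub(r"<[^>]+>", " ", html)   # crude tag stripping
  def build_index(urls):
      index = defaultdict(set)               # word -> set of URLs containing it
      for url in urls:
          try:
              text = fetch_text(url)
          except Exception:
              continue                       # skip dead bookmarks
          for word in re.findall(r"[a-z0-9]+", text.lower()):
              index[word].add(url)
      return index
  with open("bookmarks.txt") as f:
      urls = [line.strip() for line in f if line.strip()]
  index = build_index(urls)
  print(sorted(index.get("curation", set())))  # bookmarks mentioning "curation"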


It’s already been happening, but now it’s accelerated beyond belief. I saw a video about how WW1 reenactment photos end up getting reposted away from their original context and confused with genuine period photos, to the point that it’s impossible to tell the difference unless you can track a photo back to its source.

Now most of the photos online are just AI generated.


I agree.

Our best technologies currently require teams of people to operate and entire legions to maintain. This leads to a sort of balance: one single person can never go too far down any path on their own unless they convince others to join or follow them. That doesn't make it a perfect guard, and we've seen it go horribly wrong in the past, but, at least in theory, it provides a dampening factor: it takes a relatively large group to go far along any path, towards good or evil.

AI reduces this. How greatly it reduces it, whether to only a handful of people, to a single person, or even to zero people (with the AI putting itself in charge), doesn't seem to change the danger of the reduction itself.


Concentrated power is kinda a pre-requisite for anything bad happening, so yes, it's more likely in exactly the same way that given this:

  Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
"Linda is a bank teller" is strictly more likely than "Linda is a bank teller and is active in the feminist movement" — all you have is P(a)>P(a&b), not what the probability of either statement is.


Why does an AI need the ability to "dislike" to calculate that its goals are best accomplished without any living humans around to interfere? Superintelligence doesn't need emotions or consciousness to be dangerous.


It needs to optimize for something. Like/dislike is an anthropomorphization of the concept.


It's an unhelpful one because it implies the danger is somehow the result of irrational or impulsive thought, and making the AI smarter will avoid it.


That's not how I read it.

Perhaps because most of the smartest people I know are regularly irrational or impulsive :)


I think most people don't get that; look at how often even Star Trek script writers write Straw Vulcans*.

* https://tvtropes.org/pmwiki/pmwiki.php/Main/StrawVulcan


The power concentration is already massive, and a huge problem indeed. The AI is just a cherry on top. The AI is not the problem.


Seems like a self fulfilling prophecy


Definitely not ‘self’ fulfilling. There are plenty of people actively and vigorously working to fulfill that particular reality.


I'm not so sure it will be that either; it would be multiple AIs essentially at war with each other over access to GPUs, energy, or whatever materials are needed to grow, if/when that happens. We will end up as pawns in this conflict.


Given that even fairly mediocre human intelligences can run countries into the ground and avoid being thrown out in the process, it's certainly possible for an AI to be in the intelligence range where it's smart enough to win against humans but also dumb enough to turn us into pawns rather than just go to space and blot out the sun with a Dyson swarm made from the planet Mercury.

But don't count on it.

I mean, apart from anything else, that's still a bad outcome.


You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?


Given that they both seem pretty bad, it seems wrong to not consider them both dangerous and make plans for both of them?


> power resides where men believe it resides

And also where people believe that others believe it resides. Etc...

If we can find new ways to collectively renegotiate where we think power should reside we can break the cycle.

But we only have until people are no longer a significant power factor to do this. That's still quite some time away, though.


For one thing, we'd make shit batteries.


IIRC the original idea was that the machines used our brain capacity as a distributed computing array, but then they decided batteries were easier to understand, even if sillier: just burn the carbon they are feeding us, it's more efficient.


If I could write The Matrix inverted, Neo would discover that the last people put themselves in the pods because the world was so fucked up, and that the machines had been caretakers trying to protect them from themselves. That revision would make the first movie perfect.


Given that the first Matrix was a paradise, that's pretty much canon if you ignore the Duracell.


They farm you for attention, not electricity. Attention (engagement time) is how they quantify "quality" so that it can be gamed with an algorithm.


The Matrix only had people being batteries because a movie without humans in it isn't a fun movie to watch.


Sounds about right, most of us already are. But why would the AI need our shit? Surely it wants electricity?


I mean, you can't really disprove either being an issue.



