Do you personally know any CEOs? I know a couple, and they generally seem less empathic than the general population, so I don't think like/dislike even applies.
On the other hand, trying to do something "new" brings lots of headaches, so emotions are not always a plus. I could draw a parallel to doctors: you don't want a doctor to start crying in the middle of an operation because he feels bad for you, but you also can't let doctors do everything they want - there need to be some checks on them.
I would say that the parallel is not at all accurate because the relationship between a doctor and a patient undergoing surgery is not the same as the one you and I have with CEOs. And a lot of good doctors have emotions and they use them to influence patient outcomes positively.
Labor competes for compensation, CEOs compete for status (above a certain enterprise size, admittedly). Show me a CEO willingly stepping down to be replaced by generative AI. Jamie Dimon will be so bold as to say AI will bring about a 3-day week (because it grabs headlines [1]), but he isn't going to give up the status of running JPMC; it's all he has besides the wealth, which does not appear to be enough. The feeling of importance and exceptionalism is baked into the identity.
Spoiler: there's no reason we couldn't work three days a week now. And 100 might be pushing it, but getting life expectancy to 90 is well within our grasp today too. We have just decided not to do that.
Almost everyone is "labor" to some extent. There is always a huge customer or major investor that you are beholden to. If you are independently wealthy then you are the exception.
I hear that. Then I try to use AI for a simple coding task: writing unit tests for a class, very similar to other existing unit tests. It fails miserably. It forgets to add an annotation and enters a death loop of bullshit code generation. It generates test classes that test failed test classes that test failed test classes, and so on. Fascinating to watch. I wonder how much CO2 it generated while frying some Nvidia GPU in an overpriced data center.
AI singularity may happen, but the Mother Brain will be a complete moron anyway.
Regularly trying to use LLMs to debug coding issues has convinced me that we're _nowhere_ close to the kind of AGI some are imagining is right around the corner.
At least Mother Brain will praise your prompt to generate yet another image in the style of Studio Ghibli as proof that your mind is a tour de force in creativity, and only a borderline genius would ask for such a thing.
Sure, but the METR study also showed that t doubles every 7 months, where t ≈ «duration of human time needed to complete a task, such that SOTA AI can complete the same with 50% success»: https://arxiv.org/pdf/2503.14499
I don't know how long that exponential will continue for, and I have my suspicions that it stops before week-long tasks, but that's the trend-line we're on.
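For a sense of what that trend implies, here's a rough back-of-the-envelope sketch in Python. The 7-month doubling time is from the paper; the ~1-hour current horizon and the 40-hour definition of a "week-long task" are my own illustrative assumptions, not figures from the study.

    # Back-of-the-envelope extrapolation of the METR trend line.
    # Assumptions (mine, not the paper's): the current 50%-success horizon is
    # about 1 hour, and the 7-month doubling time keeps holding.
    import math

    current_horizon_hours = 1.0   # illustrative starting point
    doubling_time_months = 7.0    # headline doubling time from the METR paper
    target_horizon_hours = 40.0   # "week-long task" ~= one working week

    doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
    months_needed = doublings_needed * doubling_time_months
    print(f"{doublings_needed:.1f} doublings ≈ {months_needed:.0f} months "
          f"≈ {months_needed / 12:.1f} years to week-long tasks")
    # -> roughly 5.3 doublings ≈ 37 months ≈ 3.1 years, if the exponential holds

Under those assumptions you land at roughly three years to week-long tasks, which is exactly why the "does the exponential keep holding" question matters so much.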
Only skimmed the paper, but I'm not sure how to think about "length of task" as a metric here.
The cases I'm thinking about are things that could be solved in a few minutes by someone who knows what the issue is and how to use the tools involved. I spent around two days trying to debug one recent issue. A coworker who was a bit more familiar with the library involved figured it out in an hour or two. But in parallel with that, we also asked the library's author, who immediately identified the issue.
I'm not sure how to fit a problem like that into this "duration of human time needed to complete a task" framework.
This is an excellent example of human "context windows" though, and it could be that the LLM would have solved the easy problem with better context engineering. Despite 1M-token windows, things still start to get progressively worse after 100k. LLMs would overnight be amazingly better with a reliable 1M window.
While I think they're trying to cover that by getting experts to solve problems, it is definitely the case that humans learn much faster than current ML approaches, so "expert in one specific library" != "expert in writing software".
Most reasonable AI alarmists are not concerned with sentient AI but an AI attached to the nukes that gets into one of those repeating death loops and fires all the missiles.
The improvements since 2021 are minor at best. AI thus far has been trained to imitate humans by training it on text written by humans. It's unlikely that you will make something as smart as a human by training it to imitate a human: imitation is a lossy process; you lose knowledge of the "why" and only imitate the outcome. To get beyond this state, we'll need a new technique. So far we've used gradient descent to teach an AI to reproduce a function. Teaching it new behaviours will probably take evolutionary approaches, and that will take orders of magnitude more compute to get to the same point. So yes, it could take 20 years.
To me, the greatest threat is information pollution. Primary sources will be diluted so heavily in an ocean of generated trash that you might as well not even bother to look through any of it.
And it imitates all the unimportant bits perfectly (like spelling, grammar, word choice) while failing at the hard-to-verify important bits (truth, consistency, novelty).
I see that as the death knell for general search engines built to indiscriminately index the entire web. But where that sort of search fails, opportunities open up for focused search and curated search.
Just as human navigators can find the smallest islands out in the open ocean, human curators can find the best information sources without getting overwhelmed by generated trash. Of course, fully manual curation is always going to struggle to deal with the volumes of information out there. However, I think there is a middle ground for assisted or augmented curation which exploits the idea that a high quality site tends to link to other high quality sites.
One thing I'd love is to be able to easily search all the sites in a folder full of bookmarks I've made. I've looked into it and it's a pretty dire situation. I'm not interested in uploading my bookmarks to a service. Why can't my own computer crawl those sites and index them for me? It's not exactly a huge list.
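For what it's worth, the minimal version of this isn't much code. Here's a rough sketch in Python using only the standard library; it assumes a bookmarks.txt with one URL per line (the export step, the filename, and the crude AND-search are my simplifications, not any real tool):

    # Minimal local crawler + inverted index over a personal bookmark list.
    # No politeness delays, retries, or JS rendering - just a sketch.
    import re
    import urllib.request
    from collections import defaultdict
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collects visible text, skipping script/style contents."""
        def __init__(self):
            super().__init__()
            self.parts, self._skip = [], False
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip = True
        def handle_endtag(self, tag):
            if tag in ("script", "style"):
                self._skip = False
        def handle_data(self, data):
            if not self._skip:
                self.parts.append(data)

    def fetch_text(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = TextExtractor()
        parser.feed(html)
        return " ".join(parser.parts)

    # word -> set of URLs whose page text contains it
    index = defaultdict(set)
    with open("bookmarks.txt") as f:
        for url in (line.strip() for line in f if line.strip()):
            try:
                text = fetch_text(url)
            except Exception as e:
                print(f"skipping {url}: {e}")
                continue
            for word in set(re.findall(r"[a-z0-9]+", text.lower())):
                index[word].add(url)

    def search(query):
        """Return URLs containing every query term (simple AND search)."""
        terms = re.findall(r"[a-z0-9]+", query.lower())
        if not terms:
            return set()
        hits = index[terms[0]].copy()
        for term in terms[1:]:
            hits &= index[term]
        return hits

    print(search("curated search"))

A real version would want better HTML-to-text handling, persistence (SQLite, say), and some ranking, but the core of "crawl my own bookmarks and search them locally" really is this small.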
It’s already been happening but now it’s accelerated beyond belief. I saw a video about how WW1 reenactment photos end up getting reposted away from their original context and confused with original photos to the point it’s impossible to tell unless you can track it back to the source.
Now most of the photos online are just AI generated.
Our best technology at present requires teams of people to operate and entire legions to maintain. This leads to a sort of balance: one single person can never go too far down any path on their own unless they convince others to join/follow them. That doesn't make this a perfect guard - we've seen it go horribly wrong in the past - but, at least in theory, it provides a dampening factor. It requires a relatively large group to go far along any path, towards good or evil.
AI reduces this. How greatly it reduces it - whether to only a handful, to a single person, or even to zero people (putting itself in charge) - seems not to change the danger of the reduction.
Concentrated power is kinda a pre-requisite for anything bad happening, so yes, it's more likely in exactly the same way that given this:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
"Linda is a bank teller" is strictly more likely than "Linda is a bank teller and is active in the feminist movement" — all you have is P(a)>P(a&b), not what the probability of either statement is.
Why does an AI need the ability to "dislike" to calculate that its goals are best accomplished without any living humans around to interfere? Superintelligence doesn't need emotions or consciousness to be dangerous.
I'm not so sure it will be that either; more likely it would be multiple AIs essentially at war with each other over access to GPUs, energy, or whatever materials are needed to grow, if/when that happens. We will end up as pawns in this conflict.
Given that even fairly mediocre human intelligences can run countries into the ground and avoid being thrown out in the process, it's certainly possible for an AI to be in the intelligence range where it's smart enough to win vs humans but also dumb enough to turn us into pawns rather than just go to space and blot out the sun with a Dyson swarm made from the planet Mercury.
But don't count on it.
I mean, apart from anything else, that's still a bad outcome.
You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?
IIRC the original idea was that the machines used our brain capacity as a distributed array, but then they decided batteries were easier to understand, while being sillier - just burn the carbon they are feeding us; it's more efficient.
If I could rewrite The Matrix, Neo would discover that the last people put themselves in the pods because the world was so fucked up, and the machines had been caretakers trying to protect them from themselves. That revision would make the first movie perfect.