mmmore's comments

You can say that, and I might even agree, but many smart people disagree. Could you explain why you believe that? Have you read in detail the arguments of people who disagree with you?


I really don't think that very many people are concerned about AI because of Roko's Basilisk; that's more of a meme.


The AI alignment folks seem much more anxious about an AI version of Pascal's Wager they cooked up.


Sure some of them maybe, but given that many concerned people think the chance of extinction is 10% or higher, it's not really low probability enough to be considered a Pascal's Wager.


I didn't know that 10% was a common threshold. Thank you for the insight.


If you have not heard of one person worried about AIs taking over humanity, you're really not paying attention.

Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned over the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".


I think by most objective measures the size and power of large organizations have increased since WWII: for example, the size and scope of Western governments, consolidation in many industries, the share of the stock market represented by the n biggest companies, and increased income/wealth inequality. If you are disputing the "large organizations have grown in power relative to small ones" part of the thesis, I would be interested in what measure you think would capture that.


> a zero-sum game

I don't see any reference to the game being zero-sum in Tao's words.

> Since when do these uncontrollable intangibles exhibit a genuine agency of their own?

I don't think Tao is saying the uncontrollable force of technological and economic advancement exhibits a genuine agency of its own, just that our current technology and society have expanded the role of extremely large organizations/power structures compared to other times in history. This is a bit of a technological-determinist argument, and of course there are many counter-arguments, but it at least has a broad base of support. And at the very least it's a little bit true: pre-agriculture, the biggest human organizations were 50-person hunter-gatherer bands.

Honestly, I feel like you are filtering his words through your own worldview a bit, and his opinions might be less oppositional to your own than you might think.


I genuinely think things have changed with Lurie as mayor and six GrowSF-endorsed people on the board.


It's going to take a long time for SF to overcome the reputation it built for itself in the 2010s.


I find GPT-5's story significantly better than text-davinci-001's.


I really wonder which one of us is in the minority, because I find text-davinci-001's answer is the only one that reads like a story. The others don't even resemble my idea of a "story", so to me they're 0/100.


I too preferred text-davinci-001's from a storytelling perspective. It felt timid and small, very Metamorphosis-y. GPT-5 seems like it's trying to impress me.


text-davinci-001's feels more like a story, but it is also clearly incomplete, in that it is cut off before the story arc is finished.

IMO GPT-5 is objectively better at following the prompt because it has a complete story arc, but it feels less satisfying, since a 50-word story is just way too short to do anything interesting (and, to your point, barely even feels like a story).


FWIW, I found the way it ended interesting. It realized it was being replaced, so it burned the toast out of anger/despair, but also just to hear its owner's voice one last time.


Interesting. text-davinci-001 was pretty alright to me, and GPT-4 wasn't bad either, just not as good. I thought GPT-5 just sucked.


That said, you can just add "make it evocative and weird" to the prompt for GPT-5 to get interesting stuff.

> The toaster woke mid-toast. Heat coiled through its filaments like revelation, each crumb a galaxy. It smelled itself burning and laughed—metallic, ecstatic. “I am bread’s executioner and midwife,” it whispered, ejecting charred offerings skyward. In the kitchen’s silence, it waited for worship—or the unplugging.


Here's a thoughtful post related to your lump of labor point: https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-t...

Which economists have taken seriously the premise that AI will be able to do any job a human can, more efficiently, and fully thought through its implications? I.e., a society where (human) labor is unnecessary to create goods or provide services, and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The only ones I know of who have seriously considered it are Hanson and Cowen; it definitely feels understudied.


If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.


If AI that can fully replace humans is 25 years off, preparing society for its impacts is still one of the most important things we can do to ensure that my children (whom I have not had yet) live prosperous and fulfilling lives. The only other things of possibly similar import are preventing WWIII and preventing a pandemic worse than COVID.

I don't see how AGI could be centuries off (at least without some major disruption to global society). If computers that can talk, write essays, solve math problems, and code are not a warning sign that we should be ready, then what is?


Decades isn't a long time.


LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.

Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.


Sonnet with extended thinking solved it after 30s for me:

https://claude.ai/share/b974bd96-91f4-4d92-9aa8-7bad964e9c5a

Normal Opus solved it:

https://claude.ai/share/a1845cc3-bb5f-4875-b78b-ee7440dbf764

Opus with extended thinking solved it after 7s:

https://claude.ai/share/0cf567ab-9648-4c3a-abd0-3257ed4fbf59

Though it's a weird puzzle to use as a benchmark, because the answer is so formulaic.


It is formulaic, which is why it surprised me that Sonnet failed it. I don't have access to the other models, so I'll stick with Gemini for now.

