
>there is no simple symbolic path that is human understandable towards human level intelligence

I imagine that, piece by piece, if we really wanted to, we could look at the 175 billion parameters in GPT-3, test which are 'active' when this poem is written and which are active when it produces that imitation of copypasta, and through a torturous interrogation, find that one particular parameter, say, weights the 0.001% likelihood that you would use an accented é during a particular rhetorical flourish in certain contexts. Perhaps another parameter codes a meta-meta-meta abstraction about how a meta-meta rule governs a meta-rule for how to use a sometimes-used linguistic rule.
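The "test which are active" idea can be sketched in miniature. Below is a toy NumPy version of the recipe: feed two inputs to a small network and ask which hidden units fire for one but not the other. The network, weights, and inputs are all invented for illustration; real interpretability work on models like GPT-3 follows the same basic shape at vastly larger scale and with far more care.

```python
# Toy sketch (not GPT-3): which hidden units are "active" for one
# input but not another? All weights and inputs here are made up.
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer ReLU network with random weights.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))

def hidden_activations(x):
    """Return the post-ReLU hidden activations for input x."""
    return np.maximum(W1 @ x, 0.0)

# Two made-up inputs standing in for "writes a poem" vs.
# "imitates copypasta".
x_poem = rng.normal(size=4)
x_copypasta = rng.normal(size=4)

active_poem = hidden_activations(x_poem) > 0
active_copy = hidden_activations(x_copypasta) > 0

# Units firing for one input but not the other: a crude analogue of
# asking which parameters are "for" a particular behaviour.
poem_only = np.flatnonzero(active_poem & ~active_copy)
print("units active only for the poem input:", poem_only)
```

At 8 hidden units this is trivial; the point in the comment is that at 175 billion parameters the same interrogation is merely tedious, not impossible in principle.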

And the totality of those 175 billion parameters could in principle be uncovered and described in ways that are satisfactory to humans. It would be tedious and unproductive, akin to the project of archeologists patiently uncovering a dinosaur skeleton.

But the point is it would be practically difficult, not something forbidden as a matter of principle.

More importantly, I don't think the supposed incomprehensibility to humans has relevance to anything. What's the argument supposed to be? Humans don't depend, in any explicit way, on having a conscious grasp of our own tacit knowledge. I don't know why I unconsciously shift my weight a certain way when going up stairs. This doesn't stop me from walking up stairs. And it doesn't stop us from making machines that could walk up stairs.

We need neural nets? Okay, sure, we need them. But we can run those on machines that, at the end of the day, are silicon and 0s and 1s, which are every bit the brute, formal systems that supposedly can't model intelligent things. Weren't we supposed to have encountered a barrier to what computers can do at some point in this thought exercise? Because it appears that what began as an extremely bold claim, that computers can't do X, Y, and Z, ends in a whimper, as a vague exhortation to appreciate that neural nets are in some sense structurally different from logic design. Nothing about that latter claim is making any bold statements about the limits of what computers can or can't do, which makes me feel like the argument forgot what it was supposed to be about halfway through.


