
Connectionism is not a predictive theory. Rather, it is the manifestation of a depressingly common fallacy in science: assigning a sacred mystery[1] to an as-yet unexplained phenomenon.

How do your connectionist networks of simple interconnected units actually give rise to general AI? Answer that and you'll have the "shared language of representation" that the OP was talking about.

[1] http://lesswrong.com/lw/iv/the_futility_of_emergence/



I completely agree that "super magic emergent intelligence" is not an explanation, but a mystery. But I think it's worth noting that the same applies for this "super magic universal language of representation" -- it's not an explanation, it's a mystery.

It's also important to realize that these aren't beliefs, or truth claims, or scientific claims. They're philosophical perspectives; no more, no less. They might guide the intuition, but they have no bearing on the science itself. Someone who doesn't clearly understand the distinction between the philosophy and the science of a topic risks either contributing to the "depressingly common fallacy in science" you mention, or mistaking a philosophical argument for a scientific one, and hence, through blurred vision, believing they see a fallacy where in fact there is none.

One way of looking at it is that both views are different philosophical angles on the same thing (or at least the same problem/mystery). A connectionist sees this conception of a "shared language of representation" as assigning a sacred mystery (what does this language actually consist of, precisely?) to an as-yet unexplained phenomenon, in the same way a computationalist sees this of the connectionist's learning algorithm (how does this learning algorithm work, precisely?).

The reason I highlight this philosophical symmetry is to emphasize that these are merely different intuitive mindsets developed towards approaching the common mystery of general intelligence.

The bottom line is that so long as human-like "general intelligence" remains a mystery (to the extent that we can't fully replicate it in computers), it's going to be an "unexplained phenomenon", and thus any theories built around it will have some "magic" hole somewhere -- some key element devoid of predictive power. (Because if there were no such hole, then by definition we'd already have it all figured out.)



