Hacker News

The response is a bit knee-jerk and emotional. A lot of people find it upsetting that human thinking can be approximated by AI.

The demands for proof stem from this emotional response and from a misunderstanding of how science actually works. There is plenty of evidence and there are theories. Most of that evidence is empirical: it suggests that GPT-4 can do some interesting things and that artificial neurons cluster and fire in ways similar to those in real brains. That's a theory backed by some of this evidence. And it is of course not entirely accidental, because that was roughly the intention of those who constructed the neural networks. You could say that neural networks apparently work as intended.

The scientific way to dismiss that theory would be to prove it wrong. That's what science does: gather evidence and facts, come up with theories that explain them, and then try to find evidence that counters those theories and explanations. Then you replace them with better ones. Falsifying theories is how science moves forward: you don't prove them right, you fail to prove them wrong. Insisting something is false without doing that is very unscientific.

Human brains are more than just neurons, of course. The article actually calls that out. There's a lot of chemistry in our brains that directly controls what they do. That's why people enjoy taking certain drugs; those literally change the way our brains operate. Coffee is a drug that many people find useful. Learning in particular is associated with endorphins: you get a little endorphin rush when you figure something out. Some people like this so much that they become scientists.

Artificial neural networks don't really model any of that. But nobody is saying that they are the same; just that they do similar things in similar ways, as can be observed via experiments and the use of MRI scanners. We don't really understand why that is, but the similarity is easily observed, and a valid theory is that an ANN captures enough of the complexity of a brain to be able to do interesting things. That is backed up by plenty of empirical evidence in the form of people having used GPT-4. People seem to struggle to articulate what exactly is missing that would prove that theory wrong. Lots of people want it to be wrong; not a lot are coming up with better theories. But of course some scientists are working on that, and the prospect of them figuring it out is what truly scares some people.



> The scientific way to dismiss that theory would be proving it wrong.

That's the complete opposite of how it is supposed to work: you are supposed to provide evidence that a theory is right, not assume it is right until proven wrong.

> just that they do similar things in similar ways

Completely baseless. Neural nets can be trained on all kinds of phenomena, the weather for example. No one says that the weather works like a neural net. For the same reason there is no basis for saying (without specific evidence) that the brain works like a neural net just because both produce the same output.
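To make the point concrete, here is a minimal sketch (not from the thread; the "weather" data is a made-up noisy sine wave standing in for seasonal temperatures): a tiny neural net can learn to reproduce such a signal, yet nothing about the underlying weather thereby "works like" a neural net.

```python
# A toy one-hidden-layer network fit to a weather-like signal with plain
# gradient descent. The net matches the output without the data source
# sharing any mechanism with it. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Fake "weather": seasonal sine plus noise.
x = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# One hidden layer of 16 tanh units.
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
initial_loss = np.mean((pred0 - y) ** 2)

lr = 0.1
for _ in range(3000):
    h, pred = forward(x)
    err = pred - y                    # gradient of MSE w.r.t. predictions
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
final_loss = np.mean((pred - y) ** 2)
print(initial_loss, final_loss)  # the fit improves with training
```

The net ends up producing roughly the same output as the "weather", which says nothing about how weather is generated; mirroring input-output behavior is not evidence of a shared mechanism.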

> People seem to struggle to articulate what it is that is missing exactly that would prove that theory wrong.

Again, there is no evidence that it is right. When there is no evidence that a theory is right, there is no burden to show that it is wrong.



