
> Every deep learning system has tacit knowledge: it knows a chair when it sees one but can't explain how it knows.

A traditional computer program that can find the derivative of sin(x) also cannot explain how it knows.



> A traditional computer program that can find the derivative of sin(x) also cannot explain how it knows.

Oh but it could, and that's the point. Some computer differentiation techniques just follow the same rules you learned when you took calculus. They typically don't show you which rules they followed, but they easily could. Other differentiation techniques are more exotic but there's no reason they couldn't show you the chain of computations and/or deductions they went through to arrive at that derivative. Such programs can easily justify their results and even teach humans calculus if configured properly.
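To make this concrete, here's a minimal sketch (hypothetical toy code, not any real CAS) of a rule-based differentiator that logs which calculus rule it applies at each step, so it can "show its work" the way the comment above describes:

```python
# Toy symbolic differentiator over nested tuples like ("sin", "x").
# Each recursive step appends the calculus rule it used to `trace`,
# so the final answer comes with a human-readable justification.

def diff(expr, trace):
    """Differentiate expr with respect to x, logging every rule applied."""
    if expr == "x":
        trace.append("d/dx x = 1 (base case)")
        return 1
    if isinstance(expr, (int, float)):
        trace.append(f"d/dx {expr} = 0 (constant rule)")
        return 0
    op = expr[0]
    if op == "sin":
        u_prime = diff(expr[1], trace)  # differentiate the inner expression first
        trace.append("d/dx sin(u) = cos(u) * u' (chain rule)")
        return ("*", ("cos", expr[1]), u_prime)
    if op == "+":
        trace.append("(f + g)' = f' + g' (sum rule)")
        return ("+", diff(expr[1], trace), diff(expr[2], trace))
    if op == "*":
        trace.append("(f * g)' = f'g + fg' (product rule)")
        return ("+", ("*", diff(expr[1], trace), expr[2]),
                     ("*", expr[1], diff(expr[2], trace)))
    raise ValueError(f"no rule for {op!r}")

trace = []
result = diff(("sin", "x"), trace)
print(result)            # ('*', ('cos', 'x'), 1)  i.e. cos(x) * 1
print("\n".join(trace))  # the chain of rules that justifies the answer
```

Real systems like SymPy don't surface this trace by default, but nothing about the technique prevents it: the rules are explicit, so the justification is just a byproduct of running them.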

Contrast that with the chair example. It is impossible right now to write a program that can show a human the chain of reasoning it went through to decide some image is a chair, because no such chain exists. There's a giant composition of linear maps and nonlinear threshold functions with millions of coefficients, but there's no chain of reasoning.
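As a hedged illustration of that point, here is a tiny feed-forward "classifier" with made-up weights (the shapes and the random weights are invented for the example, not from any trained model). Even with full access to every coefficient, there is no step you can point to and call a reason; it's only multiply, add, threshold:

```python
import numpy as np

# Stand-in "learned" weights -- in a real network these come from training,
# and there are millions of them rather than a couple dozen.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

def classify(pixels):
    h = np.maximum(0, W1 @ pixels + b1)  # linear map + nonlinear threshold (ReLU)
    scores = W2 @ h + b2                 # another opaque linear map
    return int(np.argmax(scores))        # label 0 ("chair") or 1 ("not chair")

label = classify(np.array([0.2, 0.5, 0.9]))
```

Unlike the differentiator, there is nowhere to hang a `trace.append(...)`: no intermediate value corresponds to a rule or a concept a human could recognize.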


I'm not sure a human can explain how they know a chair is a chair, either. They can come up with a post-hoc rationalisation, but that's not guaranteed to really represent the decision-making process they went through.

At best you get an answer that describes one or more conscious decisions and leaves the unconscious decisions out, such as "it looks a lot like a stool because it's low to the ground and has three legs, but it has a back, so I think it's a chair"; when the real answer is that they have a bunch of pattern-matching visual neurons, and those neurons feed into other neurons that detect more complicated patterns, and the concept of a chair eventually emerges.


That lack of a chain of reasoning just doesn’t feel any more significant to me than the fact that a human also cannot endlessly regress upward, explaining every bit of knowledge they have or every reason behind a decision. Likewise, the computer algebra software can only answer “how did you know that?” so many times in a row.


People can't explain how they know something either. They know it has something to do with their brains, but they don't know how exactly the mechanism works.

At a certain level, "knowledge" is baked into the execution hardware.



