Hacker News

Your brain doesn’t make a two-dimensional image based purely on the photons entering your eye. You generate a complex physical model of your surroundings that is only partially based on visual input and relies substantially on memory.


Also other senses, including proprioception. In a completely dark environment, you could swear that you see your hand waving in front of your face. That's because your brain actually does know it's there, and it's trying to create a unified model.


Kind of like how this well-trained CNN no longer relies entirely on the raw pixel values, but statistically infers a brighter image from the baseline.


There's a difference between applying known priors and making things up based on statistics. Conflating the two isn't helping anyone.


Not to harp on this, but the point is that, as I understand it, both “systems” are using exogenous information to extrapolate more data than is actually present in the source image.

That’s not to say that the same “thing” is happening at the granular level at all.

But this is distinctly different from standard filtering functions, which can only work with entropy already present in the source image. So there’s a neat distinction.

The output from the CNN is essentially an “artist’s interpretation” of the source image. As such, there could be “clarifying details” in the output that were in fact totally invented and not actually present in the source.
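To make the distinction concrete, here's a toy NumPy sketch (all names and data are hypothetical, and a least-squares regressor stands in for the CNN): a classical filter's output is a combination of the input's own pixel values, so an all-black input stays black, while a model fit on bright training data will predict brightness for that same input purely from its learned prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# A classical filter (3-tap moving average): every output value is a
# convex combination of input values, so it can only rearrange
# information already present in the source.
def moving_average(x):
    return np.convolve(x, np.ones(3) / 3, mode="same")

dark = np.zeros(8)  # an all-black 1-D "image"
assert moving_average(dark).max() == 0.0  # nothing to amplify

# A "learned" enhancer: a linear map with bias, fit by least squares
# on (dim input, bright target) pairs -- a stand-in for a trained CNN.
train_x = rng.uniform(0.0, 0.1, size=(100, 8))  # dim training inputs
train_y = train_x * 5 + 0.5                     # bright targets
A = np.hstack([train_x, np.ones((100, 1))])     # append bias column
W, *_ = np.linalg.lstsq(A, train_y, rcond=None)

def learned_enhance(x):
    return np.hstack([x, 1.0]) @ W

# On the all-black input, the model still outputs nonzero brightness:
# detail inferred from its training statistics, not from the pixels.
print(learned_enhance(dark))
```

The bias term here plays the role of the prior: even with zero signal in, the model reproduces what its training set taught it to expect, which is exactly the "invented detail" concern in the comment above.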



