Hacker News

I don't disagree with any of your examples, but I would interpret them differently. There is certainly a fair amount of "extrapolating" going on subconsciously. Our brains attempt to extract higher level meaning from sensory input (such as rotation or relative size of objects). This is a sort of knowledge that is based on the totality of sensory input received up until that point (i.e. the experience that a silhouette is likely a 3D object that is spinning in a particular direction). But I don't consider this knowledge as being distinct from the sensory input itself, rather an abstraction over a set of similar inputs that give it meaning.

Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious mind pieces together chunks of images from my experiences with horses and assembles something reasonably close. To an extent, the stuff of mental computation is our memories of sensory inputs themselves, or abstractions over similar classes of inputs.

Thinking about it further, our ideas may not be as far apart as they seem.

Do you consider the "sensory input" to be, say, the light waves hitting the retina, or the set of neural states that induces a "qualia" experience of sight? In my explanation I was treating the qualia as the sensory input rather than the frequencies of light. Perhaps you're using the other definition?



> Do you consider the "sensory input" as, say, the light waves hitting the retina, or the set of neural states triggered that induces a "qualia" experience of sight?

I consider sensory input to be everything from the retina up to the point when you become aware that a horse just passed you.

I think that only this high-level information gets stored and used for all intellectual activity. The actual sight, sound, and smell of a horse are stored only to the extent that lets you recognize horses better in the future; they're not part of any reasoning you might have later about why the horse was there, where it was going, and whether it would be cool to own a horse. You use an abstract representation of a horse for all those thoughts.

> Personally, when I imagine a horse, I don't imagine some abstraction of a horse. My subconscious minds pieces together chunks of images from my experiences with horse-images and puts together something reasonably close.

You feel that, but if you tried to draw or sculpt a horse you'd see how many of the pieces you thought you recalled you actually made up, or have no idea what they really look like. If I'm not mistaken, you admit that the horse you try to imagine gets rebuilt from bits and pieces that are stitched together. In my opinion, the foundation of that construct is the internal abstract representation of the horse concept.

> (i.e. the experience that a silhouette is likely a 3D object that is spinning in a particular direction)

In my opinion the brain doesn't switch between spin-right and spin-left, but between "this person is slightly above me" and "this person is slightly below me". The change in the direction of rotation is just what tells you very clearly that your brain has switched. Not only does the perception of the 3D object change, but the whole scene does, the relation between the observer and the object.


>If I'm not mistaken you admit that the horse you try to imagine gets rebuilt from bits and pieces that are stitched together. In my opinion foundation of that construct is that internal abstract representation of a horse concept.

The way I imagine this works is that our sensory input fires some particular set of neurons, which accounts for our sensory experience of the horse. When we recall a mental image of a particular horse, our brain attempts to recreate, as best it can, the neural firing pattern from the actual sensory input. Of course, this pattern gets distorted, as we do not remember specific images as a whole (unless one has a photographic memory), but as pieces of images that represent certain abstractions over portions of a subject. These patterns are recreated by firing certain "bootstrap" neurons (memory units) that downstream cause the pattern to be recreated.

Expanding on this further, I can imagine our image-storage system being something like a many-dimensional quadtree, except instead of just spatial dimensions it also extracts colors, shapes, patterns, textures, etc. So different meaningful concepts are stored in different layers of the neural network, and some approximation of the original can be recreated on demand. This can certainly be considered an abstract representation, yet it is still tied to and semantically similar to a raw 2D mapping of the image. The difference is mainly storage efficiency, due to compressing similar concepts learned from our experiences.
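To make the quadtree analogy concrete, here's a minimal sketch of ordinary 2D quadtree compression: recursively split an image, and wherever a region is nearly uniform, store a single value instead of every pixel. The names (`build`, `reconstruct`) and the variance threshold are made up for illustration; this is just the lossy-compression idea, not a claim about how the brain actually stores images.

```python
def build(img, x, y, size, threshold=10.0):
    """Return ('leaf', mean) for a near-uniform region, else
    ('node', [four child subtrees]) covering the four quadrants."""
    vals = [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]
    mean = sum(vals) / len(vals)
    # Compress a near-uniform (or single-pixel) region to one stored value.
    if size == 1 or max(abs(v - mean) for v in vals) <= threshold:
        return ('leaf', mean)
    h = size // 2
    return ('node', [build(img, x,     y,     h, threshold),
                     build(img, x + h, y,     h, threshold),
                     build(img, x,     y + h, h, threshold),
                     build(img, x + h, y + h, h, threshold)])

def reconstruct(node, x, y, size, out):
    """Fill `out` with the approximation stored in the tree."""
    kind, payload = node
    if kind == 'leaf':
        for dy in range(size):
            for dx in range(size):
                out[y + dy][x + dx] = payload
        return
    h = size // 2
    reconstruct(payload[0], x,     y,     h, out)
    reconstruct(payload[1], x + h, y,     h, out)
    reconstruct(payload[2], x,     y + h, h, out)
    reconstruct(payload[3], x + h, y + h, h, out)

# Usage: an 8x8 grayscale image, a dark square on a light background.
N = 8
img = [[200] * N for _ in range(N)]
for r in range(2, 6):
    for c in range(2, 6):
        img[r][c] = 50
tree = build(img, 0, 0, N)
approx = [[0] * N for _ in range(N)]
reconstruct(tree, 0, 0, N, approx)
```

For this image the tree stores 16 leaves instead of 64 pixels, and the reconstruction is exact because every leaf happens to cover a uniform region; with a noisier image the same threshold would trade fidelity for fewer leaves, which is the "compressing similar concepts" trade-off in miniature.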



