Hearing is kind of fascinating. If you calculate the power supplied to the eardrum near 0 dB SPL, we're talking about attowatts of sound power generating movements in the single-digit-nanometer range, which is still picked up and converted to a neuronal response. Meanwhile the whole system suppresses the far greater pressure fluctuations from your heartbeat and other bodily movements quite well, and can reasonably cope with a ~120 dB dynamic range (though the top of that range causes damage fairly quickly).
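For anyone who wants to sanity-check the attowatt figure, here's the back-of-envelope version. The eardrum area is a commonly quoted round number, not a precise one:

```python
# Back-of-envelope check of the attowatt claim (assumed round numbers):
# standard 0 dB SPL reference intensity times a typical eardrum area.
I0 = 1e-12             # W/m^2, reference intensity at 0 dB SPL
eardrum_area = 5.5e-5  # m^2 (~55 mm^2, a commonly quoted figure)

power = I0 * eardrum_area  # watts delivered to the eardrum
print(f"{power:.1e} W ~= {power * 1e18:.0f} attowatts")  # 5.5e-17 W ~= 55 attowatts
```

So tens of attowatts at the threshold of hearing, which really is a staggeringly small power.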
Pro DJ here... Agree with the rest of your comment except for... "Where vision shines is distinguishing where the signal comes from."
Try closing your eyes and having someone else snap their fingers around you: you'll know immediately where that sound came from. Your brain accomplishes this by registering the slight time difference between when your left and right ears pick up the noise, along with the difference in volume between the two ears.
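The time-difference cue (the "interaural time difference") is easy to sketch: cross-correlate the two ear signals and read off the lag of the peak. A toy version with made-up numbers:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) by
    cross-correlating the two ear signals. A positive lag means the
    left-ear signal is delayed (the sound hit the right ear first)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # peak position -> lag in samples
    return lag / fs

# Toy demo: a click that reaches the right ear ~0.5 ms before the left.
fs = 44100
right = np.zeros(1000)
right[100] = 1.0                 # click arrives at the right ear first
left = np.roll(right, 22)        # left ear hears it 22 samples (~0.5 ms) later
itd = estimate_itd(left, right, fs)
print(f"ITD ~= {itd * 1000:.2f} ms")  # ITD ~= 0.50 ms
```

Real heads get ITDs up to roughly 0.6-0.7 ms (ear-to-ear distance over the speed of sound), and the brain resolves differences far smaller than one audio sample, but the cross-correlation picture is the right intuition.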
DJs can develop this innate ability to take two songs playing at different speeds and speed one up or slow it down until they match. This is called "beat matching", and before software made it more accessible (i.e. more automated) it was the key skill in DJ'ing, since it allows seamless transitions between songs.
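The arithmetic behind beat matching is just a tempo ratio, since a track's BPM scales linearly with playback speed. A hypothetical example with made-up BPMs:

```python
# Hypothetical beat-matching arithmetic: tempo scales linearly with
# playback speed, so matching two tracks is a simple ratio.
playing_bpm = 120.0   # track currently on air
incoming_bpm = 126.0  # track being cued up

# Percent adjustment needed on the incoming deck to match the live track.
adjustment = (playing_bpm / incoming_bpm - 1) * 100
print(f"pitch fader: {adjustment:+.2f}%")  # pitch fader: -4.76%
```

The DJ's skill, of course, is doing this by ear: nudging the pitch fader until the beats stop drifting, with no BPM readout at all.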
I still find your comment insightful and interesting, I just had to nitpick that one part.
They are orthogonal. Hearing has phase and continuous wideband frequency information but only two channels. Eyesight has millions of channels but discretized narrowband frequency and no phase information.
A many-channel sense with full frequency, phase, polarization, ... information would presumably be very hard to process usefully.
If so, then it's even more stunning that the communication channel to the auditory cortex is quiet enough to carry a massive disturbance + a tiny signal, and still preserve that signal after subtracting out the disturbance.
That would be very difficult to accomplish electronically.
The brain doesn't compute it: the cochlea is an FFT! It's a cone-shaped structure (curled up into a spiral) that is covered in hair cells and neurons. The fatter end picks up low frequencies and the thinner end picks up high frequencies. So the brain takes in audio in the frequency domain, not as continuous samples!
(I did an anatomy degree almost 20 years ago now, so this is all from memory.)
The person you are responding to is right, and your assertion is incorrect. They already know the ear delivers a frequency-domain representation; they were disagreeing with the FFT part specifically.
The FFT is a specific mathematical construction that carries out a Fourier transform efficiently through a hierarchy of "butterfly" steps. The ear has no such thing. It is just ~20,000 hair cells, each resonant at a specific frequency. That is, each computation is local, very unlike the FFT.
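To make the contrast concrete, here's a crude sketch of the "bank of local resonators" picture: each channel is an independent two-pole filter that never shares intermediate results with its neighbours, which is exactly what the FFT's butterfly network does. This is illustrative only, not a cochlear model:

```python
import math

def resonator_bank(signal, fs, freqs, r=0.995):
    """Crude cochlea-like analysis: one independent two-pole resonator
    per frequency. Each channel's computation is entirely local, unlike
    the shared butterflies of an FFT. Returns mean output energy per
    channel. (Illustrative sketch, not a real auditory model.)"""
    energies = []
    for f in freqs:
        # Two-pole resonator: y[n] = x[n] + 2r*cos(w)*y[n-1] - r^2*y[n-2]
        w = 2 * math.pi * f / fs
        b1, b2 = 2 * r * math.cos(w), -r * r
        y1 = y2 = 0.0
        energy = 0.0
        for x in signal:
            y = x + b1 * y1 + b2 * y2
            y2, y1 = y1, y
            energy += y * y
        energies.append(energy / len(signal))
    return energies

# A 1 kHz tone should excite the 1 kHz channel far more than its neighbours.
fs = 16000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(4000)]
bank = resonator_bank(tone, fs, [500, 1000, 2000])
print(max(range(3), key=lambda i: bank[i]))  # 1 (the 1 kHz channel wins)
```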
I can roughly recall the mechanism by which it worked, and was told it was an analogue of how the Fourier transform works; I vaguely remember working through one on paper. The spiral of the cochlea seems analogous to how a harmonium works. Does the earlier poster have a point that the cochlea, as a physical object, performs a Fourier transform (though perhaps not a fast one)?
The Fourier transform is different from the FFT, which is the Fast Fourier Transform. The FFT is an optimization that decomposes a signal into multiple smaller signals, recursively transforms those, and then combines the results.
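For the curious, that recursive structure looks like this: the textbook radix-2 Cooley-Tukey form, with the "butterfly" combine step that the earlier comments are arguing the ear doesn't have:

```python
import cmath

def fft(x):
    """Textbook radix-2 Cooley-Tukey FFT (len(x) must be a power of two).
    The recursive even/odd split plus the butterfly combine is exactly
    the structure under discussion."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # recursively transform even-indexed samples
    odd = fft(x[1::2])    # ...and odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle           # butterfly: top output
        out[k + n // 2] = even[k] - twiddle  # butterfly: bottom output
    return out

# Sanity check: an impulse transforms to a flat (all-ones) spectrum.
spectrum = fft([1, 0, 0, 0])
print([round(abs(c), 6) for c in spectrum])  # [1.0, 1.0, 1.0, 1.0]
```

The payoff of all that structure sharing is O(n log n) instead of the O(n²) of the naive transform, which is precisely the optimization the ear's bank of independent resonators doesn't (and needn't) perform.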
The accurate way of saying this is that the cochlea does Fourier analysis.
Folks around here don't handle ambiguity well, and are treating your comment like it's wrong, rather than basically correct except for a harmless conflation of an algorithm for something with the thing itself.
No, that's also not accurate. Words mean things. We know that parts of the cochlea respond to different frequencies, but that does not remotely imply that the auditory system is doing Fourier analysis, which is a high-level mathematical transform with a specific set of formulations. "Fourier analysis" does not describe every possible method of extracting frequency content from a signal.
> Fourier analysis of discharge patterns in response to sinusoidal acoustic stimulation provides a consistent and repeatable measure of response phase and amplitude. The Journal of the Acoustical Society of America 58, 867 (1975); https://doi.org/10.1121/1.380735
> sinusoidal frequency domain decomposition of sound waves is a key mechanical phenomenon exploited by our hearing system, leading to in effect a frequency domain transformation of the temporal pattern of compressions and rarefactions that we term sound.

https://uncommondescent.com/video/hearing-the-cochlea-the-fr...
That one has a video.
Your case is roughly as incoherent as one which claims that a thrown ball does not perform Newtonian physics. It doesn't have to.
Now that we've disposed of the nonsense that cochlear response is not meaningfully modeled with Fourier analysis, interested parties might have fun with research into all the ways this model is not perfect. I've got a paper loaded in my reader claiming it's actually wavelet analysis, for when I have time and inclination to read it.