
Posted

In a handstand the world looks upside down.  Why doesn't it sound upside down, too?

 

It does, but your brain compensates. Similar processes are at work when you are able to pick out a conversation in a crowded room.

Posted
I'm not quite sure what you are getting at but I'll have a go.

 

The wavelength of light is tiny compared with the dimensions of the retina, so an image can be resolved there with a distinct right, left, top and bottom. The wavelength of audible sound is on the order of a million times longer, so an equivalent "retina" would have to be correspondingly large. Hearing therefore employs an eardrum instead, which integrates the overall waveform and sends a composite signal via the hammer, anvil and stirrup to the cochlea and on to the brain.
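A quick back-of-envelope check of that scale difference (a Python sketch; the 500 nm green light and 1 kHz tone are assumed example values):

```python
# Rough comparison of light and sound wavelengths (assumed example values).
light_wavelength = 500e-9          # green light, ~500 nm
sound_wavelength = 343.0 / 1000.0  # 1 kHz tone at 343 m/s -> ~0.34 m

ratio = sound_wavelength / light_wavelength
print(f"sound/light wavelength ratio: {ratio:.0f}")  # on the order of a million
```

Lower pitches make the gap even bigger: a 20 Hz tone has a wavelength of about 17 m.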

 

Such directionality as we have in hearing is, I believe, achieved by comparing the relative intensity and arrival time of a sound at one ear with the other, using high-level processing in the brain.
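That timing cue can be sketched in code: recover a source's bearing from the interaural time difference (ITD) by cross-correlating the two ear signals. The head width (0.21 m), speed of sound (343 m/s) and the 30-degree test bearing are assumed example values, not measured ones:

```python
import numpy as np

# Estimate a source's bearing from the interaural time difference (ITD).
c, head = 343.0, 0.21          # speed of sound (m/s), assumed ear separation (m)
fs = 44100                     # sample rate (Hz)

rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)  # a broadband noise burst as the "sound"

# Simulate a source 30 degrees to the right: the left ear hears it late.
true_lag = int(round(fs * (head / c) * np.sin(np.radians(30.0))))
left = np.concatenate([np.zeros(true_lag), sig])
right = np.concatenate([sig, np.zeros(true_lag)])

# Brute-force cross-correlation over plausible lags to recover the delay.
max_lag = int(fs * head / c) + 1
best_lag, best_val = 0, -np.inf
for lag in range(-max_lag, max_lag + 1):
    if lag >= 0:
        v = float(np.dot(left[lag:], right[:len(right) - lag]))
    else:
        v = float(np.dot(left[:lag], right[-lag:]))
    if v > best_val:
        best_lag, best_val = lag, v

itd = best_lag / fs
angle = np.degrees(np.arcsin(np.clip(itd * c / head, -1.0, 1.0)))
print(f"estimated bearing: {angle:.0f} degrees")  # close to the simulated 30
```

The recovered angle is only approximate because the true delay gets rounded to a whole number of samples.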

Posted

The image formed by an eye is two-dimensional, and combined with that of a second eye it becomes three-dimensional. The signal from an ear is one-dimensional, like a point. You can't turn a point upside down, because it is the same on all sides. The experience of surround sound comes from combining the two ears' signals with distance cues.

Posted

The image formed by an eye is two-dimensional, and combined with that of a second eye it becomes three-dimensional. The signal from an ear is one-dimensional, like a point. You can't turn a point upside down, because it is the same on all sides. The experience of surround sound comes from combining the two ears' signals with distance cues.

 

I will pick a minor nit: to get surround sound you have to have movement, either of your head or of the source, over time, which causes variation in the sound received. The brain is very good at analyzing those streams of data (within certain bands of the auditory range) to produce a three-dimensional model, because tigers ate everyone who wasn't good at it. Unnaturally low bass notes are extremely hard to localize. Higher-pitched things like voices, screams, and rustling grass are easy.
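That frequency dependence tracks wavelength: below a few hundred hertz the wavelength dwarfs the head, so the head casts no acoustic "shadow" and the level difference between the ears vanishes. A quick sketch (speed of sound and the ~0.21 m head width are assumed values):

```python
# Wavelength vs. an assumed ~0.21 m head width at various frequencies.
c, head = 343.0, 0.21
for freq in (50, 500, 5000):   # bass, midrange, treble (Hz)
    wavelength = c / freq
    print(f"{freq:>4} Hz: {wavelength:.3f} m ({wavelength / head:.1f} head widths)")
```

At 50 Hz the wavelength is dozens of head widths, so both ears see essentially the same pressure; at 5 kHz it is a fraction of a head width, and the interaural differences become strong cues.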

 

But short events are hard to localize without repetition. Try to localize a single gunshot in the woods. Try again with three shots fired from the same position (a conventional signal that you are in trouble and need assistance). I have performed this experiment myself.

Posted

I have a minor nit to pick too... to get surround sound, you don't have to have movement; you simply have to have a time difference between when the sound arrives from one speaker or another. The brain creates an acoustical map based on the timing of when it hears the same sound in one ear or the other. If it hears the sound in the right ear before it hears it in the left, the sound is perceived to be to your right, and vice versa for sounds heard first on the left. Likewise, a sound that arrives directly before its background reflections is perceived to be in front of you, and the reverse for sounds whose reflections arrive first. The delays between speakers are very small and typically come from the placement of the actual microphones used for each channel, but the effect can also be generated artificially with sophisticated sound recording equipment or software. (There's actually a little more to it than what I've said, but that's the layman's explanation.)
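That delay trick is easy to reproduce in software. A minimal sketch (the 0.5 ms delay and 440 Hz tone are arbitrary example values; natural ITDs max out around 0.6-0.7 ms for a human head):

```python
import numpy as np

# Pan a mono tone to the listener's right by delaying the left channel.
fs = 44100
t = np.arange(int(0.1 * fs)) / fs            # 100 ms of samples
mono = np.sin(2 * np.pi * 440.0 * t)         # a 440 Hz test tone

delay = int(fs * 0.0005)                     # 0.5 ms interaural delay, in samples
left = np.concatenate([np.zeros(delay), mono])   # left ear hears it late
right = np.concatenate([mono, np.zeros(delay)])  # right ear leads

# Columns: left, right. Because the right channel leads,
# headphones will place the tone to the listener's right.
stereo = np.stack([left, right], axis=1)
print(stereo.shape)
```

Writing `stereo` to a WAV file and listening on headphones makes the effect obvious, even though both channels have identical volume.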

 

Sound is a longitudinal compression wave; as such, it has no "up" or "down". It has regions of higher and lower density, or in some sense a forward and backward along the direction of propagation, and a distance from the point source of the wave.

Posted

I have a minor nit to pick too... to get surround sound, you don't have to have movement; you simply have to have a time difference between when the sound arrives from one speaker or another. The brain creates an acoustical map based on the timing of when it hears the same sound in one ear or the other. If it hears the sound in the right ear before it hears it in the left, the sound is perceived to be to your right, and vice versa for sounds heard first on the left. Likewise, a sound that arrives directly before its background reflections is perceived to be in front of you, and the reverse for sounds whose reflections arrive first. The delays between speakers are very small and typically come from the placement of the actual microphones used for each channel, but the effect can also be generated artificially with sophisticated sound recording equipment or software. (There's actually a little more to it than what I've said, but that's the layman's explanation.)

 

Sound is a longitudinal compression wave; as such, it has no "up" or "down". It has regions of higher and lower density, or in some sense a forward and backward along the direction of propagation, and a distance from the point source of the wave.

 

Perhaps I misspoke or was unclear. Stereo gets you a direction, but stereo over time can also get you distance. It takes that "over time" element to extract distance from natural effects like echo and reverberation. I think we are in agreement about the mechanism and the effect.

Posted

I don't mean to be obtuse, but in high school my physics teacher told me that 3D sound was just "playing with volume". 

 

Is this too simplistic?

 

It might be, but in general you want the same sound to reach your ears from several directions: directly, and reflected off nearby surfaces. Listening to sound in a truly anechoic chamber is downright creepy by comparison.
