Donnadogsoth Posted May 13, 2016 In a handstand the world looks upside down. Why doesn't it sound upside down, too?
shirgall Posted May 13, 2016 It does, but your brain compensates. Similar processes are at work when you pick out a conversation in a crowded room.
Cuffy_Meigs Posted May 13, 2016 I'm not quite sure what you are getting at, but I'll have a go. The wavelength of light is tiny compared with the dimensions of the retina, so an image can be resolved there with a distinct right, left, top, and bottom. The wavelength of sound is millions of times longer, so an equivalent "retina" would have to be correspondingly large. Hearing therefore employs an eardrum instead, which integrates the overall waveform and sends a composite signal via the hammer, anvil, stirrup, cochlea, etc. to the brain. Such directionality as we have with hearing is, I believe, achieved by comparing the relative intensity and arrival time at one ear with the other, using high-level processing in the brain.
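That arrival-time comparison can be put in numbers with Woodworth's spherical-head approximation. This is only a sketch: the head radius and the 343 m/s speed of sound are assumed typical values, not anything measured in this thread.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, dry air at about 20 C
HEAD_RADIUS = 0.0875     # m, assumed typical adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a distant source at the given azimuth
    (0 degrees = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    # Path difference = r * (theta + sin(theta)); divide by c for time.
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

itd_side = interaural_time_difference(90)   # roughly 0.66 ms
itd_front = interaural_time_difference(0)   # zero: both ears equidistant
```

The entire usable range is well under a millisecond, which gives a sense of how fine the brain's timing comparison has to be.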
Will Torbald Posted May 13, 2016 The image from one eye is two dimensional, and compounded with the other's it becomes three dimensional. The sound at one ear is one dimensional, like a point in space. You can't turn a point upside down because it is the same on all sides. The experience of surround sound comes from mixing two ears and distance.
shirgall Posted May 13, 2016 I will pick a minor nit: to get surround sound you have to have movement, either by moving your head or by moving the source over time, which causes variation in the sound received. The brain is very good at analyzing the streams of data (within certain bands of the auditory range) to produce a three-dimensional model, because tigers ate everyone that wasn't good at it. Unnaturally low bass notes are extremely hard to localize. Higher-pitched things like people, screams, and rustling grass are easy. But short events are hard to localize without repetition. Try to localize a gunshot all by itself in the woods. Try again with three fired from the same position (a way to signal that you are in trouble and need assistance). I have performed this experiment myself.
EclecticIdealist Posted May 13, 2016 I have a minor nit to pick too... to get surround sound you don't have to have movement; you simply have to have a time difference between when the sound is heard from one speaker or another. The brain creates an acoustical map based on the timing of when it hears the same sound in one ear or the other. If it hears the sound in the right ear before the left, the sound is perceived to be to your right, and vice versa for sounds heard first on the left. Likewise, sounds heard in the foreground before the background are perceived to be in front of you, and vice versa. The delays between speakers are very small and typically come from the placement of the actual microphones feeding each speaker, but the effect can be artificially generated with sophisticated sound-recording equipment or software. (There's actually a little more to it than what I've said, but that's the layman's explanation.) Sound is a longitudinal compression wave; as such, it does not have an "up" or "down". It has more dense and less dense regions, in some sense a forward and backward along the direction of propagation, and a distance from the point source of the wave.
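The "acoustical map" idea above can be demonstrated in a few lines: find the delay between the two ear signals with a cross-correlation, then convert it to an angle. This is an illustrative sketch, not real spatial-audio code; the ear spacing and the far-field formula are assumed simplifications.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
EAR_SPACING = 0.18      # m, assumed distance between the ears

def estimate_azimuth(left, right, sample_rate):
    """Estimate a source's azimuth (degrees, positive = to the right)
    from the delay between the two ear signals, found as the peak of
    their cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    # Positive lag means the left signal trails, i.e. the right ear
    # heard the sound first.
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / sample_rate
    # Far-field approximation: itd = (EAR_SPACING / c) * sin(azimuth)
    sin_az = np.clip(itd * SPEED_OF_SOUND / EAR_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_az)))

# Simulate a click that reaches the right ear 10 samples earlier.
fs = 44100
sig = np.zeros(400)
sig[200] = 1.0
right = sig
left = np.roll(sig, 10)   # left ear hears the same click 10 samples later

az = estimate_azimuth(left, right, fs)   # about 26 degrees to the right
```

A ten-sample head start at 44.1 kHz is only about a quarter of a millisecond, yet it is enough to place the source well off to one side, which is exactly why the inter-ear delays EclecticIdealist describes can be so small and still work.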
shirgall Posted May 13, 2016 Perhaps I misspoke or was unclear. Stereo gets you a direction, but stereo over time can get you distance. It takes that "over time" element to get distance from natural effects like echo and reverberation. I think we are in agreement about the mechanism and effect.
luxfelix Posted May 13, 2016 Would the change in orientation lead to a change of phase as well?
shirgall Posted May 13, 2016 Phase differences relate to the same sound taking different paths to get to you. https://www.soundonsound.com/sos/apr08/articles/phasedemystified.htm
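The path-difference point is easy to quantify: two copies of the same tone that travel paths of different lengths arrive offset by a fraction of a wavelength. A minimal sketch (again assuming 343 m/s for the speed of sound):

```python
SPEED_OF_SOUND = 343.0  # m/s

def phase_difference_deg(freq_hz, path_diff_m):
    """Phase offset, in degrees, between two copies of the same tone
    whose paths to the listener differ in length by path_diff_m."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return (path_diff_m / wavelength) * 360.0 % 360.0

# For a 1 kHz tone (wavelength ~0.343 m), a half-wavelength detour
# of ~0.1715 m puts the copies 180 degrees out of phase, so they
# partially or fully cancel.
offset = phase_difference_deg(1000, 0.1715)
```

This is the mechanism behind the comb-filtering effects the linked Sound On Sound article discusses.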
ValueOfBrevity Posted May 13, 2016 I don't mean to be obtuse, but in high school my physics teacher told me that 3D sound was just "playing with volume". Is this too simplistic?
shirgall Posted May 13, 2016 It might be, but in general you want the same sound to reach your ears from a few directions, both directly and reflected off nearby surfaces. Listening to sound in a truly anechoic chamber is downright creepy in comparison.
EclecticIdealist Posted May 14, 2016 It's not just about volume; although volume plays a crucial part, timing and duplication play equally crucial parts as well.
luxfelix Posted May 15, 2016 Thank you for the link.