This post builds on the previous two about the sense of hearing, here and here. It is probably best to read at least the previous one, because it introduced the 'cone of confusion'. That concept is key to understanding why combining information from two ears is not enough to determine the direction of a sound source accurately.
To summarise: sound from a given source takes longer to arrive at the ear farthest from the source than at the nearest ear, and it also arrives more quietly at the farthest ear. The brain measures the differences in arrival time and in loudness between the ears and computes an angle between two lines; the first runs from the nearest ear to the source of the sound, and the second is the axis connecting the two ears.
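For those who like to see the geometry in numbers, here is a minimal Python sketch of that computation. The speed of sound and the ear separation are illustrative values of my own choosing, and the source is assumed to be far away compared with the ear separation:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, in air; illustrative value
EAR_SEPARATION = 0.20    # m, distance between the two ears; illustrative

def angle_from_itd(itd_seconds):
    """Angle (radians) between the interaural axis and the line to the
    source, computed from the interaural time difference, assuming the
    source is distant (plane-wave approximation)."""
    # The extra path to the far ear is c * ITD; for a distant source this
    # equals d * cos(angle), where d is the ear separation.
    cos_angle = SPEED_OF_SOUND * itd_seconds / EAR_SEPARATION
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

# A source arriving 0.3 ms later at the far ear lies about 59 degrees
# off the interaural axis:
print(np.degrees(angle_from_itd(0.3e-3)))
```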
If you know something about how to indicate a direction in 3D space, you will realise that you cannot do that with just one angle. If you wish to indicate the position of the sun in the sky from a point on the Earth's surface, you need two angles: one tells you the compass direction, and the other indicates elevation above the ground. With two ears, the one angle you get is relative to the axis between the ears. The sound can come from any direction obtained by sweeping that angle around that axis. The result is the 'cone of confusion'. Do not worry, an image may help.
Click to enlarge; copyright Gert van Dijk
Here is a new animal, one whose head is elongated enough to accommodate two pairs of ears: a front and a hind pair. Let's explore whether having more than two ears helps. The small golden globe represents a sound source. To start with, consider just the front pair of ears. I have drawn the line from the source to the nearest ear, the axis between the ears, and the angle between them. Rotating that line around the axis produces the specific 'cone of confusion' of the front ears for this particular sound source. Let's call it the 'front CoC'. By the way, the cone is drawn as if it stops at the sound source; in reality it extends into space.
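The sweep that produces the cone is easy to reproduce numerically. Below is a sketch under assumed coordinates (the front interaural axis along y, its midpoint at the origin; none of these numbers come from the figures):

```python
import numpy as np

def cone_points(apex, axis, half_angle, distance, n=100):
    """Points at `distance` from the apex whose direction makes the angle
    `half_angle` with `axis`: one ring on the cone of confusion."""
    axis = axis / np.linalg.norm(axis)
    # Pick a helper vector not parallel to the axis, then build an
    # orthonormal pair (u, v) perpendicular to the axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    # Sweep phi around the axis while keeping the angle to the axis fixed.
    dirs = (np.cos(half_angle) * axis
            + np.sin(half_angle) * (np.outer(np.cos(phi), u) + np.outer(np.sin(phi), v)))
    return apex + distance * dirs

# 'Front CoC': source 40 degrees off the axis between the front ears.
front_mid = np.array([0.0, 0.0, 0.0])    # midpoint of the front ears (my choice)
front_axis = np.array([0.0, 1.0, 0.0])   # left-right axis of the front pair
circle = cone_points(front_mid, front_axis, np.radians(40.0), distance=2.0)
print(circle.shape)   # (100, 3): one ring of possible source positions
```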
Click to enlarge; copyright Gert van Dijk
The sound will also be picked up by the hind ears, and we will assume that the animal’s brain performs a similar analysis for the hind ears. There is therefore a new angle, now between the source and the axis through the hind ears, and also a new cone: the ‘hind CoC’.
Click to enlarge; copyright Gert van Dijk
There would be no point in having four ears if the two localisation systems could not be combined. The source of the sound must lie on both cones, so we need to find the parts that the two cones have in common: their intersection. That intersection is, in this case, a nice parabola. It is shown as stopping at the end of the cones, but it should again extend into space along with them. Does this improve localisation? Yes: the set of possible source positions is reduced from the surface of a cone to a curve in 3D.
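One way to make this concrete is to express each cone as a condition: the direction from the midpoint of an ear pair to a candidate point must make the measured angle with that pair's axis. A sketch with made-up ear positions shows that the true source satisfies both conditions, and so, notably, does its mirror image through the plane of the ears:

```python
import numpy as np

# Made-up ear positions for an elongated head (not taken from the figures).
front_l, front_r = np.array([0.3, -0.1, 0.0]), np.array([0.3, 0.1, 0.0])
hind_l, hind_r = np.array([-0.3, -0.1, 0.0]), np.array([-0.3, 0.1, 0.0])
source = np.array([1.0, 0.8, 0.5])   # the little golden globe

def cone_angle(ear_a, ear_b, point):
    """Angle between the axis through an ear pair and the line from the
    pair's midpoint to `point`; this is what the time and loudness
    differences effectively measure."""
    axis = (ear_b - ear_a) / np.linalg.norm(ear_b - ear_a)
    to_point = point - (ear_a + ear_b) / 2.0
    to_point = to_point / np.linalg.norm(to_point)
    return np.arccos(np.clip(axis @ to_point, -1.0, 1.0))

front = cone_angle(front_l, front_r, source)   # defines the front CoC
hind = cone_angle(hind_l, hind_r, source)      # defines the hind CoC

# The mirror image of the source through the plane of the ears produces
# exactly the same two angles, so both cones pass through it as well.
mirrored = source * np.array([1.0, 1.0, -1.0])
print(front, cone_angle(front_l, front_r, mirrored))   # identical
print(hind, cone_angle(hind_l, hind_r, mirrored))      # identical
```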
Click to enlarge; copyright Gert van Dijk
But the brain can do more with the information already available. We can also combine left and right ears. In the drawings the sound source was placed at the animal's right side, so let's use the right ears, with the source nearer the front ear than the hind ear. The principles are exactly the same: we draw the axis running through the two ears, determine the angle between the line to the source and that axis, and rotate it around the axis: the 'right CoC'. In this example the cone is a bit more difficult to visualise because the source lies between the two ears involved, but the principle remains the same. If we combine the new cone with the parabola we already had, we find that the solution is now reduced to just two points.
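A brute-force way to check the two-point claim: grid-search over space and keep only the points whose three cone angles all match the measured ones. This is just a sketch with the same made-up ear layout as above and a coarse grid, not a suggestion for how a brain would do it:

```python
import numpy as np
from itertools import product

# The same made-up ear layout as in the previous sketch.
ears = {'FL': np.array([0.3, -0.1, 0.0]), 'FR': np.array([0.3, 0.1, 0.0]),
        'HL': np.array([-0.3, -0.1, 0.0]), 'HR': np.array([-0.3, 0.1, 0.0])}
source = np.array([1.0, 0.8, 0.5])

def cone_angle(ear_a, ear_b, point):
    axis = (ear_b - ear_a) / np.linalg.norm(ear_b - ear_a)
    d = point - (ear_a + ear_b) / 2.0
    return np.arccos(np.clip(axis @ d / np.linalg.norm(d), -1.0, 1.0))

pairs = [('FL', 'FR'), ('HL', 'HR'), ('FR', 'HR')]   # front, hind and right CoCs
measured = [cone_angle(ears[a], ears[b], source) for a, b in pairs]

# Keep grid points whose three cone angles all agree with the measurements.
grid = np.linspace(-2.0, 2.0, 41)
hits = []
for x, y, z in product(grid, repeat=3):
    p = np.array([x, y, z])
    if np.linalg.norm(p) < 0.5:
        continue   # skip points inside the head, where the angles blow up
    errors = [abs(cone_angle(ears[a], ears[b], p) - m)
              for (a, b), m in zip(pairs, measured)]
    if max(errors) < 0.02:   # tolerance in radians
        hits.append(p)

# Only two spots survive: the true source at (1.0, 0.8, 0.5) and its
# mirror image below the plane of the ears at (1.0, 0.8, -0.5).
print(np.round(hits, 2))
```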
I stopped here, even though the 'left CoC' could still be added. You could also form two diagonal combinations, meaning the left front ear with the right hind ear, and the right front ear with the left hind ear. With four ears there are therefore six possible CoCs, and together these should be enough to solve the problem. However, I doubt that the animal's brain would perform all these calculations one after another; there is probably a smarter way to get the correct answer.
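The count of six is just the number of ways to pick two ears out of four:

```python
from itertools import combinations

ears = ['front left', 'front right', 'hind left', 'hind right']
for pair in combinations(ears, 2):
    print(pair)
print(len(list(combinations(ears, 2))))   # 6 possible ear pairs, so six CoCs
```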
The position of the ears could be more creative too. In the example above the four ears all lay in one plane, but that is not necessary. Suppose that the front ears lie on a horizontal line, so there would be a front left and a front right ear, as in the example. But the hind ears might lie on a vertical line, yielding an upper hind ear and a lower hind ear. The four ears would then lie on the corners of a tetrahedron. There may also be other ways to improve the localisation of sounds, so perhaps this will not be the last post on sound localisation.
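The appeal of the tetrahedron can be shown in a few lines: with four coplanar ears every interaural axis lies in one plane, so the mirror ambiguity through that plane survives, whereas the six axes of a tetrahedral arrangement span all three dimensions. The coordinates below are illustrative only:

```python
import numpy as np
from itertools import combinations

# Illustrative tetrahedral layout: the front ears on a horizontal line,
# the hind ears on a vertical line, as described in the text.
ears = [np.array([0.3, -0.1, 0.0]),    # front left
        np.array([0.3, 0.1, 0.0]),     # front right
        np.array([-0.3, 0.0, 0.1]),    # upper hind
        np.array([-0.3, 0.0, -0.1])]   # lower hind

axes = np.array([a - b for a, b in combinations(ears, 2)])
# Rank 3 means the six interaural axes point in genuinely 3D directions,
# so no single mirror plane leaves all six cones of confusion unchanged.
print(np.linalg.matrix_rank(axes))   # 3
```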
4 comments:
Most definitely food for thought; thanks for this informative and interesting chapter.
It definitely helps me better visualize and grasp the options and the workings of it all.
Hmm...perhaps the early tetrapods that invaded the land were not big enough to need a second pair of ears (perhaps formed from part of the lateral line)?
-Anthony C. Docimo.
Anthony: I doubt that there was ever an anatomical starting point to allow multiple ears in land-living vertebrates. In that sense it is like the number of limbs or eyes. In speculative biology you find that people discuss the supposed superiority of four or six legs, but I have not yet seen such discussions about the number of ears. Perhaps people thought of that as a constant, not as a variable.
There is a vaguely plausible way that multiple ears could evolve that gives directionality in a slightly different way. It's based on the concept of a leaky wave antenna, which I've used in radar. It's typically used to transmit (including acoustic signals), but the same concept works in reverse.
Basically, it is a hollow waveguide with an array of apertures at regular intervals and a single receiver at one end. When a wave is incident upon the array, a series of waves (one per aperture) will propagate within the waveguide with a constant phase difference between them. This phase difference is a function of both frequency and angle of incidence.
The end result is that constructive interference occurs in the waveguide such that the frequency of the signal reaching the receiver is related to the angle of incidence. This allows a single ear with a set of frequency-dependent receptors to determine angle in 1D without complicated processing. The same concept can be extended to 2D with a 2D array of apertures.
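To illustrate the idea (not the commenter's actual design), here is a toy model in which contributions from adjacent apertures add in phase when their relative delay is a whole number of periods. The aperture spacing, the in-guide speed, and the outside sound speed are all assumed values; the point is only that each angle of incidence picks out its own frequency:

```python
import numpy as np

C_AIR = 343.0     # m/s, speed of sound outside the guide (illustrative)
V_GUIDE = 250.0   # m/s, assumed slower effective speed inside the guide
SPACING = 0.01    # m, assumed distance between adjacent apertures

def constructive_frequency(angle_rad, order=1):
    """Lowest frequency at which neighbouring apertures add in phase at
    the receiver. The relative delay between neighbours is the internal
    travel time over one spacing minus the external arrival stagger;
    constructive interference needs that delay to be a whole number of
    periods. Valid here for angles from 0 to 90 degrees."""
    delay = SPACING / V_GUIDE - SPACING * np.sin(angle_rad) / C_AIR
    return order / delay

for degrees in (0, 30, 60, 90):
    print(degrees, round(constructive_frequency(np.radians(degrees))), 'Hz')
# Each direction maps to its own frequency (about 25, 39, 68 and 92 kHz
# with these numbers), so the spectrum at the receiver encodes the angle.
```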
Note that this does mean that pure tones would be inaudible unless they came from the correct direction. However, this wouldn't be a problem in the natural world, as sounds of interest would cover a wide frequency range.
It isn't really fundamentally different from how the pinna works, but it's relatively easy to see how a spiracle-based hearing system could produce this. With the addition of a bigger brain it could even evolve into a complicated acoustic camera if each aperture became an independent ear.
This is such a nice post