Funny how we are, for the most part, really good at knowing where sounds are coming from. And it is funny because the ear provides the brain with no direct information about where sound sources actually sit in space. Instead, the brain makes use of what happens to the sound as it reaches both ears by virtue of, well, being a sound wave, and of the fact that we have two ears separated in space.
Imagine a sound coming from straight ahead: it will arrive at the two ears at the same time. But if it is coming from the right it will arrive at the right ear first, and at the left ear a wee bit later. This ‘time difference’ depends on the speed of sound in air and on how far apart our ears are. What's more, as the sound source moves from the far right towards the front of the head, the time difference becomes smaller and smaller, until it is zero at the front. If one could put a microphone in each ear, one could reliably predict where the sound comes from by measuring that time difference. And this is exactly what a group of neurons in the brain does.
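That geometry can be put into a few lines of code. The sketch below (in Python) uses the simplest possible model, where the time difference is just ear separation times the sine of the source angle, divided by the speed of sound; the ear spacing and the neglect of the head's acoustic shadow are my simplifying assumptions, not numbers from the paper.

```python
import math

def itd_seconds(azimuth_deg, ear_distance_m=0.18, speed_of_sound=343.0):
    """Interaural time difference for a source at a given azimuth.

    Toy model: ITD ~ d * sin(theta) / c, ignoring head shadow and
    diffraction. 0 degrees = straight ahead, 90 = directly to the right.
    """
    return ear_distance_m * math.sin(math.radians(azimuth_deg)) / speed_of_sound

# Straight ahead the delay is zero; from the far right it approaches
# d / c, roughly half a millisecond for human-like ear spacing.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {itd_seconds(az) * 1e6:6.1f} microseconds")
```

Note how tiny these numbers are: the whole usable range is well under a millisecond, which is why the timing bookkeeping discussed below matters so much.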
Easy enough? Not quite.
The way the brain works, things on the left side of our body are mapped onto the right side of our brain, and things on the right side of our body onto the left side. So the ‘time comparison’ neurons on the right side of the brain deal mainly with sound coming from the left (and the neurons dealing with sound from the right sit on the left side of the brain). But to do the time comparison these neurons need information from both ears, not just from one side!
This raises a conundrum: the neural path that information from the left ear travels to reach the same (left) side of the brain will inevitably be shorter than the path travelled by information coming from the other side of the head. So how does the brain overcome this mismatch?
And here is where having paid attention at school during the “two trains travelling at the same speed leave two different stations blah blah blah” maths problem finally pays off. When a sound comes from the front, the information arrives at each ear at the same time. It also arrives at the first station in the brain (nucleus magnocellularis) at the same time. But the time comparison neurons need information from both ears, and the path the information must travel from the right side to the time comparison neurons in nucleus laminaris on the left side (red arrow in figure 1) is longer than the path from the same side (blue arrow in figure 1).
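The two-trains arithmetic is easy to make concrete. With made-up path lengths and a single shared conduction speed (these are illustrative numbers I've picked, not the lengths measured in the paper), the signal on the crossed path simply arrives late:

```python
# Toy numbers, purely illustrative (not the measured values from the paper):
# the crossed (red) path from the opposite ear is longer than the
# direct (blue) path from the same-side ear.
direct_path_mm = 1.0   # hypothetical length of the same-side (blue) axon
crossed_path_mm = 3.0  # hypothetical length of the crossed (red) axon
speed_m_per_s = 5.0    # one shared conduction velocity for both axons

def travel_time_us(length_mm, speed):
    """Conduction time, in microseconds, along an axon of given length."""
    return (length_mm / 1000.0) / speed * 1e6

lag = travel_time_us(crossed_path_mm, speed_m_per_s) - travel_time_us(direct_path_mm, speed_m_per_s)
print(f"crossed signal arrives {lag:.0f} microseconds late")
```

With these toy values the lag comes out at hundreds of microseconds, on the same order as the entire range of interaural time differences, so the comparison would be hopelessly skewed.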
However, when you look into an actual brain, things are not so straightforward (sorry for the pun). The axons from nucleus magnocellularis that go to the time comparison neurons on the same side of the brain take a rather roundabout route (as in figure 2). And for a long time it was assumed that this roundabout route was enough to make the signals from the left and right sides arrive at about the same time.
Easy enough? Not quite.
When Seidl, Rubel and Harris actually measured the length of the axons (red and blue), they found that there was no way the information could arrive at about the same time: the system simply could not work within the biological range of conduction speeds. But the problem could be overcome (back to the old school problem) by having the two trains (action potentials, rather) travel at different speeds. And this is something neurons can do relatively easily, in two ways. One is to change the girth, or diameter, of the axon. The other is to regulate how the axon is myelinated. Myelin forms a discontinuous insulating wrap around the axon, interrupted at what are called the nodes of Ranvier. The closer together the nodes of Ranvier are, the slower the action potential travels down the axon.
What the group found was that both axon diameter and myelination pattern differed between the direct (blue) and crossed (red) axons. When they then calculated how long it would take the action potentials from both sides to reach the time comparison neurons in nucleus laminaris, adjusting the speed for the differences between the two axons, they found that, yup, that pretty much solved the problem.
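One way to see the fix is that, for the arrival times to match, the conduction speeds only need to scale with the path lengths. Continuing the same toy example (again, hypothetical numbers, not the measured values from the paper):

```python
# Hypothetical path lengths and speed, chosen for illustration only.
direct_path_mm = 1.0
crossed_path_mm = 3.0
direct_speed = 5.0  # m/s, assumed speed along the same-side (blue) axon

# Equal arrival times require: crossed_length / crossed_speed ==
# direct_length / direct_speed, i.e. the speed ratio matches the
# length ratio.
crossed_speed = direct_speed * (crossed_path_mm / direct_path_mm)

t_direct = (direct_path_mm / 1000.0) / direct_speed * 1e6    # microseconds
t_crossed = (crossed_path_mm / 1000.0) / crossed_speed * 1e6
print(f"direct: {t_direct:.0f} us, crossed: {t_crossed:.0f} us")
```

In this sketch the longer crossed axon simply has to conduct proportionally faster, which is exactly the kind of adjustment that a larger diameter and more widely spaced nodes of Ranvier can deliver.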
Easy enough? Quite.
Like the authors say:
The regulation of these axonal parameters within individual axons seems quite remarkable from a cell biological point of view, but it is not unprecedented.
But remarkable indeed, considering that this regulation needs to adjust to a very high degree of temporal precision. I have always used the train analogy when I lecture about sound localisation, and always assumed equal speed on both sides. Seidl, Rubel and Harris’ work means I will have to redo my slides to incorporate differences in speed. Hope my students don’t end up hating me!
Seidl, A., Rubel, E., & Harris, D. (2010). Mechanisms for adjusting interaural time differences to achieve binaural coincidence detection. Journal of Neuroscience, 30(1), 70-80. DOI: 10.1523/JNEUROSCI.3464-09.2010