
My mother used to call out to me in the house as a kid. I’d ask where she was and she’d reply “here”.

It drove me nuts.

You see, when you can only hear in one ear it’s hard to tell the direction a sound comes from and it’s frustrating to be asked to locate someone by sound alone. It’s particularly hard if the sound is coming from above or below you.

After too much of this I would whine “where’s here”. Part of me figured she’d cotton on that way…

A few weeks ago, it was Deaf Awareness week. I’m deaf or hard of hearing, depending on where you draw the line. One ear is totally deaf; the other has a 50-60 dB loss, which is about half of normal hearing. With me, deaf awareness week continues all year!

I’m not an expert on the science of deafness. I’ve got my own scientific interests and I’m not inclined to pore over the details of my “disability”. (I’m in two minds about that term, too.) I thought I might use this blog as an excuse to learn a little more about aspects of hearing and deafness, and pass on a little of what I learn. The early articles will not go into depth, but will lay out the scene as context for more detailed posts later (assuming there is interest).

Try this game with a friend. Stand outside and plug one ear. Get a friend to go inside and hide a ringing cell phone, cordless phone or alarm. The lower-pitched the alarm tone, the better. Now go inside and try to locate the device by sound, with one ear plugged. (Let me know how you get on.)

Locating direction by sound is more indirect than seeing an object. When you look at an object, the location of its image on the retina of your eyes relates directly to where the object is. Sounds carry no direct record of an object’s placement like this. Instead, your brain uses a number of clever tricks to determine the source of a sound by analysing the properties of the sounds arriving at your ears.

The simplest way of locating sound uses the screening effect of your head or body. The signals received by the two ears are compared in your brain. Sounds coming from one side of you are very slightly louder at, and arrive a fraction of a millisecond earlier to, the ear nearest the sound¹. It’s really impressive that your brain can pull this off. Your neurons (nerve cells) are much, much slower than sound waves, yet they still manage to compare these signals “in real time”. On top of that, there are all sorts of sounds arriving at your ears at once, yet your brain still manages to match up the appropriate sounds. It gets around this by comparing a range of different delays at the same time, in parallel.
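As a rough software analogy (my own illustration, not something taken from the reference below), testing many candidate delays and keeping the one where the two signals line up best is what a cross-correlation does. Here is a minimal Python sketch using made-up “ear” signals, an assumed 44.1 kHz sample rate and an assumed 30-sample delay (about 0.7 ms, roughly a worst-case interaural delay):

```python
import numpy as np

# Toy illustration: estimate the delay between "left ear" and "right ear"
# signals by testing every candidate delay (a cross-correlation), loosely
# analogous to the brain comparing a range of delays in parallel.

fs = 44_100                      # assumed sample rate, samples per second
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
true_delay = 30                  # assumed delay in samples (about 0.7 ms)

rng = np.random.default_rng(0)
source = rng.standard_normal(t.size)        # a noisy sound source
left = source                               # nearer ear hears it first
right = np.roll(source, true_delay) * 0.9   # farther ear: later and slightly quieter

# Score every candidate delay and keep the one with the best match.
corr = np.correlate(right, left, mode="full")
lags = np.arange(-left.size + 1, left.size)
estimated_delay = lags[np.argmax(corr)]

print(f"estimated delay: {estimated_delay} samples "
      f"({estimated_delay / fs * 1e6:.0f} microseconds)")
```

Run as written, this recovers the 30-sample delay; the brain, of course, does something far more elaborate, but the idea of evaluating many delays side by side is the same.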

The difference in intensity between your two ears is particularly strong for higher-pitched sounds (sounds with shorter wavelengths). The “acoustic shadow” your head casts, which affects how loud a sound is at each ear, depends on the frequency of the sound; low-pitched sounds are harder to locate this way².
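A quick back-of-the-envelope calculation shows why (the 0.18 m head width below is my own rough figure): low-pitched sounds have wavelengths several times wider than your head, so they bend around it and cast little shadow, while high-pitched sounds are short enough for the head to block noticeably.

```python
# Compare sound wavelengths with head size (illustrative numbers only).
speed_of_sound = 343.0   # metres per second, in air at room temperature
head_width = 0.18        # metres, roughly ear to ear (assumed figure)

for frequency in (250, 500, 2_000, 8_000):   # hertz
    wavelength = speed_of_sound / frequency
    print(f"{frequency:>5} Hz -> wavelength {wavelength:.2f} m "
          f"({wavelength / head_width:.1f} x head width)")
```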

Picking out the vertical location of a sound relies more on the path the sound takes to reach your ears. Sounds that don’t go directly into your ear canal (ear hole) first reflect off your pinna, the fleshy outer part of your ear that sticks out on the side of your head. When sound reflects off these surfaces, the sound waves are altered slightly. Sound waves that have travelled different routes to your inner ear arrive at slightly different times; when they are superimposed, the result carries information your brain can use to determine the vertical location of the sound.
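To give a feel for what superimposing a direct and a reflected copy does, here is a toy model of my own (not a description of real pinna acoustics): adding a sound to a slightly delayed, quieter copy of itself boosts some frequencies and cancels others, and the pattern of dips shifts as the extra path length changes. The extra path length and reflection strength below are assumed values purely for illustration.

```python
import numpy as np

# Toy model: direct sound plus a delayed, quieter reflection.
speed_of_sound = 343.0                  # metres per second
extra_path = 0.02                       # metres of extra travel (assumed)
delay = extra_path / speed_of_sound     # about 0.06 milliseconds
reflection_gain = 0.6                   # reflected copy is quieter (assumed)

frequencies = np.linspace(200, 16_000, 5)
# Relative level of (direct + delayed reflection) for a pure tone of each frequency.
response = np.abs(1 + reflection_gain * np.exp(-2j * np.pi * frequencies * delay))

for f, r in zip(frequencies, response):
    print(f"{f:>7.0f} Hz: relative level {r:.2f}")
# Dips appear near odd multiples of 1/(2 * delay), here around 8.6 kHz,
# in the frequency range where the outer ear's filtering matters most.
```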

Without hearing in both ears, you don’t have the two sources of input needed to work out where a sound is coming from.

Another benefit of hearing in both ears is that you are better able to pick out and follow a sound you’re interested in against a background of other noise. People who can hear well in both ears can usually follow a conversation in a noisy place fairly well, provided there is not too much similar or louder sound coming from the same direction as the speaker they’re interested in. If the competing conversations and noises come from different directions, people who can hear in both ears can use their binaural hearing to “track” the conversation they want to hear by selectively concentrating on the sounds coming from a particular source.

Those without hearing in one ear find this hard. Next time you’re in a pub or noisy restaurant, try plugging one ear for a while and see how hard it is to follow the conversation. You should find that rooms with a lot of echo are particularly bad.

This particular loss can socially isolate people, leaving them only able to sit and nod in group conversations.

Modern hearing aids are very sophisticated (they’re essentially miniaturised digital signal processing computers) and have a number of ways of trying to assist hearing in noisy environments, including directional microphones and “voice tracking” technology. Next I hope to look briefly at the technology used to aid those with a hearing loss, before looking at how our brains process sounds in more detail.

References

1. Jan W. H. Schnupp & Catherine E. Carr (2009) On hearing with more than one ear: lessons from evolution. Nature Neuroscience 12(6): 692. DOI: 10.1038/nn.2325 (You’ll need a subscription to Nature Neuroscience or access to a university library to read this; it’s readable for most non-specialists.)

2. A short introduction to locating a sound source can be found at Jeroen Breebaart’s website. Jeroen is a Principal Scientist at Philips Research, who previously worked on digital signal processing. (For the curious, browsing the Philips Research website makes for some interesting reading about new technologies under development.)

Notes

1. The difference in timing of the two sounds is called the interaural time delay (ITD) and the difference in intensity (roughly, loudness) of the sounds is called the interaural intensity difference (IID). Working out the timing of two different sounds and inferring direction from it involves understanding a bit about sound waves and some other issues. I hope to pick these up in a later post; for now I want to keep it simple.

2. This effect starts to be noticeable at about 500 Hz and is clearest for sounds higher-pitched than about 2 kHz. kHz is short for kilohertz. Hertz is a measure of frequency, how often a thing “cycles” per second. You’ll have heard of the speed of a computer being described in gigahertz (billions of cycles per second). 2 kHz means that the waves of the sound cycle 2 thousand (kilo) times a second. Human hearing typically covers the range 20 Hz to about 16,000 Hz (or 16 kHz).

Related posts:

Things I didn’t write about today: tinnitus …

Automatic video captions for YouTube

Note added after release:

The estimate of the range of frequencies human hearing covers varies a little from source to source. I have always taken it as being from 20 Hz to 20 kHz, in part for the very non-scientific reason that “20-to-20” is easy to remember! My main source indicated an upper limit of 16 kHz, so I deferred to that estimate. It has been pointed out to me that this may be low: since older people tend to lose hearing at the upper end of the frequency range, a survey of a whole population will have fewer people with good hearing in the upper range. (Thanks Fabiana.)

© Grant Jacobs, all rights reserved.