An Amp Guru – Music Synthesist’s Perspective on Deafness

Let me give you what I know about the science of sound. Sound is the compression and rarefaction of an elastic medium, such as air. Human hearing spans roughly 20 Hz to 20 kHz, and sound travels through air at about 340 meters per second (the exact speed depends on temperature). An individual sound is known as an event; the syllables of a word are separate events. Each event consists of a fundamental frequency and harmonics of that frequency.
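To make the fundamental-plus-harmonics idea concrete, a waveform can be sketched by summing sine waves. This is a minimal Python/NumPy illustration; the 220 Hz pitch and the harmonic weights are arbitrary choices of mine, not values from any acoustics reference:

```python
import numpy as np

SAMPLE_RATE = 44100  # CD-quality samples per second

def tone(fundamental_hz, harmonic_amplitudes, duration_s=1.0):
    """Sum a fundamental and its integer harmonics into one waveform."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    wave = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amplitudes, start=1):
        wave += amp * np.sin(2 * np.pi * n * fundamental_hz * t)
    return wave / np.max(np.abs(wave))  # normalize to the range [-1, 1]

# One "event": a 220 Hz fundamental with weaker 2nd and 3rd harmonics
event = tone(220.0, [1.0, 0.5, 0.25])
```

Changing the relative weights of the harmonics is a large part of what makes two instruments playing the same note sound different.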

http://www.rkm.com.au/ANIMATIONS/animation-sine-wave.html

The fundamental frequency is filtered by its delivery system. In other words, the sound of a violin is generated by the strings, but filtered by the body of the violin. That’s why a violin sounds different from a guitar. The filtering is broken down into two components – the cutoff frequency and the resonance. The former is the frequency above which (for a low-pass filter) or below which (for a high-pass filter) sound is attenuated. The latter is an emphasis of frequencies near the cutoff, which adds harmonic color.
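The cutoff-and-resonance idea can be sketched with a textbook two-pole low-pass filter. The coefficients below follow the widely used RBJ “Audio EQ Cookbook” formulas; the numbers are generic, not those of any particular instrument body. Raising `q` boosts frequencies near the cutoff, which is the resonance:

```python
import math

def lowpass_biquad(samples, cutoff_hz, q, sample_rate=44100):
    """Two-pole resonant low-pass (RBJ cookbook coefficients).
    cutoff_hz sets where the rolloff begins; q sets the resonance,
    i.e. how strongly frequencies near the cutoff are emphasized."""
    w0 = 2 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = (1 - cos_w0) / 2
    b1 = 1 - cos_w0
    b2 = (1 - cos_w0) / 2
    a0 = 1 + alpha
    a1 = -2 * cos_w0
    a2 = 1 - alpha
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:  # direct-form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

With the cutoff at 500 Hz, a 100 Hz tone passes almost untouched while a 5 kHz tone comes out strongly attenuated.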

Finally, every sound event consists of two envelopes – amplitude and frequency. Each envelope has four portions: attack, decay, sustain and release. Take for example the sound of a bass drum versus the sound of a pipe organ. The bass drum has a nearly instantaneous attack – the sound is at its greatest amplitude immediately after the head is struck. A very short decay follows, there is almost no sustain, and the ring-out at the end of the event is the release. The organ, on the other hand, climbs to its loudest point, has no noticeable decay, sustains almost indefinitely and slowly fades out in its release. Many instruments also experience pitch changes during their events, and the frequency envelope governs those.
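Those four portions can be sketched as a piecewise-linear envelope generator. This is a toy Python sketch; the drum and organ timings below are illustrative guesses, not measurements:

```python
def adsr(attack, decay, sustain_level, sustain, release, rate=1000):
    """Build a piecewise-linear amplitude envelope.
    attack/decay/sustain/release are durations in seconds;
    sustain_level is the amplitude held during the sustain portion."""
    env = []
    n = int(attack * rate)   # attack: climb from 0 to peak
    env += [i / n for i in range(n)]
    n = int(decay * rate)    # decay: fall from peak to sustain level
    env += [1 - (1 - sustain_level) * i / n for i in range(n)]
    env += [sustain_level] * int(sustain * rate)  # sustain: hold
    n = int(release * rate)  # release: fade from sustain level to 0
    env += [sustain_level * (1 - i / n) for i in range(n)]
    return env

# Bass-drum-like: near-instant attack, quick decay, no sustain
drum = adsr(attack=0.001, decay=0.1, sustain_level=0.0, sustain=0.0, release=0.2)
# Organ-like: audible attack, no decay, long sustain, slow release
organ = adsr(attack=0.05, decay=0.0, sustain_level=1.0, sustain=1.0, release=0.5)
```

Multiplying a waveform sample-by-sample with such an envelope is exactly what a synthesizer's envelope generator does to its oscillator output.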

http://www.sonicstate.com/news/2009/07/03/marshall-stack-for-the-vertically-challenged/

What does this have to do with the Deaf?

Well, I’ve spent years synthesizing sound and hand building the machines that create or amplify it. Now, I’m on a different mission – the inverse. I’m trying to understand what exactly goes wrong with those ears that don’t work right.

Today I had a wonderful and informative meeting with Marsha Graham of – among others – AnotherBoomerBlog. Among the many things we discussed were hearing aids and a few of the different symptoms suffered by the Hard of Hearing. It was an enlightening experience for me. When a hearing person thinks of deafness, he tends to think in all-or-nothing terms: you just plain can’t hear, or you can hear but the volume’s really low.

That’s not the case. Many Deaf and Hard of Hearing can hear, but only at certain frequencies. Often they hear, but their brains scramble the sounds. In other cases, they are unable to tune out certain noises while tuning in others. When the hearing speak in a crowded room, or on a city street, our ears – and our brains – filter out the unnecessary background noise. Many Hard of Hearing don’t have that filtering capability.

Therefore, hearing aids must employ much more sophistication than one might think. A hearing aid must be much more than simply a tiny microphone connected to a tiny amplifier. It needs to be capable of shifting frequencies, adding or removing filtering and altering envelope shapes. As I become more involved with the Deaf community, I find myself relying more and more on what I learned in its antithesis – music.
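As a deliberately naive illustration of the frequency-shaping part alone, one could apply a different gain to each frequency band, a bit like a prescription equalizer. The band edges and dB values below are invented for the example, and a real hearing aid does far more than this (dynamic-range compression, noise reduction, frequency lowering) in real time:

```python
import numpy as np

def apply_band_gains(signal, band_gains_db, sample_rate=16000):
    """Boost each frequency band of `signal` by a prescribed gain in dB.
    A crude stand-in for the frequency shaping a hearing aid performs."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (lo_hz, hi_hz), gain_db in band_gains_db.items():
        band = (freqs >= lo_hz) & (freqs < hi_hz)
        spectrum[band] *= 10 ** (gain_db / 20.0)  # dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

# Hypothetical prescription: more boost where the (fictional) loss is worse
prescription = {(250, 1000): 5.0, (1000, 4000): 15.0, (4000, 8000): 25.0}
```

A 3 kHz tone run through this prescription comes out about 15 dB (a factor of roughly 5.6 in amplitude) louder, while content below 250 Hz is left alone.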

4 thoughts on “An Amp Guru – Music Synthesist’s Perspective on Deafness”

  1. Psst – David, it is anotherboomerblog.wordpress.com 🙂

    And guess what? Most of the HoH and Deaf/deaf like music. Most individuals who are “deaf” are not 100% deaf. They have some small amount of residual hearing that can be augmented by hearing aids or cochlear implants or rare brain stem implants. This allows the brain to hear environmental sounds – the boom of drums, the sound of clashing cymbals, even the general sound of music (although probably not individual voices).

    That was not the case with my first boyfriend, who was born without auditory nerves. He had zero hearing. But he loved music that was loud and had bass because he could feel the vibration.

    I prefer country music when I listen to music because I can often understand words. Rap? Never mind… Rock and Roll screamers? Not so much. Instrumental is nice, but I get tired of classical – “Wipeout” is good.

    I have to watch it, though, with words and songs; one song said “F you” and that’s not what I heard. It had a nice melody, though. My daughter was mortified I was humming it.

    And yes, hearing is complicated. As you found out, I voice very well having had years of lessons – I even used to sing classical music in high school. OTOH I can get into situations where sound is totally beyond my capacity to handle and I just shut down unless there is someone to interpet for me. In fact, at the end of every day I pretty much have a “sound headache” from all the noise and the only way to cure it is to turn off the aid and let my brain rest.

    The digital aids now are fantastic. Since I’ve lost the top of my “hearing banana” they actually grab those sounds and place them in an area of the hearing spectrum I have left. It has taken me a little while to get a handle on this – when my grandson (6) is in the car behind me and starts talking my hearing aid grabs his voice and amplifies it so that it sounds as if he’s bellowing next to my eardrum – it is as if he’s turned into James Earl Jones at high volume. I have no explanation for it.

    Once I had hearing aids that turned my car door alarm (that beep that says the door is open) into a three part major harmonic chord. Go figure.

    Many of my HoH friends (some oral deaf) have a “deaf accent” which is a non-resonant voice. Even when I go total, I know how to tell if I have voice resonance by feeling the bridge of my nose. Those voice lessons were worth it after all, I guess. 🙂

    It was great visiting with you. I am fascinated by your knowledge of sound!


  2. (Doh!) OK. I fixed it. The link is good. 🙂

    What you said about your grandson makes sense. If the hearing aid shifts frequencies from the range you can’t hear into the range you can, some of those sounds – for example the high pitch of a child’s voice – will obviously arrive with excessive amplitude. It brings a comparison to mind.

    Of the amps I worked on, my favorites were always bass amps. This is because bass amps need to be orders of magnitude more powerful than lead or rhythm guitar amps. I’ve seen cases where a guitarist with nothing more than a 50 watt amp could easily drown out a bassist playing through a 350 watt one.

    Humans (with normal hearing) have a sensitivity peak between 1 and 3 kHz. That peak is more dominant in males, because our ears are tuned to your voices. Yours are tuned to the frequencies of a crying baby. It is much harder for people to perceive a sound at, say, 30 Hz than one at 1.5 kHz. Hence, guitars cut through a room like razors, while the poor bass player struggles to be heard at all.
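That frequency-dependent sensitivity is roughly what the standard A-weighting curve describes. A small sketch, using the IEC 61672 formula, shows how much quieter a 30 Hz tone seems than a 1.5 kHz tone of the same physical level:

```python
import math

def a_weighting_db(freq_hz):
    """A-weighting level (dB relative to 1 kHz) per the IEC 61672 formula,
    approximating the ear's sensitivity at moderate listening levels."""
    f2 = freq_hz * freq_hz
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.0

# A 30 Hz bass tone reads tens of dB below a 1.5 kHz guitar tone
# of the same physical sound pressure.
```

That gap of several tens of dB is why the bass player needs so much more amplifier power just to keep up.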


Agree? Disagree? Please speak up.
