Music the Way it Was Meant to be Heard

In the past, hearing aids processed music and speech similarly, often degrading the clarity and quality of music in an effort to make speech clearer. Patients would complain that music sounded “muted, distant, fuzzy, dull, muddy, thin, steely [or] compressed.”

Why does music need a unique prescription?

Music signals have acoustic features that differ from speech signals: different dynamics and different spectral characteristics. The goals of listening to speech also differ from the goals of listening to music. Thus, in the past, hearing aid speech processing often conflicted with music listening goals, degrading musical quality and making listening frustrating for those with hearing loss.

New Starkey technology processes speech and music independently

All that has changed with our new Synergy platform and Acuity OS operating system technology in our Muse, Halo 2 and SoundLens Synergy hearing aids. For the first time ever, hearing aid wearers can hear music the way it’s meant to be heard.

Synergy is the first hearing aid platform to use twin compressor technology. Only with twin compressors have we been able to process speech and music simultaneously yet independently. With a suite of features designed specifically for music, music comes through clear, crisp and enjoyable. A higher sampling rate also extends the hearing aid’s bandwidth up to 10 kHz, enabling music to sound richer and fuller.
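Starkey does not publish the internals of its twin compressor design, but the general idea of running two independently tuned compressors side by side can be sketched as follows. All thresholds, ratios, and the speech/music routing decision below are illustrative assumptions for this sketch, not Starkey’s actual parameters:

```python
class Compressor:
    """Static compression curve: gain (in dB) as a function of input level (in dB SPL)."""

    def __init__(self, threshold_db, ratio):
        self.threshold_db = threshold_db  # level where compression starts (the "knee")
        self.ratio = ratio                # input-dB : output-dB slope above the knee

    def gain_db(self, level_db):
        if level_db <= self.threshold_db:
            return 0.0  # linear (unity-gain) region below the knee
        # Above the knee, output rises only 1/ratio dB per input dB,
        # so the compressor applies an increasingly negative gain.
        return (self.threshold_db - level_db) * (1.0 - 1.0 / self.ratio)


# Two independently tuned compressors running in parallel:
# a strong one for speech clarity, a gentle one that preserves music dynamics.
speech_comp = Compressor(threshold_db=50.0, ratio=3.0)  # illustrative values
music_comp = Compressor(threshold_db=80.0, ratio=1.5)   # illustrative values


def process(level_db, is_music):
    """Route the signal level through the compressor matched to its class."""
    comp = music_comp if is_music else speech_comp
    return level_db + comp.gain_db(level_db)
```

With these example settings, a loud 90 dB passage is barely compressed on the music path but strongly compressed on the speech path, which is the point: the same input can be handled two different ways at once.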

How we did it: research and process

Hearing aids have been successful in improving speech intelligibility, but music dynamics are much broader and more slowly varying than those of speech. Spectral variation is also wider and more significant to the perception of music than speech. And unlike speech, which is largely single-source and often mixed with distracting environmental sounds, music is inherently multi-source and rarely embedded in noise.

We worked closely with musicians and current hearing aid wearers

To enable our hearing aids to process music uniquely, our scientists and technicians worked closely with musicians and current hearing aid wearers to determine the best way to run the complex algorithms that provide high-definition music reproduction.

“Our patients have told us they want to hear music better with their hearing aids, so over the last few years, our goal was to design something to provide listening enjoyment for people who enjoy music at home and that can also perform for musicians who are in high-demand musical situations,” Principal Research Engineer Kelly Fitz said.

Our engineering team knew in the beginning that they’d have to design something completely new and unique for music. With input from both current hearing aid wearers and professional musicians, they ultimately found two things:

1. People like it best if they can use a volume knob to adjust the music. People don’t want a lot of frequency shaping or compression, especially for loud music. They just want their music to be natural sounding.
2. Even with a volume knob to adjust sound levels, there are still soft parts of the music that people can’t hear and loud parts that hurt.

“Ideally we wanted to make the sound high-quality and be transparent, but the challenge was that some parts of the music were too loud and other parts they still couldn’t hear,” Fitz said.

We delivered hearing technology for the next generation of hearing aid wearers

The team decided they needed to use the hearing aid’s existing technology to provide little-to-no amplification for the portions of the music that were too loud for listeners and amplify the sounds that were too quiet. In short, the team had to fix what wasn’t working without messing with what was working—a true balancing act.
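The balancing act described above, boosting the soft passages while leaving the loud ones nearly untouched, amounts to a level-dependent gain curve. A minimal sketch follows; the knee points and gain value are assumptions chosen for illustration, not Starkey’s fitting targets:

```python
def music_gain_db(level_db, soft_knee_db=45.0, loud_knee_db=85.0, soft_gain_db=20.0):
    """Level-dependent gain: full boost for soft passages, none for loud ones.

    Between the two knee points the gain tapers linearly, so the
    transition is gradual rather than an audible jump.
    """
    if level_db <= soft_knee_db:
        return soft_gain_db  # soft music: full amplification
    if level_db >= loud_knee_db:
        return 0.0           # loud music: little-to-no amplification
    frac = (level_db - soft_knee_db) / (loud_knee_db - soft_knee_db)
    return soft_gain_db * (1.0 - frac)  # linear taper between the knees
```

For example, a quiet 40 dB passage gets the full 20 dB of gain, a 90 dB passage gets none, and levels in between get proportionally less, leaving the loud parts of the music essentially untouched.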

The Solution:

As mentioned above, music is multi-source and multi-elemental, and processing it well requires treating each sound differently from speech. To enable our hearing aids to process music uniquely, we wrote new code specifically designed for music listening. The result is Synergy and Acuity OS, two new hearing technologies that achieve that delicate balancing act and enable hearing aids that enhance both speech clarity and music listening enjoyment.

Why Use Musicians?

“We used dedicated musicians because not everyone is extremely sensitive to the way music sounds,” Fitz said of choosing a sampling of composers and musicians. “Musicians have more of a vocabulary for what they are hearing, so we figured the fastest way to figure out what people want and don’t want when listening to music is to work with those who are highly sensitive to the way it sounds and can fully articulate what they hear and what they want.”

What you hear can be hard to describe, and music is especially hard to put into words. Another reason for including musicians when building this prescription was that people who play music are exposed to situations that are more demanding of a hearing aid’s ability to process music. “Playing on stage, in an orchestra or band, these are more demanding of their hearing and more importantly, more demanding of their hearing aids,” Fitz said. “It’s their work, what they do for a living, so we wanted to make sure that what we designed could go beyond just listening enjoyment and perform in high-demand musical situations, including musical performances.”