Researchers at Purdue University analyzed how sounds picked up by normal-hearing ears are understood by the brain, and how the fast- and slow-varying components of sound waves each contribute to speech perception.
"Sound can be divided into fast and slow components, and today's cochlear implants provide only the slow varying components that help people with profound hearing loss hear conversations in quiet rooms, but don't allow them to hear as well in busy restaurants," Michael G. Heinz, a professor of speech, language and hearing sciences, said. "It has been thought that the fast varying sound components -- which can't be provided with current cochlear implant technology -- help to hear in noisy environments."
However, the study did not bear this out, he said.
"We found that slowly varying neural components actually play the primary role in helping the brain understand speech in noisy environments," Heinz said. "The critical fast varying acoustic components are actually transformed by the normal-hearing cochlea into slower neural components to ultimately help people hear better."
The study focused on neural processing, examining how the fast- and slow-varying components each contribute to speech perception.
"Some have thought that one component can exist without the other, but now we know this is impossible to achieve in the ear, and this new knowledge can help scientists who are working to improve cochlear implant design," researcher Jayaganesh Swaminathan said.