SAN FRANCISCO, Jan. 30 (UPI) -- Researchers in California say analysis of how speech sounds are identified by the human brain offers insights into the basis of human language.
While scientists have long known where in the brain speech sounds are interpreted, little has been understood about exactly how the process works, neuroscientists at the University of California, San Francisco, said.
Writing in the online journal Science Express, they say they've found the brain does not respond to individual sound segments known as phonemes -- such as the b sound in "boy" -- but is instead exquisitely tuned to detect simpler elements, known to linguists as "features."
"Features" are distinctive acoustic signatures created when speakers move the lips, tongue or vocal cords. For example, consonants such as p, t, k, b and d require speakers to use the lips or tongue to briefly obstruct air flowing from the lungs and then release it in a burst; linguists call these consonants "plosives." Others, such as s, z and v, are grouped together as "fricatives" because they only partially obstruct the airway, creating friction in the vocal tract.
An area of the brain known as the superior temporal gyrus is precisely tuned to respond robustly to these broad, shared features rather than to individual phonemes like b or z, the UCSF researchers said.
This improves the brain's ability to interpret speech, they said: because the articulation of phonemes varies considerably across speakers, and even within a single speaker over time, it is advantageous for the brain to employ a sort of feature-based algorithm to reliably identify them.
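The idea of recognizing a phoneme from a combination of shared features can be sketched in a few lines of code. This is purely illustrative, not the researchers' actual model: the feature labels and the phoneme-to-feature mapping below are simplified textbook examples, and the matching rule (pick the phoneme with the most overlapping features) is an assumption made for the sake of the sketch.

```python
# Illustrative sketch of feature-based phoneme identification.
# The feature inventory below is simplified and hypothetical, not the
# study's actual scheme.

PHONEME_FEATURES = {
    "p": {"plosive", "voiceless", "labial"},
    "b": {"plosive", "voiced", "labial"},
    "t": {"plosive", "voiceless", "alveolar"},
    "d": {"plosive", "voiced", "alveolar"},
    "s": {"fricative", "voiceless", "alveolar"},
    "z": {"fricative", "voiced", "alveolar"},
}

def identify_phoneme(detected_features):
    """Return the phoneme whose feature set best overlaps what was detected."""
    return max(PHONEME_FEATURES,
               key=lambda p: len(PHONEME_FEATURES[p] & detected_features))

print(identify_phoneme({"plosive", "voiced", "labial"}))  # prints "b"
```

Because each phoneme is recovered from a conjunction of features rather than stored as an indivisible unit, a noisy or speaker-varied rendition that still carries most of the right features maps to the right phoneme.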
"It's the conjunctions of responses [to features] in combination that give you the higher idea of a phoneme as a complete object," neuroscientist and lead study author Edward F. Chang said. "By studying all of the speech sounds in English, we found that the brain has a systematic organization for basic sound feature units, kind of like elements in the periodic table."