Our lives in sound
Our lives are filled with sound. On average, Americans listen to music for more than 32 hours a week, according to a 2017 Nielsen study. We spend hours in conversation with co-workers, friends, and family. We hear the everyday sounds of traffic, appliances in our homes, television, athletic events, pets, and a great deal more. We rarely think about what our brains do with all of that – sometimes competing – auditory information.
But according to Dr. Nina Kraus, Director of the Auditory Neuroscience Lab (Brainvolts) at Northwestern University, making sense of sound is one of the most computationally complex tasks we ask our brains to do. Not only is there a staggering amount of information to process (something on the order of 9 million bits of data per second1), but the brain must process it in microseconds in order to respond if necessary.
And making sense of sound is not just a matter of hearing. We may have normal hearing but have trouble processing the sound. Auditory processing refers to the listening skills that make it possible for us to make meaning from sound, to understand what is being said in the classroom, to know from the boss’s tone of voice if she is angry.
Kraus and her colleagues study auditory processing in the brain, and they have found that our everyday sound experiences are a kind of learning process for the brain that shapes or changes our brain’s ability to process sound (neuroplasticity).
Sometimes those changes in the auditory system are positive, as in musicians and bilinguals; sometimes negative, as in hearing loss, aging, HIV, concussion, disabilities, or living in poverty. Kraus writes, “No two people hear the world exactly the same way because acoustic experiences impart enduring biological legacies.” 2
Researchers have known for more than 20 years that music training (learning to play an instrument, not merely learning about music) causes changes in the auditory cortex that are associated with improved auditory skills. That seems obvious: we would expect a musician to have finer pitch and rhythm discrimination than a non-musician because of all that practice. And the amount of change has been found to correlate with the age at which a person began to practice.
But Kraus has found that the fine tuning of the auditory system in musicians transfers to other auditory domains such as speech, language, reading, emotion, and auditory processing, so studying music has a positive impact on other kinds of learning.
What makes Kraus’s work particularly exciting is that she and her lab colleagues practice translational science, meaning their research is grounded in medical, social or educational issues. They conduct cutting-edge research in the lab, but they also take their work into school, community, and clinical settings, applying their discoveries to specific populations.
They become advocates for using their discoveries to improve the health or well-being of individuals, whether the research has to do with the importance of music education in schools, music therapy, or the impact of concussion (her most recent work). The importance of music making and music education has figured prominently in her research, and there is no neuroscientist today who is a more passionate advocate for music education in schools than Nina Kraus.
Why does studying music affect speech processing?
Any sound, whether music, speech, or the train going by, is made up of essentially three acoustic elements: pitch, timing, and timbre. Pitch is determined by frequency, and is on a scale of low to high. Timing refers to the onset and offset of the sound, and timbre is determined by harmonics. Timbre is what makes a clarinet sound different from an oboe, and it is sometimes said that timbre is what is left of a sound that isn't pitch or loudness.
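The pitch/timbre distinction can be made concrete with a toy synthesis sketch. This is purely illustrative and not drawn from Kraus's work: the sample rate, duration, and harmonic weights below are invented for the example, and real instrument spectra are far more complex. The two tones share a 220 Hz fundamental (the same pitch) but carry different harmonic weights (different timbres):

```python
import math

SAMPLE_RATE = 8000  # samples per second; an assumed value for this sketch

def tone(freq, harmonic_weights, duration=0.01):
    """Synthesize a tone as a weighted sum of a fundamental and its harmonics.

    harmonic_weights[k] is the amplitude of the (k+1)-th harmonic,
    i.e. the component at frequency (k+1) * freq.
    """
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        # Sum the fundamental and its integer-multiple harmonics.
        s = sum(w * math.sin(2 * math.pi * freq * (k + 1) * t)
                for k, w in enumerate(harmonic_weights))
        samples.append(s)
    return samples

# Same pitch (220 Hz fundamental), different timbres (invented weights):
clarinet_like = tone(220, [1.0, 0.0, 0.5, 0.0, 0.3])  # odd harmonics dominate
oboe_like     = tone(220, [1.0, 0.8, 0.6, 0.4, 0.2])  # richer harmonic series
```

Both waveforms repeat at the same rate (the same perceived pitch), yet their sample values differ because the energy is distributed differently across harmonics, which is exactly the timbre difference the ear picks up.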
As musicians, we spend years fine-tuning our response to these elements: focusing on slight gradations of pitch to be sure we are “in tune”; listening carefully to the timing cues of those we are playing with to stay as closely “in sync” as possible; and attending to timbre as we practice, whether to produce a certain tone color or to learn techniques that change the timbre of the sounds we produce (e.g., a violinist playing on an open string vs. a fingered one, or altering the pressure or speed of the bow).
And of course, we learn to distinguish the timbre of one instrument from another in an ensemble, even when they are playing the same pitch. The focused attention, the manipulation of sounds held in working memory, and the connection of those sounds to meaning all engage cognitive networks.
Connecting the sounds we want with the motor actions necessary to make those sounds involves sensorimotor networks. And because making music is something most of us love to do, reward systems in the brain are activated.
Kraus has found that when cognitive, sensorimotor and reward systems in the brain are all engaged at the same time, neuroplasticity occurs more readily in the auditory processing system. Making music involves all three systems and has been found to be a particularly strong driver of neuroplasticity.
But those fundamental components of musical sound (pitch, timing, and timbre) are also the fundamentals of speech processing. Pitch tells us whether we are hearing a statement or a question, helps us distinguish one speaker from another, and gives us information about the emotional content of what is being said. In tonal languages such as Mandarin, variations in the pitch of a given word can completely change its meaning.
Timing in speech, the onset of a sound, is necessary for differentiating similar sounds such as “ta,” “da,” and “ga.” And timbre, the harmonics of the voice, gives us information about the speaker. Your mother and your aunt Sally may be speaking at the same volume and at roughly the same pitch, but you can instantly tell one from the other because of the different timbre of their voices.
Because music and speech share auditory neural pathways, and because music requires finer auditory distinctions for pitch, timing, and timbre, the enhancements gained in the auditory system through music making transfer to neural processing abilities necessary for speech, then to listening and language skills.3
In research involving thousands of participants from birth through the age of 90, Kraus and her colleagues have found that the repeated engagement with sound that is found in making music, and the constant attention to details of sound have a positive effect on speech processing and language skills.
Over the next few posts, we’re going to look at how Kraus and the Brainvolts lab measure auditory processing in the brain, and then at the real-world applications of their work: the effect of music training on reading ability; its biological impact on the aging process; its effects on primary school children from gang reduction zones in Los Angeles; and its role in the ability of both adolescents and older adults to hear speech in noise.
We’ll also look at the impact of beginning to study music as late as high school and the positive effect on the adult brain of just a few years of study in childhood. And perhaps even more.
In case you can’t tell, I’m a huge fan of Kraus’s work. More posts will follow soon.
1 Anderson S, Kraus N (2011) Neural encoding of speech and music: implications for hearing speech in noise. Seminars in Hearing. 32: 129-141.
2 Kraus N, White-Schwoch T (2015) Unraveling the biology of auditory learning: a cognitive-sensorimotor-reward framework. Trends in Cognitive Sciences. 19(11): 642-654.
3 Anderson S, Kraus N (2011); see note 1.