The study, published on July 14th in the Open Access journal PLOS Biology, centers on a child's ability to decipher speech -- specifically consonants -- in a chaotic, noisy environment. Preliterate children whose brains inefficiently process speech against a background of noise are more likely than their peers to have trouble with reading and language development when they reach school age, the researchers found.
This newfound link between the brain's ability to process spoken language in noise and reading skill in pre-readers "provides a biological looking glass into a child's future literacy," said study senior author Nina Kraus, director of Northwestern's Auditory Neuroscience Laboratory.
"There are excellent interventions we can give to struggling readers during crucial pre-school years, but the earlier the better," said Kraus, a professor of communication sciences, neurobiology and physiology in the School of Communication. "The challenge has been to identify which children are candidates for these interventions, and now we have discovered a way."
Noisy environments, such as homes with blaring televisions and wailing children, loud classrooms or urban streetscapes, can disrupt brain mechanisms associated with literacy development in school-age children.
The Northwestern study, which directly measured the brain's response to sound using electroencephalography (EEG), is one of the first to find this deleterious effect in preliterate children. This suggests that the brain's ability to process the sounds of consonants in noise is fundamental for language and reading development.
Speech and communication often occur in noisy places, an environment that taxes the brain. Noise particularly affects the brain's ability to hear consonants, rather than vowels, because consonants are spoken very quickly while vowels are acoustically simpler, Kraus said.
"If the brain's response to sound isn't optimal, it can't keep up with the fast, difficult computations required to process in noise," Kraus said. "Sound is a powerful, invisible force that is central to human communication. Everyday listening experiences bootstrap language development by cluing children in on which sounds are meaningful. If a child can't make meaning of these sounds through the background noise, they won't develop the linguistic resources needed when reading instruction begins."
In the study, EEG electrodes were placed on the children's scalps, allowing the researchers to assess how the brain reacted to consonant sounds. In the right ear, the young study participants heard the syllable 'da' superimposed on the babble of six talkers. In the left ear, they heard the soundtrack of a movie of their choice, which was shown to keep them still.
"Every time the brain responds to sound it gives off electricity, so we can capture how the brain pulls speech out of the noise," Kraus said. "We can see with extreme granularity how well the brain extracts each meaningful detail in speech."
The researchers captured three different aspects of the brain's response to sound: the stability with which the circuits were responding; the speed with which the circuits were firing; and the quality with which the circuits represented the timbre of the sound.
Using these three pieces of information, they developed a statistical model to predict children's performance on key early literacy tests.
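To make that modeling step concrete, the sketch below illustrates, in Python with scikit-learn, how three neural measures of this kind (response stability, response timing, and spectral "timbre" fidelity) could be combined in a cross-validated regression to predict an early-literacy score. The variable names and the synthetic data are assumptions for illustration only; this is not the authors' actual model or data.

```python
# Illustrative sketch only: combine three hypothetical neural measures into a
# cross-validated linear model predicting an early-literacy score.
# Synthetic data stands in for the study's EEG and behavioral measurements.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_children = 112  # matches the study's sample size; the values below are fake

# Hypothetical neural measures derived from the EEG response to 'da' in noise
stability = rng.normal(0.6, 0.15, n_children)        # trial-to-trial consistency
timing = rng.normal(7.0, 0.8, n_children)            # response latency in ms
timbre_fidelity = rng.normal(0.5, 0.2, n_children)   # fidelity of timbre coding

X = np.column_stack([stability, timing, timbre_fidelity])

# Synthetic literacy score loosely driven by the three measures, plus noise
y = (2.0 * stability
     - 1.5 * (timing - 7.0)
     + 1.0 * timbre_fidelity
     + rng.normal(0, 0.5, n_children))

# Standardize the predictors, fit a linear model, and evaluate out-of-sample
model = make_pipeline(StandardScaler(), LinearRegression())
predicted = cross_val_predict(model, X, y, cv=5)

r = np.corrcoef(y, predicted)[0, 1]
print(f"cross-validated prediction vs. observed score: r = {r:.2f}")
```

The cross-validation step matters here: a model tested only on the children it was fit to would overstate how well such neural measures forecast later literacy.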
In a series of experiments with 112 children between the ages of 3 and 14, Kraus' team found that their 30-minute neurophysiological assessment predicts with high accuracy how a 3-year-old will perform on multiple pre-reading tests and how, a year later at age 4, he or she will perform across multiple language skills important for reading. The model proved its breadth by also accurately predicting reading acumen in school-aged children, as well as whether they had been diagnosed with a learning disability.
"The importance of our biological approach is that we can see how the brain makes sense of sound and its impact for literacy, in any child," Kraus said. "It's unprecedented to have a uniform biological metric we can apply across ages."
Other Northwestern co-authors include Travis White-Schwoch, Kali Woodruff Carr, Elaine C. Thompson, Samira Anderson, Trent Nicol, Ann R. Bradlow, and Steven G. Zecker, all of the Auditory Neuroscience Laboratory and department of communication sciences at Northwestern.
The team will continue to follow these children in its "Biotots" project as they progress through school.
Brain's ability to process consonants in noisy environment may reflect child's literacy potential
- Background noise disrupts brain mechanisms involved in literacy development
- One of the first studies to establish brain-behavior links in pre-readers
- Results provide 'a biological looking glass into a child's future literacy'
- New way to identify which children are candidates for reading interventions
Please mention PLOS Biology as the source for this article and include the link below in your coverage to take readers to the online, open access article.
All works published in PLOS Biology are open access, which means that everything is immediately and freely available. Use this URL in your coverage to provide readers access to the paper upon publication:
http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1002196
Contact:
Dr. Nina Kraus
847-491-3181
nkraus@northwestern.edu
Citation: White-Schwoch T, Woodruff Carr K, Thompson EC, Anderson S, Nicol T, Bradlow AR, et al. (2015) Auditory Processing in Noise: A Preschool Biomarker for Literacy. PLoS Biol 13(7): e1002196. doi:10.1371/journal.pbio.1002196
Funding: This work was supported by NIH (R01HD069414; http://www.nichd.nih.gov & R01DC01510; http://www.nidcd.nih.gov) and the Knowles Hearing Center (http://knowleshearingcenter.northwestern.edu). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.