Listen up: Our brains prepare to ‘tune in’ to a voice of interest

A fundamental challenge of human communication is to understand what someone’s saying in the presence of background noise, for example when several conversations occur at the same time. In these situations, accurate speech understanding relies on the ability to direct attention to a voice of interest and ignore background sounds, such as the voices of other talkers. Several factors are known to improve speech understanding in noisy listening environments. Most previous studies have focused on the acoustic factors that improve people’s ability to focus on a target talker. For example, we know that listeners achieve better speech understanding when the target talker is in a different location from the background talkers (e.g. if the target talker is on the left and background talkers are on the right, Fig. 1A, compared to when both target and background talkers are on the left, Fig. 1B) or when the target talker is a different gender from the background talkers (e.g. if the target talker is female and background talkers are male, Fig. 2A, compared to when both target and background talkers are female, Fig. 2B).

Fig. 1. Schematic of the different-location advantage. A listener (white circle) achieves better speech understanding when the target talker (green) is located on a different side (e.g. left) from the background talkers (grey, on the right) (A) than when the target talker is located on the same side (e.g. left) as the background talkers (also on the left) (B).

It is, however, becoming increasingly apparent that cognitive processes can also affect speech understanding in noisy environments. For example, listeners are better able to understand a target talker if they know characteristics of that talker before he or she starts speaking. Listeners understand speech more accurately when they know in advance whether the talker is male or female, whether the talker is located on their left or right side, or how much time will elapse before the talker starts speaking. Together, these findings suggest that, when multiple talkers speak at the same time, listeners can use information about the characteristics of a target talker to predict what that talker’s voice will sound like and to help them focus on what that talker’s saying. Importantly, these findings demonstrate that speech understanding depends not only on the acoustic characteristics of a listening environment, but also on the prior knowledge that a listener has access to.

In a recent study, we measured brain activity using electroencephalography (EEG) while participants were given predictive information about a target talker’s location or gender. Given this information, participants displayed preparatory brain activity before the talkers started speaking. This preparatory brain activity started very soon after participants were told the location or gender of the talker and was sustained until the talkers started speaking. These results add to an increasing body of evidence suggesting that, before a talker starts speaking, listeners activate the neural circuitry required to ‘tune in’ to a target talker, based on characteristics of their voice. This preparatory activity likely helps listeners focus their attention on a voice of interest in noisy environments, which are commonplace in everyday life.

Fig. 2. Schematic of the different-gender advantage. A listener (white circle) achieves better speech understanding when the target talker (light pink) is a different gender (e.g. female) from the background talkers (blue) (A) than when the target talker (light pink) is the same gender as the background talkers (dark pink) (B).

In the same set of studies, we found brain activity in children aged 7–13 years that was similar to the activity we found in adults. These findings imply that children in this age range are already developing adult-like brain activity when they listen to speech in noisy environments. Specifically, the results suggest that children may be able to use predictive information about whether a voice is located on their left or right side to help them listen to that person’s voice. This ability may be crucial for development, because children often have to learn language in noisy classrooms, such as when several children talk at the same time as the teacher.

Emma Holmes
Brain and Mind Institute, University of Western Ontario

Publication

EEG activity evoked in preparation for multi-talker listening by adults and children.
Holmes E, Kitterick PT, Summerfield AQ
Hear Res. 2016 Jun

