Source: Laboratory of Jonathan Flombaum—Johns Hopkins University
Spoken language, a singular human achievement, relies heavily on specialized perceptual mechanisms. One important feature of these mechanisms is that they rely simultaneously on auditory and visual information. This makes sense, because until modern times a person could expect that most language would be heard in face-to-face interactions. And because producing specific speech sounds requires precise articulation, the mouth can supply good visual information about what someone is saying. In fact, with an up-close and unobstructed view of someone's face, the mouth can often supply a better visual signal than the speech itself supplies an auditory one. The result is that the human brain favors visual input, using it to resolve the inherent ambiguity in spoken language.
This reliance on visual input to interpret sound was described by Harry McGurk and John MacDonald in a 1976 paper called "Hearing lips and seeing voices."1 In that paper, they reported an illusion that arises from a mismatch between a sound recording and a video recording. That illusion has become known as the McGurk effect. This video will demonstrate how to produce and interpret the McGurk effect.
One place that the McGurk effect has been important is in understanding how very young infants learn spoken language. A 1997 study showed that even 5-month-old infants perceive the McGurk effect.2 This is important because it suggests that visual information may help infants solve a major challenge in learning language: parsing a continuous audio stream into its units. Think about how a foreign language spoken at its normal speed can seem like such a jumble that you might not even know where one word ends and the next begins.