Understanding language involves complex cognitive processes, and—given the incredible number of word choices and arrangements that can form a single sentence—the brain must be able to distinguish between coherent and incoherent combinations.
A person’s comprehension of a sentence, whether spoken—like when a mother tells her son that she’s going to the store—or written in a book, depends, in part, on what the brain expects the next word to be.
For example, if someone begins to read "It was a dark and stormy..." at the beginning of a book, it is expected that "night" will be the following term.
However, occasionally unexpected words are encountered—like "...and the mad scientist was painting his laboratory the color raccoon..."—that disrupt the sentence’s meaning.
In this instance, the anomalous term is raccoon, as it refers to a type of animal, rather than an expected color, like black.
Such semantic incongruities—the senseless sentences—elicit unique electrical signals in the brain—responses known as event-related potentials, ERPs for short—that may provide insight into how the brain either retrieves the definition of, or reprocesses, the troublesome word in an attempt to comprehend the sentence.
This video explains how the technique of electroencephalography, or EEG, can be used to measure ERPs during semantic incongruity tasks, in which participants are shown sentences ending with unexpected words.
We demonstrate how to design stimuli, and collect and analyze data, specifically focusing on a unique component of ERPs, named N400 to reflect its characteristics.
In this experiment, EEG is used to measure brain activity in participants shown semantically coherent and incoherent stimuli, in order to investigate language processing and comprehension.
These stimuli consist of three kinds of sentences: congruous, incongruous, and size-deviant. Although each is composed of seven words, they differ in the nature of their last terms.
The final words in congruous sentences, like "She scratched her dog behind its ear," pose no problems with meaning, and appear in the same font type and size as the words preceding them.
Importantly, these sentences serve as controls to gauge how the brain responds to coherent word combinations.
In contrast, incongruous sentences, like "She dipped her chicken finger in boots," possess last terms that are semantically anomalous.
Here, boots conflicts with the meaning of the rest of the words—it is expected that chicken fingers would be dipped in a condiment like mustard, not in articles of clothing. Thus, these stimuli evaluate how surprising, incoherent language is processed.
The final type of sentences are called size-deviant, and contain last words that are surprising in appearance, as they are shown in a larger font, but not in meaning.
For example, in the sentence "He put his hand in his mitten," even if the term mitten is written in bigger letters, it still makes semantic sense.
These stimuli are critical, as they are meant to distinguish whether the brain’s response to the last word in a sentence is the result of general surprise—the shock of an inconsistent text size—or is specific to unexpected meanings.
After participants are prepared for EEG, they are told to carefully read sentences that appear on a computer screen, as questions will be asked about them later on.
In reality, no quiz is given at the end of the experiment; however, these instructions ensure that subjects will pay attention to the upcoming stimuli.
During the task, participants are shown, one at a time and in order, the seven words that make up a single sentence.
Each term appears individually in the center of the monitor—to reduce eye movements that could interfere with data collection—for 100 ms, and is followed by 1000 ms of blank screen.
EEG data are continuously recorded over 120 such trials, each of which consists of a unique sentence; each of the three sentence types appears 40 times, in a random order. Then, the task is repeated with a second set of 120 sentences, so participants read a total of 240 sentences.
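The trial structure described above can be sketched as a simple schedule generator. This is a minimal illustration, not the stimulus-presentation software itself; the condition labels and function names are ours, while the counts and timing follow the narration:

```python
import random

def build_block(seed):
    """One block of 120 trials: 40 sentences per condition, randomly ordered."""
    rng = random.Random(seed)
    conditions = ["congruous", "incongruous", "size_deviant"]
    # (condition, sentence index) pairs; each condition contributes 40 sentences
    block = [(cond, i) for cond in conditions for i in range(40)]
    rng.shuffle(block)
    return block

# The task runs twice, so 240 sentences in total. During presentation, each of
# the 7 words would be shown for 100 ms, followed by 1000 ms of blank screen
# (7 x 1100 ms = 7.7 s per sentence).
schedule = build_block(seed=1) + build_block(seed=2)
print(len(schedule))  # 240
```

Shuffling within each block keeps the three conditions equally frequent while making the sentence order unpredictable, as the protocol requires.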
Afterwards, EEG data are processed to visualize average ERPs for each type of sentence—from each electrode—and scientists search for the N400 component in these waveforms.
The "N" in this term indicates that the peak is negative, and the "400" represents its latency—that it occurs roughly 400 ms after the last-word stimulus is shown to the participant.
Based on previous research, it is expected that the amplitude of the N400 will increase in response to semantically inconsistent endings, and that it will be recorded from all scalp electrodes.
However, this response will likely be most prominent at the Pz electrode, positioned in the midline of the scalp above the parietal lobes—regions of which are known to be involved in processing and integrating written language.
Prior to beginning the experiment, recruit a participant who is a native English speaker, and explain to them the two main components of the procedure: that they will be wearing electrodes, and be shown sentences on a computer screen. Then, collect from them all of the necessary, signed consent forms.
Next, outfit the participant with scalp and face electrodes. For more details on this procedure, check out the methods described elsewhere in this collection. Once in the testing space, verify impedance values across all electrodes.
Upon confirming that the EEG traces are free of noise, instruct the participant to sit so that their eyes are approximately 75 cm away from the screen.
Emphasize that they should read and pay careful attention to the sentences that appear word-by-word on this display, as questions will be asked about their content later on.
To ensure that the participant understands the task, show them ten practice sentences, but do not collect data during this time. Afterwards, start the EEG system to commence continuous recording.
Proceed with the functional task by presenting 120 trials—consisting of 40 congruous, 40 incongruous, and 40 size-deviant sentences—in a random order. Then, repeat this process with an additional set of 120 stimuli to guarantee that enough data are collected.
Once data have been recorded for all 240 stimuli, process them as described in JoVE’s ERPs and the Oddball Task video.
To analyze the data, first plot the average waveforms for the timecourses of congruous, incongruous, and size-deviant stimuli collected from the Pz recording site. On the x-axis of this graph—representing time in ms—indicate when each word in a sentence is shown.
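The averaging behind these waveforms can be sketched with numpy. The epoched data here are simulated noise purely for illustration; real epochs would come from the EEG recording, segmented around the final-word onset:

```python
import numpy as np

# Hypothetical epoched data for one condition at Pz: trials x time samples.
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 2.0, size=(80, 200))  # 80 trials, 200 samples each

# Averaging across trials cancels activity not time-locked to the stimulus,
# leaving the event-related potential (ERP) for that condition.
erp = epochs.mean(axis=0)
print(erp.shape)  # (200,)
```

Because random noise averages toward zero while the stimulus-locked response does not, the averaged trace is far cleaner than any single trial.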
Afterwards, locate the N400 peaks, and for each, calculate its amplitude—defined as the distance between the lowest point of the peak and the baseline value of 0 µV, represented by the horizontal axis.
Then, calculate the latency of this component—how long, in ms, it takes to appear in the waveform after the last word in a sentence is shown.
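Both measurements can be sketched in a few lines of Python. The 300–500 ms search window and the toy waveform are assumptions for illustration; the function name is ours:

```python
import numpy as np

def n400_measures(erp, times, window=(0.3, 0.5)):
    """Find the most negative point of an averaged, baseline-corrected ERP
    within a search window around the expected N400 latency.

    erp   : 1-D array of mean voltages (µV), baseline at 0 µV
    times : 1-D array of times (s) relative to final-word onset
    Returns (amplitude in µV, latency in ms).
    """
    idx = np.where((times >= window[0]) & (times <= window[1]))[0]
    peak = idx[np.argmin(erp[idx])]   # most negative sample in the window
    amplitude = erp[peak]             # distance from the 0 µV baseline
    latency_ms = times[peak] * 1000.0
    return amplitude, latency_ms

# Toy waveform: a -5 µV deflection centered at 400 ms
times = np.linspace(0.0, 0.8, 801)
erp = -5.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
amp, lat = n400_measures(erp, times)
print(round(amp, 1), round(lat))  # -5.0 400
```

Restricting the search to a window keeps unrelated early or late deflections from being mistaken for the N400.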
Then, use F-tests to determine whether these amplitudes and latencies differ between target and control stimuli.
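The F statistic for a one-way comparison across conditions can be computed directly with numpy, as a minimal sketch. The amplitude values below are hypothetical, invented only to show the calculation:

```python
import numpy as np

def one_way_f(groups):
    """One-way ANOVA F statistic: between-group variance over within-group
    variance, e.g. for per-participant N400 amplitudes by condition."""
    groups = [np.asarray(g, float) for g in groups]
    k = len(groups)                         # number of conditions
    n = sum(len(g) for g in groups)         # total observations
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mean amplitudes (µV): incongruous endings are more negative
congruous   = [-1.2, -0.8, -1.0, -1.1]
incongruous = [-4.9, -5.3, -5.1, -4.7]
F = one_way_f([congruous, incongruous])
print(F > 100)  # True: conditions differ far more than within-group noise
```

In practice the same result is available from `scipy.stats.f_oneway`, which also returns the p-value needed to judge significance.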
Notice that the N400 response was only observed after participants were shown the last word of an incongruous sentence, indicating that this electrical event reflects neural processing—particularly involving the parietal lobes—that identifies an interruption in sentence comprehension caused by an incoherent term.
Importantly, although the N400 was not observed in waveforms collected using size-deviant stimuli, another unique component—P560, a positive peak with a latency of 560 ms—was.
This indicates that the brain responds differently to unexpected visual stimuli and semantically inconsistent terms, and suggests that N400 is a unique electrical signature of language incongruity.
Now that you know how semantic inconsistency can be used to elicit the N400 component in ERPs, let’s look at other ways researchers are examining this unique electrical signal to study language processing and comprehension.
Some researchers aim to determine when the ability to identify incoherent language develops, and whether this skill changes with age.
Such work has involved showing young children—outfitted with EEG caps—representations of recognizable objects, like a camera.
However, the trick is that when the child looks at this depiction, they're told it’s something different—for example, a cat. Thus, this is a modified version of the semantic incongruity task, as the spoken word doesn’t match the meaning of the visible item.
Measurements of the brain’s electrical responses to these tasks demonstrated that children exhibit an enhanced N400-esque response to incongruous item-word pairs—one that lasts for several hundred ms—compared to congruous sets.
Importantly, this suggests that even at an early age, humans are able to identify and process semantic incongruity.
Other researchers are assessing whether ERPs can be used to better understand language deficits associated with certain psychiatric disorders, such as schizophrenia.
Paradoxically, previous work has shown that individuals with pronounced schizophrenia-like characteristics, such as anxiety or the inability to feel pleasure, demonstrate a heightened N400 response to congruous word pairs—like animal and goat—compared to people with milder symptoms.
However, when these participants were treated with an antipsychotic drug called olanzapine, the amplitude of this congruity-caused N400 component decreased compared to individuals given a placebo, suggesting a possible therapy that could treat the disjointed speech sometimes observed in such disorders.
You’ve just watched JoVE’s video on how congruous and incongruous sentences can be used to investigate language processing. At this point, you should know how to present stimuli to participants, and collect and interpret ERP data. We hope you also now understand how the N400 component is being used to investigate other aspects of language comprehension, such as how it can be affected in behavioral disorders.
Thanks for watching!