07:36 min • November 30, 2018
This method can help answer key research questions in psycholinguistics regarding the mental processes implicated in language comprehension. The advantage of this technique is that it provides a visual attention index with high temporal resolution, geared toward examining the relationship between the visual world and language processing. Individuals new to this method may struggle because the protocol must be executed with a high degree of precision.
Visual demonstration of this method is critical as the proper implementation of the method often requires ad hoc troubleshooting. To determine the ocular dominance, ask the participant to stretch out one arm and to align the thumb with a distant object with both eyes open. Have the participant alternate closing each eye.
The dominant eye is the eye for which the thumb remains aligned with the object when that eye is open. Then, ask the participant to sit at the table with the chin on the chin rest and the forehead against the headrest. Next, click image to display PC. One large and two smaller images of the participant’s eyes should appear.
Use the keyboard arrow keys to select the smaller image corresponding to the dominant eye and press A to center the search limits box on the pupil position. A red square should appear around the eye with a turquoise circle near the bottom of the pupil. The pupil itself should be blue.
Confirm the presence of two crosshairs on the screen, one in the center of the pupil and one in the center of the corneal reflection. Adjusting the size of the corneal reflection is crucial. It’s the basis for a good calibration of the eye tracker.
Manually turn the focus lens to adjust the camera as necessary, taking care not to touch the front of the lens, until the corneal reflection is as small as possible. Then press A to automatically set the pupil and corneal thresholds on the host computer so that the entire pupil, and only the pupil, is blue. To calibrate the eye tracker, ask the participant to look at the four corners of the screen one at a time with the camera setup window in view. Carefully watch for any irregular reflections that interfere with the corneal reflection when the participant’s eye gaze is directed at each corner.
If the red box around the eye or either of the crosshairs is not visible at any point, have the participant first look at the center of the screen and then at the problematic corner to determine the source of the issue. Readjust the position of the participant’s head to check whether this yields any improvement. Next, inform the participant that the eye tracker will be calibrated and that they are to fixate on the small gray dot within a larger black circle as the circle moves to different parts of the screen.
When the participant is ready, click calibrate to begin the nine point calibration process, pressing enter after the participant has accurately fixated on the first dot in the middle of the screen for an automatic calibration. At the end of the calibration process, an almost rectangular pattern should be visible on the experimenter’s screen that represents the eye gaze patterns of the participant. The results of a good calibration will be highlighted in green.
To validate the results, have the participant repeat the procedure, reminding the participant to look at the dot and to remain still. Click validate and enter to accept each fixation. The results will appear on the host computer monitor.
Note the average and maximal errors. These values represent the degree to which the tracked gaze deviates from the actual gaze position of the participant. If the average error is above 0.5 degrees and/or the maximum error is above one degree, have the participant readjust the head position and restart the calibration procedure.
Keeping the maximum error rate below one degree is crucial for high quality data collection. Note that these are the thresholds for tracking the gaze to relatively large objects during spoken language comprehension. The thresholds for eye tracking during reading are even lower.
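The validation thresholds described above can be expressed as a simple check. This is an illustrative sketch (the function name and argument layout are assumptions; the 0.5-degree and 1.0-degree limits come from the protocol):

```python
def calibration_ok(avg_error_deg, max_error_deg,
                   avg_threshold=0.5, max_threshold=1.0):
    """Return True if validation errors meet the thresholds described
    in the protocol for tracking gaze to relatively large objects.
    Reading studies typically require stricter limits."""
    return avg_error_deg <= avg_threshold and max_error_deg <= max_threshold
```

If this check fails, the participant's head position should be readjusted and calibration restarted, as described above.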
After successful completion of the calibration process, click output record to begin the experiment. As one example of tracking eye movements during situated reading, first display two playing cards moving either closer together or farther apart. Then have the participant read a sentence about two concepts that are represented as similar or dissimilar to one another, to assess whether the distance between the cards can modulate the comprehension of similarity in language.
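A common dependent measure for such reading trials is first-pass reading time: the summed duration of fixations within a region of interest before the gaze first exits it. A minimal sketch, assuming a hypothetical fixation record format of (region label, duration in ms):

```python
def first_pass_time(fixations, region):
    """Sum fixation durations in `region` from the first fixation that
    enters it until the gaze first leaves it (the "first pass").
    `fixations` is an ordered list of (region_label, duration_ms)."""
    total = 0
    entered = False
    for label, duration in fixations:
        if label == region:
            entered = True
            total += duration
        elif entered:
            break  # gaze left the region: first pass is over
    return total
```

Later regressions back into the region (the final fixation in the example below) are deliberately excluded from this measure.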
As one example for recording eye movements during auditory language comprehension, first show the participant a video of an event of interest. Then show the participant a target scene containing the images of the agent and the objects from the video, and an agent of the opposite gender plus the image of another object. Simultaneously, play a sentence describing an event that either matches or mismatches the previous video to assess the extent to which the gender of an agent’s hands featured in prior events can influence which face from the target scene the participant will anticipate during spoken comprehension.
In this representative reading eye-tracking experiment, the mean first-pass reading times of similarity adjectives were shorter for similarity sentences after the participant observed two playing cards moving closer together than after the cards moved farther apart. Here, an analysis of the mean log gaze ratios during the verb region of an auditory comprehension test reveals that participants were more likely to inspect the picture of the agent whose gender matched events in a previously shown video when the object and the verb mentioned in the subsequent sentence were congruent with the video and stereotypically matched this agent than when they mismatched on these two dimensions. While executing this procedure, it is important to remember that precision in carrying out each step is crucial for high-quality data collection.
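The log gaze ratio mentioned above compares looks to one picture against looks to another within the analysis region. A minimal sketch, assuming per-trial look durations (in ms) to the target and competitor agent and a +1 smoothing constant to avoid taking the log of zero; the exact smoothing and aggregation in the published analysis may differ:

```python
import math

def mean_log_gaze_ratio(trials):
    """trials: list of (looks_to_target_ms, looks_to_competitor_ms)
    during the region of interest (e.g., the verb region).
    Positive values indicate more inspection of the target picture;
    a +1 constant keeps the ratio defined on trials with zero looks."""
    ratios = [math.log((target + 1.0) / (competitor + 1.0))
              for target, competitor in trials]
    return sum(ratios) / len(ratios)
```

A mean above zero would correspond to the preference reported above for the agent whose gender matched the previously shown video.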
In this way, we can obtain insights into the extent to which picture contexts incrementally modulate real-time language comprehension. Post-sentence or post-experiment measures, like response times and accuracy, or data from other offline tests can enrich the insights that can be gained from online fixation measures. This technique has enabled psycholinguists to investigate the mental representations and real-time processes implicated in language comprehension.
The present article reviews an eye-tracking methodology for studies on language comprehension. To obtain reliable data, key steps of the protocol must be followed. Among these are the correct set-up of the eye tracker (e.g., ensuring good quality of the eye and head images) and accurate calibration.
Cite this Article
Rodriguez Ronderos, C., Münster, K., Guerra, E., Kreysa, H., Rodríguez, A., Kröger, J., Kluth, T., Burigo, M., Abashidze, D., Nunnemann, E., Knoeferle, P. Eye Tracking During Visually Situated Language Comprehension: Flexibility and Limitations in Uncovering Visual Context Effects. J. Vis. Exp. (141), e57694, doi:10.3791/57694 (2018).