A Method to Quantify Visual Information Processing in Children Using Eye Tracking


Summary

A method is described to quantify the quality of visual information processing based on reflexive eye movements in response to specific visual modalities. Reaction times and fixation output parameters are used to characterize visual performance in children with and without visual impairments from 6 months of age.

Cite this Article


Kooiker, M. J., Pel, J. J., van der Steen-Kant, S. P., van der Steen, J. A Method to Quantify Visual Information Processing in Children Using Eye Tracking. J. Vis. Exp. (113), e54031, doi:10.3791/54031 (2016).

Abstract

Visual problems that occur early in life can have major impact on a child's development. When no verbal communication is possible and assessment must rely on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii) construct an individual visual profile for each child.

Introduction

The prevalence of brain damage-related visual problems in children has increased. Because visual problems can have great impact on a child's development, early detection in young infants and children at risk is highly important. At present, visual function tests to assess visual sensory functions such as visual acuity and contrast sensitivity (e.g., optotype tests) are applicable in children from 1-2 years of age1. In younger children these tests are based on structured observations of a child's viewing behavior to visual information. The interpretation of such behavior, i.e., by looking at a child's eye movements, can be hampered by oculomotor or attentional dysfunctions of the child, or even by viewing behavior of the observer. Cerebrally mediated visual functions such as visuospatial memory and object recognition are assessed with visual perception tests (e.g., DTVP2). These tests require verbal instructions and communication and can be used from 4-5 years of age. In view of the postnatal development of the visual system and to take advantage of the high level of plasticity early in life, it is desirable to establish the presence and extent of impairments in visual information processing as early as possible. That way, children with (cerebral) visual impairments may maximally benefit from early intervention, visual stimulation, or supportive strategies. Consequently, there is a need for an assessment method of visual information processing that can be used without verbal communication in children and that is based on quantitative results.

Eye movements are a good model to study visually guided orienting behavior to stimuli3,4, and related perceptual and cognitive functions5. Eye movements indicate the focus of visual attention in scenes, and are known to result either from bottom-up (reflexive, salience-driven) or from top-down (intentional, cognitive) processes6. Eye movements are used to direct the fovea, the retinal region of sharpest vision, toward new objects. The visual content of an object of interest is processed via pathways that run from the retina via the lateral geniculate nucleus to primary visual cortex (V1), and that distribute themselves over cerebral processing areas (e.g., involved in attention, spatial orientation, recognition, memory, and emotions). Eye movements are both a prerequisite for, and a sequel to visual information processing.

Developments in the measurement of eye movements with infrared eye trackers have made it possible to obtain quantitative parameters of oculomotor and visual function. Automated eye trackers are now ubiquitous in medical and psychological research involving healthy and clinical populations. Their purpose is not only to study oculomotor function and attention allocation7, but also to answer questions about behavioral and psychological mechanisms8,9. With the rise of accessible and commercial eye tracking systems, they are increasingly used to test vulnerable populations of infants and children10-12, without constraining conditions, complex instructions, or active cooperation12,13. Due to the close coupling of the oculomotor and visual system on an ocular and cerebral level, eye tracking-based methods are pre-eminently suited to assess visual capacities. So far, besides the measurement of visual acuity14, the use of the technique in assessing visual function in children has received relatively little attention.

Our group has combined eye movement measurements with a preferential looking paradigm13. Preferential looking is the preference to fixate patterned surfaces over homogeneous ones15. This principle is applied by using visual stimuli with a target area in one of four quadrants, which differ from the background in terms of one specific visual feature, e.g. coherent form, coherent motion, contrast and color. These visual features are known to be processed by separate peripheral and central visual pathways. For example, information about form is processed by ventral pathways, from V1 to the temporal cortex. Information about motion is processed by dorsal pathways, from V1 to posterior parietal cortex16. Hence, specific stimuli are used to trigger visual information processing in distinct areas of the visual system. If a child is able to see the specific visual information that is presented, that information will attract visual attention in the form of eye movements. These reflexive eye movement responses to the visual stimuli are recorded with a remote infrared eye tracker. That way, eye movement measures provide a communication-free assessment of the quality of various aspects of visual information processing13.

Eye movements provide not only observational data of a child's viewing behavior11, but can also be used for more objective outcome measures. In combination with a carefully designed test paradigm, eye movements can give precise and objective information on visual information processing. This information is obtained by calculating quantitative parameters based on temporal and spatial properties of eye movement responses. Examples of such parameters are reaction time13, fixation time17, saccade metrics7 or cumulative attention allocation18. The availability of these parameters is new to the field of visual assessment in children at a young developmental stage.

The goal of this paper is to present an eye tracking-based method to measure visual information processing in children from the age of 6 months. The measurement set-up and procedure (i.e. nonverbal paradigm, post-calibration, and mobility) specifically apply to using this method in children at risk. A crucial aspect is the analysis of quantitative visual response parameters, i.e. reaction time, fixation duration, and fixation accuracy. These parameters are used to provide reference areas of visually guided responses in typically developing children, to characterize visual information processing in risk groups of children with visual impairments.

Protocol

The protocol described here was approved by the Medical Ethical Research Committee of the Erasmus Medical Center, Rotterdam, the Netherlands (MEC 2012-097). The procedures adhered to the tenets of the Declaration of Helsinki (2013) for research involving human subjects.

1. Visual Stimuli

  1. Select a set of visual stimuli, i.e., images and movies, to target the processing of basic oculomotor functions and visual processing functions.
  2. Use images and movies to evaluate basic oculomotor functions such as fixation, saccades, smooth pursuit, and optokinetic nystagmus. When abnormalities in oculomotor function are detected, take this into account in data analysis and interpretation.
    1. Use an image to assess fixation and saccades. The present paradigm contains smiley pictures with a radius of 3º of visual angle, which are presented in the left, right, upper and lower half of the monitor.
    2. Use a slowly moving image to assess smooth pursuit. The present paradigm contains movies of smileys which move 16º in sinusoidal horizontal and vertical direction across the monitor, with a velocity of 4º/sec.
    3. Use a movie to assess optokinetic nystagmus reflexes. The present paradigm contains movies of black-and-white sinusoidal gratings that move in leftward and rightward direction.
  3. Use images and movies to assess visual processing functions, e.g., contrast, color, form or motion.
  4. Use a set of visual stimuli that are based on a 4-alternative forced choice preferential looking paradigm (4-AFC PL19). In the present paradigm, the 4 stimulus corners (i.e., upper left and right quadrant, lower left and right quadrant) each represent an alternative choice, i.e., a target area. Each target area has a radius of 6º and differs from the other 3 quadrants with respect to specific visual information, e.g., based on contrast, color, form or motion. The following visual stimuli can be used as an example:
    1. Use an image to assess Form Coherence processing: an image with an array of randomly oriented short white lines (0.2º x 0.6º; density 4.3 lines/degree2) against a black background. In the target area all lines are arranged in the shape of a circle.
    2. Use a movie to assess Local Motion processing: a movie with a black/white patterned square target, with a visual angle of 2.3º, against an equally patterned background, moving 2.5º to the left and to the right in one quadrant at 2.5º/sec.
    3. Use a movie to assess Global Motion processing: a movie with an array of white dots (diameter 0.25º, density 2.6 dots/degree2) expanding from the center of the target area towards the borders of the monitor. The dots move over a black background with a velocity of 11.8º/sec and a limited lifetime of 0.4 sec.
    4. Use an image to assess Contrast Detection: an image with a 0% brightness (black) Hiding Heidi picture in the target area, against a 75% brightness (light gray) background.
    5. Use an image to assess Color Detection: an image with a green number 17 in the target area, against a red-yellow background.
    6. Use a movie to assess simultaneous visual processing, e.g., a Cartoon: a colorful, high contrast picture (reproduced with permission from Dick Bruna, Mercis BV, Amsterdam, The Netherlands) with a visual angle of 4.5º x 9.0º (width x height) moving 1.5º up and down at a speed of 3º/sec in the target area, against a black background.
      NOTE: For the purpose of clarity, the representative results of this paper will focus on the highly salient cartoon stimulus that contains various types of visual information (Figure 1). For pictures of the other visual stimuli, please consult a previous study20.
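As an illustration of the form coherence stimulus in step 1.4.1, the sketch below generates positions and orientations for short line segments: random orientations everywhere, but tangent to a circle inside the target quadrant. This is a minimal Python sketch, not the original stimulus generator; the function name, the number of lines, and the 0.5º tolerance band around the circle are assumptions.

```python
import numpy as np

def form_coherence_lines(n_lines=400, monitor_deg=(32.0, 24.0),
                         target_quadrant=(1, 1), circle_radius=3.0, rng=None):
    """Return (x, y, angle) for short line segments on the monitor (degrees).
    Lines have random orientations, except near a circle inside the target
    quadrant, where they are oriented tangentially (the coherent form).
    target_quadrant: (0 or 1, 0 or 1) selecting left/right and lower/upper."""
    rng = np.random.default_rng(rng)
    w, h = monitor_deg
    x = rng.uniform(0, w, n_lines)
    y = rng.uniform(0, h, n_lines)
    angle = rng.uniform(0, np.pi, n_lines)           # random orientations
    # center of the target area in the chosen quadrant
    cx = w * (0.25 + 0.5 * target_quadrant[0])
    cy = h * (0.25 + 0.5 * target_quadrant[1])
    d = np.hypot(x - cx, y - cy)
    on_circle = np.abs(d - circle_radius) < 0.5      # lines near the circle edge
    # tangent orientation = radial angle rotated by 90 degrees
    angle[on_circle] = (np.arctan2(y - cy, x - cx)[on_circle] + np.pi / 2) % np.pi
    return x, y, angle
```

The returned coordinates and angles can then be rendered as 0.2º x 0.6º white lines on a black background with any plotting library.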

Figure 1
Figure 1. Cartoon stimulus. The cartoon stimulus contains various visual modalities (form, motion, color and contrast). This stimulus triggers visual attention and elicits the fastest response times in children. Superimposed is an eye movement trace (gray), going from the lower left corner of the monitor into the target area in the upper right corner (i.e., a reflexive response to the stimulus).

2. Eye Tracking-based Test Paradigm

  1. Choose an eye tracking system suitable for pediatric populations (e.g., non-invasive, tolerance of head movements, and ease of use)12. This generally entails remote infrared eye trackers (e.g., Tobii T60XL, SMI RED)10,11.
  2. Choose a wide angle size computer monitor to fully display each stimulus (i.e., minimum visual angle of 24º x 30º at 60 cm viewing distance). The remote eye tracker is either integrated with the monitor, or can be attached separately to a monitor.
    NOTE: Remote eye trackers emit infrared light and estimate gaze position from the corneal reflection. An eye tracking sampling rate of ~60 Hz is generally sufficient to study patterns of gaze behavior in children.
  3. Assemble a mobile measurement set-up by connecting a monitor and the remote eye tracking system to a laptop or desktop PC.
  4. Install a compatible software program on the PC (e.g., Tobii Studio, iView) for the presentation of visual stimuli and the recording of eye movements.
  5. Design a test sequence containing all stimulus types that are required to test oculomotor functions and/ or visual processing functions (see protocol step 1: visual stimuli). The present example contains all stimulus types that are described in step 1, i.e., 9 in total.
    1. Place the different types of visual stimuli in random order in the test sequence, but make sure that the position of the target area alternates from trial to trial. This ensures that a reflexive eye movement to the target is required on every trial.
    2. Present each stimulus at least 4 times (i.e., with the target area at least once in every quadrant), and for at least 4 sec, to allow sufficient time to make an eye movement response. In the present example, the Cartoon stimuli are shown 16 times whereas all other stimuli are shown 4 times. This adds up to a total of 48 stimulus presentations and a total testing time of ~3.5 min.
      NOTE: Repeated presentations increase the chance of sampling sufficient gaze points for each stimulus and each target area in the child's visual field. In general, the availability of gaze data for at least 25% of stimulus presentations is needed to ensure reliable results21.
    3. Make sure testing time per sequence is not longer than ~5 min, because once a test sequence is running, it cannot be paused. It is preferable to make two sequences that can be run in succession, to provide a rest period halfway.
      NOTE: To maximize attention during the test, present audio or audiovisual cues near the monitor in between, but not simultaneously with, the presentation of visual stimuli. Children with visual impairments are often particularly sensitive and responsive to auditory cues, so such cues might enhance test attentiveness in this population.
  6. Apply the test sequence(s) in the eye tracker software. First, select the type of stimulus to be added to the timeline of the eye tracker software: image or movie. Next, select the desired stimulus from the folder in which it is located and click 'Add'. Repeat these steps until all stimuli have been added.
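The sequencing constraints of steps 2.5.1-2.5.2 (random stimulus order, target quadrant alternating from trial to trial) can be sketched as follows. The function `build_sequence` and its arguments are illustrative, not part of the published paradigm; a fuller implementation would additionally balance quadrants so that each stimulus type appears at least once in every quadrant.

```python
import random

def build_sequence(stimuli, cartoon_reps=16, other_reps=4, seed=None):
    """Return a randomized trial list: each trial = (stimulus, quadrant 0-3).
    Consecutive trials never repeat the same target quadrant, so every
    trial requires a fresh orienting eye movement."""
    rng = random.Random(seed)
    trials = []
    for s in stimuli:
        trials += [s] * (cartoon_reps if s == "cartoon" else other_reps)
    rng.shuffle(trials)                       # random stimulus order
    quadrants, prev = [], None
    for _ in trials:
        q = rng.choice([c for c in range(4) if c != prev])  # alternate target
        quadrants.append(q)
        prev = q
    return list(zip(trials, quadrants))
```

With the 9 stimulus types of step 1 this yields the 48 trials (~3.5 min at 4 sec per trial) described above.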

3. Running the Eye Tracking Experiment

  1. Attach the eye tracker monitor with a flexible LCD arm to a solid table or wall. Choose an arm that can move in all six degrees of freedom (i.e., 3 translations, 3 rotations).
  2. Position children at a short distance (generally ~60 cm) from the monitor to ensure efficient pupil tracking of both eyes.
  3. Adjust the monitor position to be perfectly perpendicular to the child's eyes. With an LCD arm this is possible even when the child is lying or sitting in a pram or in a wheelchair.
    NOTE: This set-up allows the assessment of very young and intellectually disabled children, since it does not require a particular body posture, verbal communication or active participation. Certain oculomotor impairments (e.g. nystagmus) are characterized by preference positions of the head in order to compensate for deviant eye positions (e.g., torticollis). The ability to adjust the eye tracker monitor to individual head position enables accurate pupil tracking in this group of children.
  4. Check the quality of pupil reception. This is generally indicated by the presence of two markers representing the child's eyes (e.g., white dots). If the two markers are clearly visible and do not regularly disappear, quality is sufficient. In a separate display, check the distance of the eyes to the monitor (preferably ~60 cm).
    NOTE: Most eye trackers record the gaze position of each eye separately and compensate for free head movements. Pupil signal reception is in general not compromised in children who wear glasses or contact lenses, in children with one or two functioning eyes, or in children with strabismus.
  5. Start the eye tracker software calibration procedure to align the gaze positions with predefined positions on the monitor, prior to start of the measurement. In most eye tracker software packages this calibration procedure consists of the presentation of moving dots in predefined areas of the monitor, which have to be fixated. For children, a version with cartoons or looming dots can be used to improve visual attention.
    NOTE: Although calibration procedures for children have improved significantly, they can still be challenging to perform in young children and children with certain eye- or behavioral disorders.
  6. Check the quality of the pre-set calibration. When the quality of calibration is poor (e.g., due to excessive head movements, lack of proper fixations, or a deviant gaze or head position), no recording can be made. To circumvent this, apply a post-calibration procedure after the recording has been finished, prior to further data analysis (see Discussion section).
  7. Before starting the test recording, activate the 'live viewer': a separate window that shows the child's eye movement responses to the test stimuli by superimposing the gaze signal on the video recording.
  8. Activate a web cam that is directed at the child, to observe and record the child's general behavior during the test. Such a recording provides an overview of the child's visual attention, behavior, fatigue, and environmental conditions.
  9. Prior to starting the test, tell the child he or she will be 'watching television'. No specific instructions are necessary during the test.
  10. During test execution, observe the child's physical behavior and eye movement responses. This can be done by observing behaviorally in real-time, or by observing the recordings made with the web cam.
    1. When the pupil signal disappears during test execution, reposition either the child or the monitor to resume proper pupil detection.
    2. When a child is not paying attention to the monitor, verbally encourage the child to watch the monitor. Do not direct the attention of the child directly to the target area; direct the child's gaze solely to the general location of the eye tracker monitor.
  11. After test execution, replay the gaze recording off-line to observe the gaze responses to the presented stimuli. This is a first step in characterizing the child's visual orienting behavior.
    NOTE: A multitude of parameters are recorded continuously by the eye tracker software during total testing time. Essential parameters that need to be exported to perform the data analysis for the present paradigm are: time stamps, viewing distance between both eyes and the monitor, the position of the left and right eye on the monitor (in x- and y-coordinates), validity of the gaze data, and the timing and position of presented stimuli (i.e., events).
  12. Per subject, export and store the recorded time-based data on eye movement characteristics (gaze data such as viewing distance and gaze positions), and separately the time-based list of presented visual stimuli (event data such as stimulus positions). Make sure to export the two data files as text files and convert them into a data spreadsheet (e.g., save as an Excel file).
    NOTE: The two text files (event data and gaze data) are combined using their corresponding time stamps, and are converted into a set of quantitative parameter values with a self-written software program (see next section). Compared to standard eye tracker analysis software, such parameters provide a more precise and quantitative eye movement analysis, aimed at detailed visual and cognitive processes.

4. Quantitative Analysis of Eye Movements

NOTE: The present protocol is specific to a self-written software program. In order to replicate it, one should write such a software program, e.g., in MATLAB or Python, to quantify the child's visual orienting behavior. In the software program, the following steps are performed for every stimulus type. The present example is focused on Cartoon; the same protocol is applicable to other stimulus types.

  1. Post-calibrate the Gaze Data
    1. Open MATLAB. Select the stimulus to analyze the gaze data, by typing in '1' next to the stimulus of choice.
    2. Press Run. In the appearing pop-up menu, select the option 'Post-calibrate the data'. A list with gaze data files per subject appears. Select gaze data of one subject and press 'Open'.
    3. From the next pop-up menu, select which eye(s) to analyze: Left, Right, or Both. The program now generates a scatter plot of all recorded gaze positions and target positions, over the total stimulus presentation time.
    4. Check whether gaze positions correctly overlap with the corresponding target positions. If this calibration is correct, press 'Yes'. Otherwise, press 'No'. This will start the option to perform a post-calibration.
    5. Translate the center of gaze points to the center of the monitor, by clicking once on the center of gaze points. This center point is located exactly in the middle of the vertical- and horizontal axes.
    6. Scale the gaze positions to the corresponding target positions by clicking the center of gaze points in each of the four target areas once (i.e., the 4 quadrants).
    7. Check again whether gaze positions correctly overlap with the corresponding target positions. If this is the case, indicate in the next pop-up menu that calibration has been performed correctly, by pressing 'Yes', after which the calibrated gaze data is saved. Otherwise, press 'No', after which post-calibration starts again from step 4.1.5.
      NOTE: After post-calibration, multiple gaze responses are available per stimulus type and per subject. These can be used to calculate quantitative parameters of visual processing. Prior to calculation of these parameters, verify that the gaze responses were made to the target area (i.e., that the specific stimulus has been seen by the child).
  2. Determine Whether the Stimulus has been Seen
    1. Per stimulus presentation of each subject, the corresponding gaze data that was recorded during total presentation time is visualized in a graph (Figure 2). Verify whether this stimulus has been seen, by checking the criteria that are stated in Table 1, and that are visualized in Figure 2. If the eye movement response adheres to the criteria, i.e., the stimulus can be classified as seen, click 'Accept' in the pop-up menu. If the eye movement response is not in accordance with the criteria, click 'Reject'.
    2. Simultaneously, plot all fixation points belonging to the presented stimulus and the corresponding target area (i.e., quadrant) in a second graph. Inspect visually whether the fixation points are located in the correct quadrant.
    3. Continue with the subsequent stimulus presentation, and perform steps 4.2.1 and 4.2.2 for all available eye movement responses. After manually checking the eye movement responses, the software program calculates three outcome parameters: RTF, FD, and GFA (Figure 3).
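The translate-and-scale post-calibration of steps 4.1.5-4.1.6 can be sketched as follows. This is a minimal illustration, not the original MATLAB program: it assumes coordinates expressed in degrees relative to the monitor center, and a single per-axis linear gain fitted to the four clicked quadrant centroids.

```python
import numpy as np

def post_calibrate(gaze, clicked_center, clicked_quadrants, true_quadrants):
    """Post-calibrate raw gaze positions: translate the clicked center of
    the gaze cloud to the monitor center (step 4.1.5), then scale so the
    clicked centroids in the four quadrants land on the true target
    centers (step 4.1.6). All coordinates in degrees, monitor center at
    the origin; gaze and quadrant inputs are (N, 2) arrays of (x, y)."""
    gaze = np.asarray(gaze, float) - np.asarray(clicked_center, float)
    clicked = np.asarray(clicked_quadrants, float) - np.asarray(clicked_center, float)
    true = np.asarray(true_quadrants, float)
    # per-axis gain: least-squares fit of true = gain * clicked
    gain = (clicked * true).sum(axis=0) / (clicked ** 2).sum(axis=0)
    return gaze * gain
```

After this transform, the scatter plot of gaze positions should overlap the target positions, which is the visual check performed in step 4.1.7.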

Figure 2
Figure 2. Eye movement response to the target area of a stimulus. One eye movement trace (horizontal and vertical directions combined) in distance from the center of the target area (in degrees, y-axis) over stimulus presentation time (in ms, x-axis). The dotted line represents the border of the target area (6° radius). Letters indicate criteria to establish whether the stimulus has been seen: (A) Gaze signal in the first 500 msec; (B) Gaze was not in the target area before 120 msec; (C) Gaze inside the target area for ≥200 msec. Note that in this figure, the depicted presentation time is max 2,000 msec to visualize the first, reflexive response. During testing, total presentation time of all stimuli was 4,000 msec.

Criterion A: verify that the gaze signal has been recorded for ≥500 msec after stimulus onset. Rationale: capture reflexive orienting responses.
Criterion B: verify that gaze did not enter the target area <120 msec after stimulus onset, and was not already inside the target at the start of stimulus presentation. Rationale: exclude correct performance based on chance.
Criterion C: verify that gaze was in the target area for ≥200 msec. Rationale: ensure fixation on the target.
Criterion D: verify that gaze entered the target area within a time window of 1,500 msec, and that fewer than 4 saccades were made. Rationale: exclude visual search behavior.

Table 1: Criteria to establish whether a stimulus has been seen. Criteria A, B, and C are visualized in Figure 2.
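The criteria of Table 1 can be expressed in code along the following lines. This is a hypothetical re-implementation, not the original program: the saccade count of criterion D is not reproduced here, and the fixed sample interval is an assumption.

```python
def stimulus_seen(t_ms, dist_deg, radius=6.0, sample_ms=16.7):
    """Classify one trial as seen/not seen using the Table 1 criteria.
    t_ms: sample times since stimulus onset (msec); dist_deg: distance of
    gaze from the target-area center at each sample (degrees)."""
    inside = [d <= radius for d in dist_deg]
    # A: gaze signal available over at least the first 500 msec
    if not any(t >= 500 for t in t_ms):
        return False
    # B: not already in the target at onset, nor within 120 msec of onset
    if any(i and t < 120 for t, i in zip(t_ms, inside)):
        return False
    # D (entry window): gaze must enter the target within 1,500 msec
    entry = next((t for t, i in zip(t_ms, inside) if i), None)
    if entry is None or entry > 1500:
        return False
    # C: total time inside the target area must reach 200 msec
    time_inside = sum(sample_ms for i in inside if i)
    return time_inside >= 200
```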

Figure 3
Figure 3. Visualization of the quantitative parameters RTF, FD, and GFA. One eye movement trace in distance from the center of the target area (in degrees, y-axis) over stimulus presentation time (in msec, x-axis). The vertical red line represents the time at which gaze entered the target area; i.e., Reaction Time to Fixation (RTF). The horizontal red line represents the total time gaze was fixated on the target area; i.e., Fixation Duration (FD). The vertical red arrow represents the width of the fixation trace, in degrees of visual angle, i.e., Gaze Fixation Area (GFA).
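A sketch of how RTF, FD, and GFA can be computed from a single gaze trace is given below. The published calculations are given in the cited studies; in particular, GFA is approximated here as twice the standard deviation of gaze scatter around the fixation centroid, which is an assumption rather than the published definition.

```python
import numpy as np

def response_parameters(t_ms, x_deg, y_deg, target_center, radius=6.0,
                        sample_ms=16.7):
    """Compute the three outcome parameters for one gaze trace.
    Returns (RTF, FD, GFA) in (msec, msec, degrees), or None if gaze
    never entered the target area (stimulus not seen)."""
    t = np.asarray(t_ms, float)
    dx = np.asarray(x_deg, float) - target_center[0]
    dy = np.asarray(y_deg, float) - target_center[1]
    inside = np.hypot(dx, dy) <= radius
    if not inside.any():
        return None
    rtf = float(t[inside][0])                        # Reaction Time to Fixation
    fd = float(inside.sum() * sample_ms)             # Fixation Duration
    fx, fy = dx[inside], dy[inside]
    gfa = float(2.0 * np.hypot(fx.std(), fy.std()))  # Gaze Fixation Area proxy
    return rtf, fd, gfa
```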

Representative Results

The presented method has been applied in two populations of children: a control group of 337 children without visual impairments (mean age (SD) = 4.8 (3.3) years), and a group of 119 children with visual impairments (mean age (SD) = 8.10 (2.96) years) who were recruited at a visual rehabilitation center (Royal Dutch Visio, the Netherlands). Of these children, 74 had ocular visual impairment and 45 had cerebral visual impairments. The results of all control children are visualized in Figures 4-6, separately for reaction time, fixation duration, and gaze fixation area. Reference limits (indicated by black lines) were constructed by fitting a logarithmic function to the control data based on age. These figures serve as a basis for characterizing visual processing functions in children with visual impairments, in terms of impaired or intact function.
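The construction of the age-dependent reference limits can be sketched as follows: fit a logarithmic function of age to the control data, then shift the fit by a multiple of the residual standard deviation. The function name and the 1.96 SD shift are assumptions for illustration; the published fits may differ.

```python
import numpy as np

def reference_limit(age_yr, value, z=1.96, upper=True):
    """Fit value = a + b*log(age) to control data and return a callable
    reference limit: the fit shifted up (or down) by z residual SDs."""
    age = np.asarray(age_yr, float)
    val = np.asarray(value, float)
    b, a = np.polyfit(np.log(age), val, 1)      # slope, intercept
    resid_sd = (val - (a + b * np.log(age))).std()
    sign = 1.0 if upper else -1.0
    return lambda x: a + b * np.log(np.asarray(x, float)) + sign * z * resid_sd
```

An upper limit (as for RTF and GFA) uses `upper=True`; a lower limit (as for FD) uses `upper=False`.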

The parameter reaction time to fixation (RTF) differentiates between children with and without visual impairments, and between distinct types of visual impairments. RTF is a measure for the time needed to process visual information and execute an eye movement (for calculations refer to a previous study13). The lower the RTF value, the faster the eye movement response. Good repeatability of RTF has been shown in a group of typically developing children from 0-12 years13,21,22, and in children with various types of visual impairments21. Figure 4 shows average RTF to the dynamic Cartoon stimulus over age, for control children, children with cerebral visual impairment (CVI), and children with ocular visual impairment (OVI). RTF values are significantly higher in children with visual impairments than in children without (mean difference = 85 msec; t = -13.91, p <0.001, Cohen's d = 1.32) and in children with CVI compared to OVI (mean difference = 99 msec; t = -6.90, p <0.001, Cohen's d = 1.25). These results confirm previously published findings on RTF in subgroups of the present dataset20,24,25.

Figure 4
Figure 4. Average RTF in children with and without visual impairments. Average RTF values in ms (y-axis) per child, over age (x-axis). Values are shown separately for control children (open circles), children with OVI (black circles), and children with CVI (crosses). The black line represents the upper reference limit of RTF in the control group. RTF values above this line are regarded as deviant, i.e., long reaction times.

Fixation duration is the total amount of time that gaze was fixated within the target area. FD is a measure for sustained visual attention, and depends on stimulus presentation time, which is 4 sec in the present example. The parameter fixation duration (FD) also differentiates between children with and without different types of visual impairments. Figure 5 shows mean FD over age, separately for control children, children with CVI, and children with OVI. FD is significantly shorter in children with visual impairments than in children without (mean difference = 850 msec; t = 11.72, p <0.001, Cohen's d = -1.12), and significantly shorter in children with CVI than in children with OVI (mean difference = 325 msec; t = 2.44, p <0.05, Cohen's d = -0.50). This confirms previous results comparing children with and without visual impairments (Kooiker MJG et al., submitted).

Figure 5
Figure 5. Average FD in children with and without visual impairments. Average FD values in ms (y-axis) per child, over age (x-axis). Values are shown separately for control children (open circles), children with OVI (black circles), and children with CVI (crosses). The black line represents the lower reference limit of FD in the control group. FD values below this line are regarded as deviant, i.e., short fixation duration.

The parameter gaze fixation area (GFA) is sensitive to disturbances in oculomotor control, in particular nystagmus. GFA represents the size of the area of fixation in degrees, and is a measure for fixation accuracy (for calculations see previous studies13,23). A small area of fixation indicates high fixation accuracy. GFA depends on the size of the stimulus and the corresponding target area (i.e., a 6º radius in the present example). Good repeatability of GFA has been shown in a group of typically developing children from 0-12 years13,21, and in children with various types of visual impairments21. Figure 6 shows mean GFA in response to the cartoon stimulus over age, separately for control children, children with the oculomotor impairment nystagmus, and children with visual impairments but without nystagmus. GFA values are significantly larger, i.e., lower fixation accuracy, in children with visual impairments compared to children without (mean difference = 1.34º; t = -25.09, p <0.001, Cohen's d = 2.37). In addition, children with nystagmus have lower fixation accuracy than children without nystagmus but with other types of visual impairment (mean difference = 0.71º; t = 5.03, p <0.001; Cohen's d = 1.04). This is consistent with previously published findings on GFA in subgroups of the present dataset20,24,25.

Figure 6
Figure 6. Average GFA in children with and without visual impairments. Average GFA values in degrees (y-axis) per child, over age (x-axis). Values are shown separately for control children (open circles), children with visual impairment and nystagmus (asterisk), and children with visual impairment without nystagmus (black diamond). The black line represents the upper reference limit of GFA in the control group. GFA values above this line are regarded as deviant, i.e., low fixation accuracy.

Discussion

The presented measurement set-up combined with quantitative eye movement analysis provides a distinct characterization of visual processing functions in various groups of children with oculomotor and visual impairments. The key feature of this paradigm is that performance is based on eye movement responses to visual stimuli that are triggered in a reflexive manner. No specific verbal instructions are given and there is no need for children to respond verbally. The parameters RTF, GFA and FD show significant differences between groups of typically developing and visually impaired children, despite the limited spread of parameter values within each group (Figures 4-6). Thus, depending on the evaluated parameter, some typically developing children can show deviant performance, while some children with visual impairments show 'normal' performance. Ultimately, multiple outcome measures in response to multiple visual modalities should be considered on an individual level. A summary of all outcome measures provides a unique characterization of visual information processing capacities, which may be converted into a visual profile in children from 6 months of age.
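The per-parameter comparison against one-sided reference limits described above can be sketched as follows. This Python fragment is only illustrative: the parameter names (RTF, FD, GFA) come from the text, but the numeric limits are hypothetical placeholders, not the study's age-matched norms, and a real implementation would select limits by age group.

```python
# Hypothetical one-sided reference limits (illustrative values only;
# the study derives age-matched limits from its control group).
REFERENCE_LIMITS = {
    "RTF_ms":  {"limit": 400.0, "deviant_if": "above"},  # slow reaction time to fixation
    "FD_ms":   {"limit": 300.0, "deviant_if": "below"},  # short fixation duration
    "GFA_deg": {"limit": 4.0,   "deviant_if": "above"},  # low fixation accuracy
}

def visual_profile(measurements):
    """Flag each outcome parameter as 'normal' or 'deviant' relative to
    its one-sided reference limit, as in Figures 4-6."""
    profile = {}
    for name, value in measurements.items():
        ref = REFERENCE_LIMITS[name]
        if ref["deviant_if"] == "above":
            deviant = value > ref["limit"]
        else:
            deviant = value < ref["limit"]
        profile[name] = "deviant" if deviant else "normal"
    return profile

# Example: one child may be slow to fixate (deviant RTF) while fixation
# duration and accuracy remain within the reference range.
child = visual_profile({"RTF_ms": 500.0, "FD_ms": 350.0, "GFA_deg": 3.0})
```

Collecting such flags across stimuli and parameters yields the kind of per-child summary of intact and impaired functions that the text proposes as a visual profile.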

Several studies have shown the value of remote eye tracking in vulnerable populations of children for inferring attentional or psychological capacities9,12,18. Whereas most studies rely on behavioral observations and the use of instructions, a distinct feature of the current paradigm is its nonverbal, quantitative approach. Critical steps within the protocol therefore include the stimuli that are based on preferential looking, the mobile measurement set-up, and the custom calibration and analysis software. Extending observation-based results with these analysis methods provides standardized and detailed results on visual processing functions. This is in line with work on the assessment of infant visual acuity with an eye tracker14, and work on gaze control in various disorders7. The method is flexible and enables mobile assessment, which is indispensable when performing clinical assessments in young children or children with multiple disabilities. Therefore, it is suited to measuring oculomotor and visual processing capacities in virtually all children who are capable of watching a monitor.

The significance of this method with respect to existing visual diagnostic methods (i.e., validity) has been studied as a first step toward clinical implementation. The present paradigm was combined with currently used visual function assessment (VFA) in children. Observations of oculomotor and visual functions based on eye movement recordings were comparable with standard behavioral observations of these functions. Moreover, eye tracking parameters, e.g., fixation duration and saccadic direction, provided additional value in characterizing oculomotor and visual performance in children during VFA (Kooiker MJG et al., 2015, submitted). The major gain of the presented method lies in the possibility of assessing more visual functions than is currently done in visual function assessments at a young age, and of assessing them in a quantitative manner26. A limitation with respect to existing methods is that, without adaptations, it is not yet possible to thoroughly assess visual acuity or visual field with the present test battery14.

Although we limited ourselves to the presentation of results from cartoon stimuli, in future applications different visual modalities can be tested using other stimuli (e.g., distinct forms, motion, color and contrast information)20,22,25. That way, specific visual processing areas beyond the primary visual pathways are targeted, such as visual association areas in temporal or parietal cortex. A limitation of the method is that the present visual stimuli merely trigger the detection of visual input and invoke only the initial stage of visual processing. These stimuli do not target higher-order functions that become relevant after stimulus detection and that are normally measured with visual perception tests. Although their execution without the use of communication is challenging, an eye tracking-based paradigm is a promising future format for the detection of perception-related information, e.g., visual search, visual memory, or selective visual attention.

In sum, detailed eye movement responses to various types of visual stimulation provide a comprehensive characterization of visual information processing functions, early in development. Consequently, for each child an individual visual profile in terms of intact and impaired functions can be created. Such a profile may provide detailed information about strengths and weaknesses in oculomotor and visual function. It may be used as a starting point for support in daily life, and for teacher and caregiver education. The quantitative information that has become available with this method can be advantageous for following visual development over time, and for monitoring visual interventions and rehabilitation programs.

Disclosures

The authors declare that they have no competing financial interest.

Acknowledgements

The authors thank daycare centers (Wasko, Alblasserwaard) for their support in recruiting the control group, and Mark Vonk for his help in data collection in the control group. The authors also thank the children from the control group and the children who are clients of Royal Dutch Visio for participation in the study. The authors are grateful to the children and their parents for participation in the video.

The development of the method was supported by a grant from the Novum Foundation: a non-profit organization providing financial support to (research) projects that improve the quality of life of individuals with a visual impairment (www.stichtingnovum.org). Financial support for the current study was provided by 'ZonMw Inzicht' (Netherlands Organization for Health Research and Development-Insight Society), grant number: 60-00635-98-10.

Materials

Name | Company | Catalog Number | Comments
Tobii T60 XL | Tobii Technology (http://www.tobii.com) | http://www.tobii.com/en/eye-tracking-research/global/products/hardware/tobii-t60xl-eye-tracker/ | remote infrared eye tracker
Tobii Studio | Tobii Technology (http://www.tobii.com) | http://www.tobii.com/en/eye-tracking-research/global/products/software/tobii-studio-analysis-software/ | eye tracker software
MATLAB | MathWorks Inc. | http://nl.mathworks.com/products/matlab/ | data analysis software


References

  1. Hyvärinen, L. Considerations in evaluation and treatment of the child with low vision. Am. J. Occup. Ther. 49, (9), 891-897 (1995).
  2. Hamill, D. D., Pearson, N. A., Voress, J. K. Developmental Test of Visual Perception. 2nd edn, Pro-Ed. Austin, TX. (1993).
  3. Yarbus, A. L. Eye movements and vision. Plenum Press. New York. (1967).
  4. Noton, D., Stark, L. Scanpaths in eye movements during pattern perception. Science. 171, (3968), 308-311 (1971).
  5. Liversedge, S. P., Findlay, J. M. Saccadic eye movements and cognition. Trends Cogn. Sci. 4, (1), 6-14 (2000).
  6. Corbetta, M., Shulman, G. L. Control of goal-directed and stimulus-driven attention in the brain. Nat. Rev. Neurosci. 3, (3), 201-215 (2002).
  7. Tseng, P. H., et al. High-throughput classification of clinical populations from natural viewing eye movements. J. Neurol. 260, (1), 275-284 (2013).
  8. Karatekin, C. Eye tracking studies of normative and atypical development. Dev. Rev. 27, (3), 283-348 (2007).
  9. Rommelse, N. N., Vander Stigchel, S., Sergeant, J. A. A review on eye movement studies in childhood and adolescent psychiatry. Brain Cogn. 68, (3), 391-414 (2008).
  10. Gredebäck, G., Johnson, S., von Hofsten, C. Eye tracking in infancy research. Dev. Neuropsychol. 35, (1), 1-19 (2010).
  11. Aslin, R. N., McMurray, B. Automated corneal-reflection eye tracking in infancy: methodological developments and applications to cognition. Infancy. 6, (2), 155-163 (2004).
  12. Sasson, N. J., Elison, J. T. Eye tracking young children with autism. J. Vis. Exp. (61), e3675 (2012).
  13. Pel, J. J., Manders, J. C., van der Steen, J. Assessment of visual orienting behaviour in young children using remote eye tracking: methodology and reliability. J. Neurosci. Meth. 189, (2), 252-256 (2010).
  14. Jones, P. R., Kalwarowsky, S., Atkinson, J., Braddick, O. J., Nardini, M. Automated measurement of resolution acuity in infants using remote eye-tracking. Invest. Ophth. Vis. Sci. 55, (12), 8102-8110 (2014).
  15. Fantz, R. L. Visual perception from birth as shown by pattern selectivity. Ann. N. Y. Acad. Sci. 118, (21), 793-814 (1965).
  16. Wattam-Bell, J., et al. Reorganization of global form and motion processing during human visual development. Curr. Biol. 20, (5), 411-415 (2010).
  17. Falck-Ytter, T., von Hofsten, C., Gillberg, C., Fernell, E. Visualization and analysis of eye movement data from children with typical and atypical development. J. Autism. Dev. Disord. 43, (10), 2249-2258 (2013).
  18. Ahtola, E., et al. Dynamic eye tracking based metrics for infant gaze patterns in the face-distractor competition paradigm. Plos One. 9, (5), e97299 (2014).
  19. Jäkel, F., Wichmann, F. A. Spatial four-alternative forced-choice method is the preferred psychophysical method for naive observers. J. Vision. 6, (11), 1307-1322 (2006).
  20. Pel, J. J., et al. Orienting responses to various visual stimuli in children with visual processing impairments or infantile nystagmus syndrome. J. Child Neurol. 29, (12), 1632-1637 (2013).
  21. Kooiker, M. J., van der Steen, J., Pel, J. J. Reliability of visual orienting response measures in children with and without visual impairments. J. Neurosci. Meth. 233, 54-62 (2014).
  22. Boot, F. H., Pel, J. J., Evenhuis, H. M., van der Steen, J. Quantification of visual orienting responses to coherent form and motion in typically developing children aged 0-12 years. Invest. Ophth. Vis. Sci. 53, (6), 2708-2714 (2012).
  23. Oliveira, L. F., Simpson, D. M., Nadal, J. Calculation of area of stabilometric signals using principal component analysis. Physiol. Meas. 17, (4), 305-312 (1996).
  24. Pel, J., et al. Effects of visual processing and congenital nystagmus on visually guided ocular motor behaviour. Dev. Med. Child Neurol. 53, (4), 344-349 (2011).
  25. Kooiker, M. J., Pel, J. J., van der Steen, J. The relationship between visual orienting responses and clinical characteristics in children attending special education for the visually impaired. J. Child Neurol. 30, (6), 690-697 (2014).
  26. Ricci, D., et al. Early assessment of visual function in full term newborns. Early Hum. Dev. 84, (2), 107-113 (2008).
