
Behavior

Using the Visual World Paradigm to Study Sentence Comprehension in Mandarin-Speaking Children with Autism

Published: October 3, 2018 doi: 10.3791/58452
* These authors contributed equally

Summary

We present a protocol to examine the use of morphological cues during real-time sentence comprehension by children with autism.

Abstract

Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying their sentence comprehension remain largely unclear. The present study presents a protocol to examine the sentence comprehension abilities of preschool children with autism. More specifically, the visual world paradigm of eye-tracking is used to explore the children's moment-to-moment sentence comprehension. The paradigm has multiple advantages. First, it is sensitive to the time course of sentence comprehension and thus can provide rich information about how sentence comprehension unfolds over time. Second, it requires minimal task and communication demands, so it is ideal for testing children with autism. To further minimize the computational burden on the children, the present study measures eye movements that arise as automatic responses to linguistic input rather than eye movements that accompany conscious responses to spoken instructions.

Introduction

Sentence comprehension relies on the ability to rapidly integrate different types of linguistic and non-linguistic information1,2,3,4,5,6,7,8,9,10,11. Prior research has found that young typically developing (TD) children incrementally compute the meaning of a sentence using both linguistic and non-linguistic cues12,13,14,15,16,17,18,19. However, there is currently a paucity of research exploring how preschool children with autism understand sentences using different types of cues. The mechanisms underlying their sentence comprehension remain largely unclear.

It is generally acknowledged that there is enormous variability in the language abilities of children with autism, especially in their expressive language: some children with autism have relatively good structural language, some exhibit deficits in both lexical and grammatical domains, some demonstrate impaired grammar, and some never acquire functional spoken language20,21,22,23,24,25. In addition, prior research suggests that their receptive language is more impaired than their expressive language26,27,28,29. Most research that has assessed the sentence comprehension abilities of children with autism has used offline tasks (e.g., standardized tests, caregiver reports), and the findings suggest that their sentence comprehension abilities might be particularly impaired30,31,32,33,34,35,36,37. However, it has been pointed out that poor comprehension performance is more likely related to these children's overall lack of social responsiveness than to language processing deficits38,39. Note that the offline tasks used in previous research often require high response demands or interactions with the experimenters, which can pose particular difficulties for children with autism, who often exhibit challenging behaviors or symptoms; these high task and communication demands may interact with their symptoms and mask their comprehension abilities [for an overview of methods for assessing receptive language in children with autism, see Kasari et al. (2013)27 and Plesa-Skwerer et al. (2016)29]. Thus, experimental paradigms that better control these confounding factors are required to further understand the nature of sentence-processing mechanisms in autism.

In the current study, we present an eye-tracking paradigm that can directly and effectively assess the sentence comprehension abilities of children with autism. Compared to offline tasks, eye-tracking is a more sensitive paradigm for demonstrating children's comprehension abilities. It is sensitive to the time course of the comprehension process and requires no explicit motor or language responses from the participant, making it a promising method for studying younger children and minimally verbal children with autism. In addition, we record eye movements that arise as automatic responses to linguistic input rather than eye movements that accompany conscious responses to that input.


Protocol

This study has been approved by the Ethics Committee of the School of Medicine at Tsinghua University. Informed consent has been obtained from all individual participants included in the study.

1. Participant Screening and Study Preparation

  1. Recruit Mandarin-speaking preschool children with autism.
    NOTE: Their diagnoses should be confirmed by pediatric neurologists at hospitals using the DSM-IV-TR40 or DSM-541 and, ideally, the number of participants should be no fewer than 15. The present study recruited 25 participants with confirmed diagnoses.
  2. Evaluate each participant independently using gold-standard diagnostic instruments like the Autism Diagnostic Observation Schedule42.
  3. Measure the verbal IQ of participants using the Wechsler Preschool and Primary Scale of Intelligence-IV (CN), a standardized IQ test designed for Mandarin-speaking children between the ages of 2;6 and 6;11 (years;months)43.
    NOTE: The verbal IQ scores of the children with autism in the present study were all above 80. They were all high-functioning children with autism.
  4. Record 100 utterances for each participant, from their interactions either with parents or with teachers. Calculate each participant's mean length of utterance (MLU) by dividing the total number of words in these utterances by the number of utterances, i.e., by 100.
    NOTE: MLU indexes the participant's sentence complexity level. A minimal computational sketch is given below.
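    For illustration, the following is a minimal R sketch of the MLU computation. The three-utterance transcript is a toy example; a real sample would contain 100 word-segmented Mandarin utterances per participant (the sketch assumes transcripts are already segmented into words separated by spaces).

      # Minimal R sketch of the MLU computation in step 1.4.
      # Toy transcript; real data: 100 word-segmented utterances per child.
      utterances <- c("baba bao wawa", "wawa ku le", "mama lai le")
      n_words <- sum(lengths(strsplit(utterances, "\\s+")))  # total word count
      mlu <- n_words / length(utterances)                    # words per utterance
      mlu  # 3 for this toy transcript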
  5. Recruit TD children. Ideally match the TD children to the children with autism for age (TD group 1), MLU (TD group 2), and verbal IQ (TD group 3).
    NOTE: The present study recruited 50 TD children (25 boys and 25 girls) from local kindergartens: 25 were matched to the children with autism for age, and 25 were matched for both MLU and verbal IQ. One common way to verify matching is sketched below.
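    A matched comparison group should not differ reliably from the autism group on the matching variable; a two-sample t-test is one standard check. The R sketch below uses simulated ages for illustration only (the values and variable names are hypothetical, not the study's data).

      # Minimal R sketch: checking that two groups are matched on age.
      # Ages (in months) are simulated for illustration only.
      set.seed(1)
      asd_age <- rnorm(25, mean = 62, sd = 6)  # hypothetical autism group
      td_age  <- rnorm(25, mean = 62, sd = 6)  # hypothetical age-matched TD group
      t.test(asd_age, td_age)  # matching is supported if the difference is non-significant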

2. Warm-up Session

  1. Invite the participants for a warm-up session before the actual test. Introduce the participant to the research environment and interact with him or her to establish a good rapport.
    NOTE: This can be done on the same day as the testing session or organized on a different day. In the warm-up session, two experimenters are typically involved and interact with the participant using toys and props.

3. Conditions and Experimental Design

  1. Construct the test stimuli. Create 12 target items, each comprising a visual stimulus and two spoken sentences containing the morphological markers BA and BEI, respectively. Construct the spoken sentences using the same structure: morphological marker + noun phrase (NP) + adverb + verb phrase (VP) (see Examples 1a and 1b below).
    NOTE: The marker BA indicates that the following NP is the recipient of the holding event (see 2a), and BEI indicates that the following NP is the initiator of the event (see 2b). The subject NP of a sentence in Mandarin can often be omitted when the referent of the NP is contextually available.

    Example:
    (1) a. BA shizi qingqingdi bao-le      qilai.
             BA  lion   gently       hold         up
             Meaning: Someone gently holds the lion.
         b. BEI shizi qingqingdi bao-le      qilai.
             BEI lion   gently       hold          up
             Meaning: Someone is gently held by the lion.
    (2) a. BA + [NP]Recipient
         b. BEI + [NP]Initiator
    1. Use Pixelmator (or another image editor) to create the visual images. Open Pixelmator by clicking its icon and create a visual image from a template: click Show Details in the template chooser, double-click the template to open it, adjust the width, height, resolution, and color depth from the pop-up menus, enter the relevant parameters, and click OK.
    2. Use Praat (or another audio editor) to construct the spoken sentences. Set up the microphone. Open Praat by clicking its icon. Select Record mono Sound from the New menu and set the recording conditions by selecting the 44100 Hz sampling frequency. Click the Record button.
    3. Record the spoken sentences by asking a native speaker of Beijing Mandarin to produce the sentences in a child-directed manner. Save the recordings by clicking Save.
      NOTE: Typically, 12 to 16 target items are constructed for sentence comprehension studies with children. Test stimuli can be created using other image and audio editors for a visual world study.
  2. Construct visual images, each containing two pictures. The two pictures depict the same type of event involving the same two characters, but with the event roles (initiator vs. recipient) of the characters reversed between the two pictures. Make one picture compatible with the construction containing BA (BA-target event) and one with the construction containing BEI (BEI-target event). An example is provided in Figure 1.
    NOTE: This figure has been reprinted with permission from Zhou and Ma (2018)19.
  3. Counterbalance and randomize: divide the target trials into two experimental lists, such that each participant sees every visual stimulus but hears only one of the two recorded sentences for that stimulus. Counterbalance the spoken sentences containing BA and BEI across the two experimental lists, with 6 constructions containing BA and 6 containing BEI in each list. Add 12 filler items to each experimental list and arrange the target and filler trials in a random order. Randomly assign the participants to the two lists (one way to implement this scheme is sketched below).
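    As an illustration, here is a minimal R sketch of this counterbalancing scheme; the item and filler labels are placeholders, not the study's file names.

      # Minimal R sketch of the counterbalancing in step 3.3.
      set.seed(2)
      items <- 1:12
      list1_marker <- ifelse(items %% 2 == 1, "BA", "BEI")  # 6 BA + 6 BEI in list 1
      list2_marker <- ifelse(items %% 2 == 1, "BEI", "BA")  # complementary versions in list 2
      fillers <- paste0("filler", 1:12)
      # One randomized presentation order for experimental list 1:
      list1 <- sample(c(paste0("target", items, "_", list1_marker), fillers))
      list1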

4. Experimental Procedure

  1. Eye-tracking procedure.
    1. Invite the participants to sit comfortably in front of the display monitor of the remote eye tracker. Set the distance between the participants' eyes and the monitor to approximately 60 cm. Perform the standard calibration and validation procedures by asking the participants to fixate on a grid of five fixation targets in random succession.
    2. Present the participants with a spoken sentence while they are seeing a visual image, as done in the standard visual world paradigm10,44. Use the monocular eye-tracking option by tracking the eye that is on the same side as the illuminator of the eye tracker. Record the participant's eye movements using the eye tracker.
      NOTE: The eye tracker used in the present study allows remote eye-tracking with a sampling rate of 500 Hz.
  2. Testing and measuring.
    1. Test the participants individually. Simply tell the participants to listen to the spoken sentences while they are viewing the pictures. Ask one experimenter to monitor the participant on the computer and one to stand behind the participant and gently rest her hands on the participant's shoulders to minimize the participant's sudden movements.
    2. Measure the participant's eye movements that arise as automatic responses to the linguistic input using the eye tracker.
      NOTE: The task does not ask participants to make any conscious judgments about the spoken sentences to minimize their computational burden. The eye tracker automatically records the eye movements.
    3. Monitoring during the test: use the live viewer mode that the eye tracker displays on the computer screen during the test to observe the participant's looking behavior. Ask the experimenter monitoring data collection via the live viewer to signal to the experimenter standing behind the participant to reorient the participant if his or her eye gaze wanders off the computer screen.

5. Data Treatment and Analysis

  1. Code the participants' fixations in two interest areas. Use Data Viewer to draw the two interest areas: BA-target event area and BEI-target event area (see Figure 1). Open Data Viewer. Select one of the interest area shape icons on the tool bar. Use the mouse to drag a box around the region you want to define as an interest area. Save the interest area in the Interest Area Set folder. Apply the interest area to other visual images.
    NOTE: The event depicted in the upper panel of Figure 1 matches the BA-construction (hence the BA-target event), and the event depicted in the lower panel matches the BEI-construction (hence the BEI-target event). The software used for data coding is Data Viewer, which comes with the eye tracker used in the study. Other data analysis software is also available.
  2. Analyze the eye gaze patterns using Data Viewer.
    1. Open Data Viewer. Choose the sample report function from the menu to set the time windows for analysis (e.g., 200-ms windows in the present study). Use the same function to time-lock the fixation proportions in the interest areas to the onset of the marker for each trial. Export the raw data into an Excel file using the export function from the menu.
    2. Use Excel functions to average the fixation proportions following the onset of the marker for each area, and to compute the fixation proportions in each 200-ms time window over a period of 5200 ms (the mean length of the target sentences + 200 ms) from the onset of the marker for the two areas. Apply linear mixed-effects models to the eye movement data, as detailed in Representative Results below. A scripted version of this aggregation is sketched after the note below.
      NOTE: The use of 200 ms as a time window is based on the standard procedure for analyzing child eye gaze data in the literature12,13,18,19,45,46,47, and it is generally assumed that it takes about 200 ms to observe the effects of linguistic markers on eye movements48.
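      The binning can also be scripted rather than done in Excel. The R sketch below operates on simulated samples; the column names (condition, time_ms, ia), the interest-area labels, and the 2-ms sample spacing (implied by the 500-Hz tracker) are illustrative assumptions about the exported report, not the study's actual file layout.

      # Minimal R sketch: fixation proportions in 200-ms bins over 5200 ms,
      # time-locked to marker onset. Data are simulated for illustration.
      library(dplyr)
      set.seed(3)
      samples <- data.frame(
        condition = rep(c("BA", "BEI"), each = 2600),
        time_ms   = rep(seq(0, 5198, by = 2), 2),   # 500 Hz = one sample per 2 ms
        ia        = sample(c("BA_target", "BEI_target"), 5200, replace = TRUE)
      )
      props <- samples %>%
        mutate(bin = floor(time_ms / 200) * 200) %>%   # assign each sample a 200-ms bin
        group_by(condition, bin) %>%
        summarise(prop_ba_target = mean(ia == "BA_target"), .groups = "drop")
      head(props)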


Representative Results

The present study uses minimal pairs, as in Examples 1a and 1b, to investigate whether and how quickly children with autism can use the event information encoded in the two morphological markers during real-time sentence comprehension. It was predicted that if they are able to use this event information rapidly and effectively, then they should look more at the BA-target event when hearing BA than when hearing BEI, and they should fixate more on the BEI-target event after hearing BEI than after hearing BA.

The comparison between 5-year-olds with autism and their age-matched TD peers is presented in the representative results. Figure 2 shows the average fixation proportions of the TD 5-year-olds on the BA-target event (Panel A) and BEI-target event (Panel B) in the two conditions. Figure 3 summarizes the average fixation proportions of the 5-year-olds with autism.

The figures show that the autism group displayed eye movement patterns similar to those of the age-matched TD group. Both groups exhibited more fixations on the BA-target event when hearing BA than when hearing BEI, with the effect emerging after the onset of the object NP and before the onset of the adverb. Specifically, the effect occurred in the TD group during the window between 1400 and 1600 ms (Figure 2), whereas it occurred in the autism group during the window between 1800 and 2000 ms (Figure 3). The opposite eye movement pattern was found for the BEI-target event in both groups: more fixations on the BEI-target event were observed when hearing BEI than when hearing BA, again after the onset of the object NP and before the onset of the adverb.

Fixation proportions were then transformed using the empirical logit formula49: elogit = ln[(y + 0.5)/(n - y + 0.5)], where y is the number of fixations on the area of interest during a particular temporal bin and n is the total number of fixations in that temporal bin. Linear mixed-effects models were then fitted to the transformed data. Statistical models were computed for the two groups separately based on their fixations in the two interest areas in the critical time windows, with time and marker type (BA versus BEI) treated as fixed effects. Random intercepts and slopes were included for both participants and items50. Models were fitted using the lmer function from the lme4 package (v1.1-12)51 in the R (v3.2.5) software environment52. A Wald test was then used to compute p-values for each fixed effect. This pipeline is sketched below.
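The following R sketch illustrates the transform-and-fit pipeline on simulated bin counts. The data frame and its columns are assumptions, the counts are random noise, and the random-effects structure shown is one common specification consistent with the description above; the original analysis may differ in detail.

    # Minimal R sketch: empirical-logit transform and mixed-effects model.
    # Bin counts are simulated noise, so the fit is for illustration only.
    library(lme4)
    set.seed(4)
    bins <- expand.grid(subject = factor(1:20), item = factor(1:12),
                        marker = c("BA", "BEI"), time = 1:5)
    bins$n <- 50                                    # total fixations per bin
    bins$y <- rbinom(nrow(bins), bins$n, 0.5)       # fixations on the interest area
    bins$elogit <- log((bins$y + 0.5) / (bins$n - bins$y + 0.5))
    m <- lmer(elogit ~ marker * time + (1 + marker | subject) + (1 + marker | item),
              data = bins)
    coefs <- data.frame(coef(summary(m)))
    coefs$p <- 2 * (1 - pnorm(abs(coefs$t.value)))  # Wald p-values for fixed effects
    coefs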

The model results for the TD 5-year-olds in the two interest areas were as follows. In the BA-target event area, hearing BA caused the TD children to look at this event significantly more than hearing BEI did (β = 0.54, p < .001). In addition, there was a significant interaction between marker type and time (β = 0.33, p < .001), indicating that the probability of fixating on the BA-target event increased over time after the onset of BA. The TD children exhibited the opposite eye movement pattern in the BEI-target event area: hearing BEI triggered more fixations on the BEI-target event than hearing BA (β = -0.60, p < .001). Again, there was a significant interaction between marker type and time (β = -0.21, p < .001), suggesting that the TD group's tendency to look at the BEI-target event declined over time after the onset of BA.

The 5-year-olds with autism showed similar eye movement patterns in the two interest areas. Hearing BA triggered more fixations on the BA-target event than hearing BEI (β = 0.50, p < .001), and hearing BEI triggered more looks at the BEI-target event than hearing BA (β = -0.54, p < .001). Like the TD group, the autism group exhibited significant interactions between marker type and time in both interest areas (β = 0.15, p < .01 in the BA-target event area; β = -0.16, p < .01 in the BEI-target event area).

Overall, the eye gaze patterns exhibited by the 5-year-olds with autism provide evidence that they were able to use the event information encoded in the two morphological markers rapidly and effectively during real-time sentence comprehension. The results show that eye movements recorded as automatic responses to linguistic input are a sensitive measure of sentence comprehension abilities in both TD children and children with autism.

Figure 1
Figure 1: Example visual image. (A) Indicates a BA-target event. (B) Represents a BEI-target event. This figure has been reprinted with permission from Zhou and Ma (2018)19.

Figure 2
Figure 2: Average fixation proportions from marker onset in both conditions in TD 5-year-olds. (A) Shows the fixation proportions on the BA-target event. (B) Illustrates the fixation proportions on the BEI-target event.

Figure 3
Figure 3: Average fixation proportions from marker onset in both conditions in 5-year-olds with autism. (A) Shows the fixation proportions on the BA-target event. (B) Illustrates the fixation proportions on the BEI-target event.


Discussion

In the current study, we present an eye-tracking paradigm that can directly and effectively assess the sentence comprehension abilities of children with autism. We found that 5-year-old children with autism, like their age-matched TD peers, exhibited eye gaze patterns that reflect effective and rapid use of linguistic cues during real-time sentence comprehension.

The findings provide evidence that eye-tracking (in particular, the visual world paradigm) is a sensitive measure of real-time sentence comprehension in children with autism. Compared to offline methods, the paradigm has several advantages. First, it is sensitive to the time course of sentence comprehension. Second, it minimizes the task and communication demands involved and is thus better suited for children exhibiting challenging behavioral features. Third, it simply records eye movements as automatic responses to linguistic input without asking participants to provide conscious judgments about the input, significantly reducing the computational burden on the participants.

The visual world paradigm is based on the linking assumption that eye movements in the visual world are synchronized to the real-time processing of concurrent linguistic stimuli. Thus, an effective language comprehension study using the visual world paradigm requires a close mapping between eye gaze patterns in the visual world and referential processing of the spoken language. To ensure this close mapping, it is important, first, to design the visual stimuli such that eye movements on the visual images reflect only the processes underlying comprehension of the spoken language and that other factors that may affect participants' eye movements are well controlled. Second, it is important to time-lock participants' eye movements to the onset of a critical linguistic marker in the spoken language and to make sure that each element of the spoken language, and its boundaries, can be clearly identified for later analyses.

The visual world paradigm has been used successfully to test TD children's language abilities. The present study explored the potential of conducting visual world studies of language comprehension in preschool children with autism. As discussed, these findings provide evidence for the validity and sensitivity of the paradigm in testing linguistic knowledge in children with autism. The findings also invite us to rethink questions surrounding the language comprehension abilities of children with autism. Previous research seems to suggest that the sentence comprehension abilities of children with autism might be severely impaired; however, as noted by Kasari et al.27 and Plesa-Skwerer et al.29, it is often difficult to evaluate the comprehension abilities of children with autism using traditional methods like standardized tests or other offline tasks, because these tasks require high response demands or interactions with the experimenters, which can pose particular difficulties for children with autism. Using the visual world paradigm, the present study shows for the first time that when minimal task and communication demands are involved, young children with autism are able to use linguistic cues effectively and rapidly during real-time sentence comprehension. Their sentence comprehension abilities are far better than previous research has suggested. The findings also provide evidence that the poor comprehension performance of children with autism in past research is perhaps due to a lack of social responsiveness and the high task and communication demands involved in those traditional tasks.

The visual world paradigm can be systematically applied to establish eye gaze patterns associated with language processing in autism, which will help us better understand the nature of sentence processing mechanisms in autism as well as help to identify early clinical markers for autism.


Disclosures

The authors have nothing to disclose.

Acknowledgments

This work was funded by the National Social Science Foundation of China [16BYY076] to Peng Zhou and the Science Foundation of Beijing Language and Culture University under the Fundamental Research Funds for the Central Universities [15YJ050003]. The authors are grateful to the children, parents, and teachers at the Enqi Autism Platform and Taolifangyuan Kindergarten in Beijing, China, for their support in running the study.

Materials

Name: EyeLink 1000 Plus eye tracker
Company: SR Research Ltd.
Comments: The EyeLink 1000 Plus allows remote eye tracking, without a head support. The eye tracker provides information about the participant's point of gaze at a sampling rate of 500 Hz, with an accuracy of 0.5 degrees of visual angle.


References

  1. Altmann, G. T., Kamide, Y. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition. 73 (3), 247-264 (1999).
  2. Altmann, G. T., Kamide, Y. The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing. Journal of Memory and Language. 57 (4), 502-518 (2007).
  3. DeLong, K. A., Urbach, T. P., Kutas, M. Probabilistic word pre-activation during language comprehension inferred from electrical brain activity. Nature neuroscience. 8 (8), 1117-1121 (2005).
  4. Kamide, Y., Altmann, G. T., Haywood, S. L. The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language. 49 (1), 133-156 (2003).
  5. Knoeferle, P., Crocker, M. W., Scheepers, C., Pickering, M. J. The influence of the immediate visual context on incremental thematic role-assignment: Evidence from eye-movements in depicted events. Cognition. 95 (1), 95-127 (2005).
  6. Knoeferle, P., Kreysa, H. Can speaker gaze modulate syntactic structuring and thematic role assignment during spoken sentence comprehension. Frontiers in Psychology. 3, 538 (2012).
  7. Knoeferle, P., Urbach, T. P., Kutas, M. Comprehending how visual context influences incremental sentence processing: Insights from ERPs and picture-sentence verification. Psychophysiology. 48 (4), 495-506 (2011).
  8. Pickering, M. J., Traxler, M. J., Crocker, M. W. Ambiguity resolution in sentence processing: Evidence against frequency-based accounts. Journal of Memory and Language. 43 (3), 447-475 (2000).
  9. Staub, A., Clifton, C. Syntactic prediction in language comprehension: Evidence from either... or. Journal of Experimental Psychology: Learning, Memory, and Cognition. 32 (2), 425-436 (2006).
  10. Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., Sedivy, J. Integration of visual and linguistic information in spoken language comprehension. Science. 268 (5217), 1632-1634 (1995).
  11. Van Berkum, J. J., Brown, C. M., Zwitserlood, P., Kooijman, V., Hagoort, P. Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition. 31 (3), 443-467 (2005).
  12. Choi, Y., Trueswell, J. C. Children's (in) ability to recover from garden paths in a verb-final language: Evidence for developing control in sentence processing. Journal of Experimental Child Psychology. 106 (1), 41-61 (2010).
  13. Huang, Y., Zheng, X., Meng, X., Snedeker, J. Children's assignment of grammatical roles in the online processing of Mandarin passive sentences. Journal of Memory and Language. 69 (4), 589-606 (2013).
  14. Lew-Williams, C., Fernald, A. Young children learning Spanish make rapid use of grammatical gender in spoken word recognition. Psychological Science. 18 (3), 193-198 (2007).
  15. Sekerina, I. A., Trueswell, J. C. Interactive processing of contrastive expressions by Russian children. First Language. 32 (1-2), 63-87 (2012).
  16. Trueswell, J. C., Sekerina, I., Hill, N. M., Logrip, M. L. The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition. 73 (2), 89-134 (1999).
  17. Van Heugten, M., Shi, R. French-learning toddlers use gender information on determiners during word recognition. Developmental Science. 12 (3), 419-425 (2009).
  18. Zhou, P., Crain, S., Zhan, L. Grammatical aspect and event recognition in children's online sentence comprehension. Cognition. 133 (1), 262-276 (2014).
  19. Zhou, P., Ma, W. Children's use of morphological cues in real-time event representation. Journal of Psycholinguistic Research. 47 (1), 241-260 (2018).
  20. Eigsti, I. M., Bennetto, L., Dadlani, M. B. Beyond pragmatics: Morphosyntactic development in autism. Journal of Autism and Developmental Disorders. 37 (6), 1007-1023 (2007).
  21. Kjelgaard, M. M., Tager-Flusberg, H. An investigation of language impairment in autism: Implications for genetic subgroups. Language and Cognitive Processes. 16 (2-3), 287-308 (2001).
  22. Tager-Flusberg, H. Risk factors associated with language in autism spectrum disorder: clues to underlying mechanisms. Journal of Speech, Language, and Hearing Research. 59 (1), 143-154 (2016).
  23. Tager-Flusberg, H., Kasari, C. Minimally verbal school-aged children with autism spectrum disorder: the neglected end of the spectrum. Autism Research. 6 (6), 468-478 (2013).
  24. Tek, S., Mesite, L., Fein, D., Naigles, L. Longitudinal analyses of expressive language development reveal two distinct language profiles among young children with autism spectrum disorders. Journal of Autism and Developmental Disorders. 44 (1), 75-89 (2014).
  25. Wittke, K., Mastergeorge, A. M., Ozonoff, S., Rogers, S. J., Naigles, L. R. Grammatical language impairment in autism spectrum disorder: Exploring language phenotypes beyond standardized testing. Frontiers in Psychology. 8, 532 (2017).
  26. Hudry, K., Leadbitter, K., Temple, K., Slonims, V., McConachie, H., Aldred, C., et al. Preschoolers with autism show greater impairment in receptive compared with expressive language abilities. International Journal of Language & Communication Disorders. 45 (6), 681-690 (2010).
  27. Kasari, C., Brady, N., Lord, C., Tager-Flusberg, H. Assessing the minimally verbal school-aged child with autism spectrum disorder. Autism Research. 6 (6), 479-493 (2013).
  28. Luyster, R. J., Kadlec, M. B., Carter, A., Tager-Flusberg, H. Language assessment and development in toddlers with autism spectrum disorders. Journal of Autism and Developmental Disorders. 38 (8), 1426-1438 (2008).
  29. Plesa-Skwerer, D., Jordan, S. E., Brukilacchio, B. H., Tager-Flusberg, H. Comparing methods for assessing receptive language skills in minimally verbal children and adolescents with Autism Spectrum Disorders. Autism. 20 (5), 591-604 (2016).
  30. Boucher, J. Research review: Structural language in autism spectrum disorder-characteristics and causes. Journal of Child Psychology and Psychiatry. 53 (3), 219-233 (2012).
  31. Eigsti, I. M., de Marchena, A. B., Schuh, J. M., Kelley, E. Language acquisition in autism spectrum disorders: A developmental review. Research in Autism Spectrum Disorders. 5 (2), 681-691 (2011).
  32. Howlin, P. Outcome in high-functioning adults with autism with and without early language delays: Implications for the differentiation between autism and Asperger syndrome. Journal of Autism and Developmental Disorders. 33 (1), 3-13 (2003).
  33. Koning, C., Magill-Evans, J. Social and language skills in adolescent boys with Asperger syndrome. Autism: The International Journal of Research and Practice. 5 (1), 23-36 (2001).
  34. Kover, S. T., Haebig, E., Oakes, A., McDuffie, A., Hagerman, R. J., Abbeduto, L. Sentence comprehension in boys with autism spectrum disorder. American Journal of Speech-Language Pathology. 23 (3), 385-394 (2014).
  35. Perovic, A., Modyanova, N., Wexler, K. Comprehension of reflexive and personal pronouns in children with autism: A syntactic or pragmatic deficit. Applied Psycholinguistics. 34 (4), 813-835 (2013).
  36. Rapin, I., Dunn, M. Update on the language disorders of individuals on the autistic spectrum. Brain Development. 25 (3), 166-172 (2003).
  37. Tager-Flusberg, H. Sentence comprehension in autistic children. Applied Psycholinguistics. 2 (1), 5-24 (1981).
  38. Rutter, M., Mawhood, L., Howlin, P. Language delay and social development. In: Fletcher, P., Hall, D. (eds.) Specific speech and language disorders in children: Correlates, characteristics, and outcomes. Whurr, London (1992).
  39. Tager-Flusberg, H. The challenge of studying language development in autism. In: Menn, L., Ratner, N. B. (eds.) Methods for studying language production. Erlbaum, Mahwah, NJ (2000).
  40. American Psychiatric Association. Diagnostic and statistical manual of mental disorders, 4th edition, text revision (DSM-IV-TR). American Psychiatric Association. , Washington, DC. (2000).
  41. American Psychiatric Association. Diagnostic and statistical manual of mental disorders, 5th edition (DSM-5). American Psychiatric Association. , Washington, DC. (2013).
  42. Lord, C., Rutter, M., DiLavore, P. C., Risi, S. Autism diagnostic observation schedule. Western Psychological Services. , Los Angeles, CA. (1999).
  43. Li, Y., Zhu, J. Wechsler Preschool and Primary Scale of Intelligence-IV (CN) [WPPSI-IV (CN)]. Zhuhai King-may Psychological Measurement Technology Development Co., Ltd., Zhuhai (2014).
  44. Cooper, R. M. The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology. 6 (1), 84-107 (1974).
  45. Huang, Y., Snedeker, J. Semantic meaning and pragmatic interpretation in five-year olds: Evidence from real time spoken language comprehension. Developmental Psychology. 45 (6), 1723-1739 (2009).
  46. Snedeker, J., Yuan, S. Effects of prosodic and lexical constraints on parsing in young children (and adults). Journal of Memory and Language. 58 (2), 574-608 (2008).
  47. Zhou, P., Crain, S., Zhan, L. Sometimes children are as good as adults: The pragmatic use of prosody in children's on-line sentence processing. Journal of Memory and Language. 67 (1), 149-164 (2012).
  48. Matin, E., Shao, K. C., Boff, K. R. Saccadic overhead: Information-processing time with and without saccades. Perception & Psychophysics. 53 (4), 372-380 (1993).
  49. Barr, D. J. Analyzing 'visual world' eyetracking data using multilevel logistic regression. Journal of Memory and Language. 59 (4), 457-474 (2008).
  50. Baayen, R. H., Davidson, D. J., Bates, D. M. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language. 59 (4), 390-412 (2008).
  51. Bates, D. M., Maechler, M., Bolker, B. lme4: Linear mixed-effects models using S4 classes. , Available from: http://cran.r-project.org/web/packages/lme4/index.html (2013).
  52. R Development Core Team. R: A Language and Environment for Statistical Computing. , Vienna. Available from: http://www.r-project.org (2017).
