Although research on language production has developed detailed maps of the brain basis of single word production in both time and space, little is known about the spatiotemporal dynamics of the processes that combine individual words into larger representations during production. Studying composition in production is challenging due to difficulties both in controlling produced utterances and in measuring the associated brain responses. Here, we circumvent both problems using a minimal composition paradigm combined with the high temporal resolution of magnetoencephalography (MEG). With MEG, we measured the planning stages of simple adjective-noun phrases ('red tree'), matched list controls ('red, blue'), and individual nouns ('tree') and adjectives ('red'), with results indicating combinatorial processing in the ventro-medial prefrontal cortex (vmPFC) and left anterior temporal lobe (LATL), two regions previously implicated in the comprehension of similar phrases. These effects began relatively quickly (~180 ms) after the presentation of a production prompt, suggesting that combination commences with initial lexical access. Further, while in comprehension, vmPFC effects have followed LATL effects, in this production paradigm vmPFC effects occurred mostly in parallel with LATL effects, suggesting that a late process in comprehension is an early process in production. Thus, our results provide a novel neural bridge between psycholinguistic models of comprehension and production that posit functionally similar combinatorial mechanisms operating in reversed order.
Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high-level contextual and potentially also pragmatic factors.
This study addresses a much-debated effect on a much-debated region: the increase of left inferior frontal gyrus (LIFG) activation associated with object-extracted relative clauses. This haemodynamic result is one of the most central and most cited findings in the cognitive neuroscience of syntax and it has robustly contributed to the popular association of Broca's region with syntax. Our study had two goals: (1) to characterise the timing of this classic effect with magnetoencephalography (MEG) and (2) to connect it to psycholinguistic research on the effects of similarity-based interference during sentence processing. Specifically, behavioural studies have shown that object relatives are costly primarily when the two preverbal noun phrases are parallel in their surface syntax, for example, both consisting of a definite determiner and a noun (e.g. the reporter who the senator attacked), as opposed to employing, for example, a definite noun phrase and a proper name (the reporter who Bill attacked). This finding suggests that the difficulty of object extraction lies not within its syntax but rather in similarity-based interference affecting working memory processes. Although working memory is a prominent hypothesis for the LIFG engagement in object extraction, the haemodynamic literature has routinely employed stimuli involving parallel as opposed to non-parallel syntax. Using written sentences presented word-by-word, we tested whether an LIFG effect of object extraction is obtained with MEG, allowing us to characterise its timing, and whether it reduces or disappears if the two preverbal noun phrases are non-parallel in their surface syntax. Our results show an LIFG increase for object relatives at around 600 ms after verb onset, but only when the preverbal arguments are parallel. These findings are consistent with memory and competition-based explanations of the LIFG effect of object extraction and challenge accounts attributing it to displacement.
The left anterior temporal lobe (LATL) is robustly implicated in semantic processing by a growing body of literature. However, these results have emerged from two distinct bodies of work, addressing two different processing levels. On the one hand, the LATL has been characterized as a 'semantic hub' that binds features of concepts across a distributed network, based on results from semantic dementia and hemodynamic findings on the categorization of specific compared to basic exemplars. On the other, the LATL has been implicated in combinatorial operations in language, as shown by increased activity in this region associated with the processing of sentences and of basic phrases. The present work aimed to reconcile these two literatures by independently manipulating combination and concept specificity within a minimal MEG paradigm. Participants viewed simple nouns that denoted either low specificity (fish) or high specificity categories (trout) presented in either combinatorial (spotted fish/trout) or non-combinatorial contexts (xhsl fish/trout). By combining these paradigms from the two literatures, we directly compared the engagement of the LATL in semantic memory vs. semantic composition. Our results indicate that although noun specificity subtly modulates the LATL activity elicited by single nouns, it most robustly affects the size of the composition effect when these nouns are adjectivally modified, with low specificity nouns eliciting a much larger effect. We conclude that these findings are compatible with an account in which the specificity and composition effects arise from a shared mechanism of meaning specification.
The left anterior temporal lobe (LATL) has risen as a leading candidate for a brain locus of composition in language; yet the computational details of its function are unknown. Although most literature discusses it as a combinatory region in very general terms, it has also been proposed to reflect the more specific function of conceptual combination, which in the classic use of this term mainly pertains to the combination of open class words with obvious conceptual contributions. We aimed to distinguish between these two possibilities by contrasting plural nouns in contexts where they were either preceded by a color modifier ("red cups"), eliciting conceptual combination, or by a number word ("two cups"), eliciting numeral quantification but no conceptual combination. This contrast was chosen because within a production task, it allows the manipulation of composition type while keeping the physical stimulus constant: a display of two red cups can be named as "two cups" or "red cups" depending on the task instruction. These utterances were compared to productions of two-word number and color lists, intended as non-combinatory control conditions. Magnetoencephalography activity was recorded during the planning for production, prior to motion artifacts. As expected on the basis of comprehension studies, color modification elicited increased LATL activity as compared to color lists, demonstrating that this basic combinatory effect is strongly crossmodal. However, numeral quantification did not elicit a parallel effect, suggesting that the function of the LATL is (i) semantic and not syntactic (given that both color modification and numeral quantification involve syntactic composition) and (ii) corresponds more closely to the classical psychological notion of conceptual combination as opposed to a more general semantic combinatory function.
The present study investigates whether a minimal manipulation in task demands can induce core linguistic combinatorial mechanisms to extend beyond the bounds of normal grammatical phrases. Using magnetoencephalography, we measured neural activity evoked by the processing of adjective-noun phrases in canonical (red cup) and reversed order (cup red). During a task not requiring composition (verification against a color blob and shape outline), we observed significant combinatorial activity during canonical phrases only, as indexed by minimum norm source activity localized to the left anterior temporal lobe at 200-250 ms. When combinatorial task demands were introduced (by simply combining the blob and outline into a single colored shape), we observed significant combinatorial activity during reversed sequences as well. These results provide the first direct evidence that basic linguistic combinatorial mechanisms can be deployed outside of normal grammatical expressions in response to task demands, independent of changes in lexical or attentional factors.
The expressive power of language lies in its ability to construct an infinite array of ideas out of a finite set of pieces. Surprisingly, few neurolinguistic investigations probe the basic processes that constitute the foundation of this ability, choosing instead to focus on relatively complex combinatorial operations. Contrastingly, in the present work, we investigate the neural circuits underlying simple linguistic composition, such as that required by the minimal phrase "red boat." Using magnetoencephalography, we examined activity in humans generated at the visual presentation of target nouns, such as "boat," and varied the combinatorial operations induced by their surrounding context. Nouns in minimal compositional contexts ("red boat") were compared with those appearing in matched non-compositional contexts, such as after an unpronounceable consonant string ("xkq boat") or within a list ("cup, boat"). Source analysis did not implicate traditional language areas (inferior frontal gyrus, posterior temporal regions) in such basic composition. Instead, we found increased combinatorial-related activity in the left anterior temporal lobe (LATL) and ventromedial prefrontal cortex (vmPFC). These regions have been linked previously to syntactic (LATL) and semantic (vmPFC) combinatorial processing in more complex linguistic contexts. Thus, we suggest that these regions play a role in basic syntactic and semantic composition, respectively. Importantly, the temporal ordering of the effects, in which LATL activity (~225 ms) precedes vmPFC activity (~400 ms), is consistent with many processing models that posit syntactic composition before semantic composition during the construction of linguistic representations.
There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ~100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects.
Most words are associated with multiple senses. A DVD can be round (when describing a disc), and a DVD can be an hour long (when describing a movie), and in each case DVD means something different. The possible senses of a word are often predictable, and also constrained, as words cannot take just any meaning: for example, although a movie can be an hour long, it cannot sensibly be described as round (unlike a DVD). Learning the scope and limits of word meaning is vital for the comprehension of natural language, but poses a potentially difficult learnability problem for children. By testing what senses children are willing to assign to a variety of words, we demonstrate that, in comprehension, the problem is solved using a productive learning strategy. Children are perfectly capable of assigning different senses to a word; indeed they are essentially adult-like at assigning licensed meanings. But difficulties arise in determining which senses are assignable: children systematically overestimate the possible senses of a word, allowing meanings that adults deem unlicensed (e.g., taking round movie to refer to a disc). By contrast, this strategy does not extend to production, in which children use licensed, but not unlicensed, senses. Children's productive comprehension strategy suggests an early emerging facility for using context in sense resolution (a difficult task for natural language processing algorithms), but leaves an intriguing question as to the mechanisms children use to learn a restricted, adult-like set of senses.
What kind of mental objects are letters? Research on letter perception has mainly focussed on the visual properties of letters, showing that orthographic representations are abstract and size/shape invariant. But given that letters are, by definition, mappings between symbols and sounds, what is the role of sound in orthographic representation? We present two experiments suggesting that letters are fundamentally sound-based representations. To examine the role of sound in orthographic representation, we took advantage of the multiple scripts of Japanese. We present two types of evidence that when a Japanese word is presented in a script it never appears in, this presentation immediately activates the ("actual") visual word form of that lexical item. First, equal amounts of masked repetition priming are observed for full repetition and when the prime appears in an atypical script. Second, visual word form frequency affects neuromagnetic measures already at 100-130 ms whether the word is presented in its conventional script or in a script it never otherwise appears in. This suggests that Japanese orthographic codes are not only shape-invariant, but also script-invariant. The finding that two characters belonging to different writing systems can activate the same form representation suggests that sound identity is what determines orthographic identity: as long as two symbols express the same sound, our minds represent them as part of the same character/letter.
Syntactic factors can rapidly affect behavioral and neural responses during language processing; however, the mechanisms that allow this rapid extraction of syntactically relevant information remain poorly understood. We addressed this issue using magnetoencephalography and found that an unexpected word category (e.g., "The recently princess . . . ") elicits enhanced activity in visual cortex as early as 120 ms after exposure, and that this activity occurs as a function of the compatibility of a word's form with the form properties associated with a predicted word category. Because no sensitivity to linguistic factors has been previously reported for words in isolation at this stage of visual analysis, we propose that predictions about upcoming syntactic categories are translated into form-based estimates, which are made available to sensory cortices. This finding may be a key component to elucidating the mechanisms that allow the extreme rapidity and efficiency of language comprehension.
The neural basis of syntax is a matter of substantial debate. In particular, the inferior frontal gyrus (IFG), or Broca's area, has been prominently linked to syntactic processing, but the anterior temporal lobe has been reported to be activated instead of IFG when manipulating the presence of syntactic structure. These findings are difficult to reconcile because they rely on different laboratory tasks which tap into distinct computations, and may only indirectly relate to natural sentence processing. Here we assessed neural correlates of syntactic structure building in natural language comprehension, free from artificial task demands. Subjects passively listened to Alice in Wonderland during functional magnetic resonance imaging and we correlated brain activity with a word-by-word measure of the amount of syntactic structure analyzed. Syntactic structure building correlated with activity in the left anterior temporal lobe, but there was no evidence for a correlation between syntactic structure building and activity in inferior frontal areas. Our results suggest that the anterior temporal lobe computes syntactic structure under natural conditions.
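The word-by-word structure measure described in this abstract can be illustrated with a minimal sketch. The following toy code, which assumes a simple bottom-up node count (the number of phrase-structure nodes closed at each word of a bracketed parse) rather than the study's actual parser or data, shows how such a per-word predictor can be derived and then correlated with a measured signal; all names and data here are illustrative.

```python
# Hypothetical sketch: derive a word-by-word syntactic-structure
# predictor from a bracketed parse, then correlate it with a signal.
# The node-count measure and the toy parse are illustrative only.

import re
from statistics import mean

def node_counts(bracketed):
    """Return (words, counts), where counts[i] is the number of
    phrase-structure nodes closed at word i (closing brackets)."""
    tokens = re.findall(r"\(|\)|[^\s()]+", bracketed)
    words, counts = [], []
    for tok in tokens:
        if tok == "(":
            continue          # node opens; no cost assigned here
        elif tok == ")":
            if counts:
                counts[-1] += 1   # node closes at the most recent word
        else:
            words.append(tok)
            counts.append(0)
    return words, counts

def pearson(x, y):
    """Plain Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy parse of "Alice saw a rabbit": closing brackets pile up on the
# sentence-final word, so it carries the largest structure count.
words, counts = node_counts("((Alice) ((saw) ((a) (rabbit))))")
print(words)   # ['Alice', 'saw', 'a', 'rabbit']
print(counts)  # [1, 1, 1, 4]

# A (made-up) per-word brain signal could then be tested against it:
signal = [0.9, 1.1, 0.8, 3.5]
print(round(pearson(counts, signal), 3))
```

In the actual study the predictor would be entered into a hemodynamic regression rather than a raw correlation, but the shape of the analysis (one structure count per word, regressed against activity) is the same.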
To study the neural bases of semantic composition in language processing without confounds from syntactic composition, recent magnetoencephalography (MEG) studies have investigated the processing of constructions that exhibit some type of syntax-semantics mismatch. The most studied case of such a mismatch is complement coercion: expressions such as the author began the book, where an entity-denoting noun phrase is coerced into an eventive meaning in order to match the semantic properties of the event-selecting verb (e.g., the author began reading/writing the book). These expressions have been found to elicit increased activity in the Anterior Midline Field (AMF), an MEG component elicited at frontomedial sensors at approximately 400 ms after the onset of the coercing noun [Pylkkänen, L., & McElree, B. (2007). An MEG study of silent meaning. Journal of Cognitive Neuroscience, 19, 11]. Thus, the AMF constitutes a potential neural correlate of coercion. However, the AMF was generated in ventromedial prefrontal regions, which are heavily associated with decision-making. This raises the possibility that, instead of semantic processing, the AMF effect may have been related to the experimental task, which was a sensicality judgment. We tested this hypothesis by assessing the effect of coercion when subjects were simply reading for comprehension, without a decision-task. Additionally, we investigated coercion in an adjectival rather than a verbal environment to further generalize the findings. Our results show that an AMF effect of coercion is elicited without a decision-task and that the effect also extends to this novel syntactic environment. We conclude that in addition to its role in non-linguistic higher cognition, ventromedial prefrontal regions contribute to the resolution of syntax-semantics mismatches in language processing.
One of the most intriguing findings on language comprehension is that violations of syntactic predictions can affect event-related potentials as early as 120 ms, in the same time-window as early sensory processing. This effect, the so-called early left-anterior negativity (ELAN), has been argued to reflect word category access and initial syntactic structure building (Friederici, 2002). In two experiments, we used magnetoencephalography to investigate whether (a) rapid word category identification relies on overt category-marking closed-class morphemes and (b) whether violations of word category predictions affect modality-specific sensory responses. Participants read sentences containing violations of word category predictions. Unexpected items varied in whether or not their word category was marked by an overt function morpheme. In Experiment 1, the amplitude of the visual evoked M100 component was increased for unexpected items, but only when word category was overtly marked by a function morpheme. Dipole modeling localized the generator of this effect to the occipital cortex. Experiment 2 replicated the main results of Experiment 1 and eliminated two non-morphology-related explanations of the M100 contrast we observed between targets containing overt category-marking and targets that lacked such morphology. Our results show that during reading, syntactically relevant cues in the input can affect activity in occipital regions at around 125 ms, a finding that may shed new light on the remarkable rapidity of language processing.
Debates surrounding the evolution of language often hinge upon its relationship to cognition more generally, and many investigations have attempted to demarcate the boundary between the two. Though results from these studies suggest that language may recruit domain-general mechanisms during certain types of complex processing, the domain-generality of basic combinatorial mechanisms that lie at the core of linguistic processing is still unknown. Our previous work (Bemis and Pylkkänen, 2011, 2012) used magnetoencephalography to isolate neural activity associated with the simple composition of an adjective and a noun ("red boat") and found increased activity during this processing localized to the left anterior temporal lobe (lATL), ventro-medial prefrontal cortex (vmPFC), and left angular gyrus (lAG). The present study explores the domain-generality of these effects and their associated combinatorial mechanisms through two parallel non-linguistic combinatorial tasks designed to be as minimal and natural as the linguistic paradigm. In the first task, we used pictures of colored shapes to elicit combinatorial conceptual processing similar to that evoked by the linguistic expressions and find increased activity again localized to the vmPFC during combinatorial processing. This result suggests that a domain-general semantic combinatorial mechanism operates during basic linguistic composition, and that activity generated by its processing localizes to the vmPFC. In the second task, we recorded neural activity as subjects performed simple addition between two small numerals. Consistent with a wide array of recent results, we find no effects related to basic addition that coincide with our linguistic effects and instead find increased activity localized to the intraparietal sulcus. This result suggests that the scope of the previously identified linguistic effects is restricted to compositional operations and does not extend generally to all tasks that are merely similar in form.
Theoretical advances in language research and the availability of increasingly high-resolution experimental techniques in the cognitive neurosciences are profoundly changing how we investigate and conceive of the neural basis of speech and language processing. Recent work closely aligns language research with issues at the core of systems neuroscience, ranging from neurophysiological and neuroanatomic characterizations to questions about neural coding. Here we highlight, across different aspects of language processing (perception, production, sign language, meaning construction), new insights and approaches to the neurobiology of language, aiming to describe promising new areas of investigation in which the neurosciences intersect with linguistic research more closely than before. This paper summarizes in brief some of the issues that constitute the background for talks presented in a symposium at the Annual Meeting of the Society for Neuroscience. It is not a comprehensive review of any of the issues that are discussed in the symposium.
It is widely assumed that prediction plays a substantial role in language processing. However, despite numerous studies demonstrating that contextual information facilitates both syntactic and lexical-semantic processing, there exists no direct evidence pertaining to the neural correlates of the prediction process itself. Using magnetoencephalography (MEG), this study found that brain activity was modulated by whether or not a specific noun could be predicted, given a picture prime. Specifically, before the noun was presented, predictive contexts triggered enhanced activation in left mid-temporal cortex (implicated in lexical access), ventro-medial prefrontal cortex (previously associated with top-down processing), and visual cortex (hypothesized to index the preactivation of predicted form features), successively. This finding suggests that predictive language processing recruits a top-down network where predicted words are activated at different levels of representation, from more abstract lexical-semantic representations in temporal cortex, all the way down to visual word form features. The same brain regions that exhibited enhanced activation for predictive contexts before the onset of the noun showed effects of congruence during the target word. To our knowledge, this study is one of the first to directly investigate the anticipatory stage of predictive language processing.
Sentence comprehension involves a host of highly interrelated processes, including syntactic parsing, semantic composition, and pragmatic inferencing. In neuroimaging, a primary paradigm for examining the brain bases of sentence processing has been to compare brain activity elicited by sentences versus unstructured lists of words. These studies commonly find an effect of increased activity for sentences in the anterior temporal lobes (aTL). Together with neuropsychological data, these findings have motivated the hypothesis that the aTL is engaged in sentence level combinatorics. Combinatoric processing during language comprehension, however, occurs within tens and hundreds of milliseconds, i.e., at a time-scale much faster than the temporal resolution of hemodynamic measures. Here, we examined the time-course of sentence-level processing using magnetoencephalography (MEG) to better understand the temporal profile of activation in this common paradigm and to test a key prediction of the combinatoric hypothesis: because sentences are interpreted incrementally, word-by-word, activity associated with basic linguistic combinatorics should be time-locked to word-presentation. Our results reveal increased anterior temporal activity for sentences compared to word lists beginning approximately 250 ms after word onset. We also observed increased activation in a network of other brain areas, extending across posterior temporal, inferior frontal, and ventral medial areas. These findings confirm a key prediction of the combinatoric hypothesis for the aTL and further elucidate the spatio-temporal characteristics of sentence-level computations in the brain.
Language is rife with ambiguity. Do children and adults meet this challenge in similar ways? Recent work suggests that while adults resolve syntactic ambiguities by integrating a variety of cues, children are less sensitive to top-down evidence. We test whether this top-down insensitivity is specific to syntax or a general feature of children's linguistic ambiguity resolution by evaluating whether children rely largely or completely on lexical associations to resolve lexical ambiguities (e.g., the word swing primes the baseball meaning of bat) or additionally integrate top-down global plausibility. Using a picture choice task, we compared 4-year-olds' ability to resolve polysemes and homophones with a Bayesian algorithm reliant purely on lexical associations and found that the algorithm's power to predict children's choices was limited. A 2nd experiment confirmed that children override associations and integrate top-down plausibility. We discuss this with regard to models of psycholinguistic development.