In speech production, an important step before motor programming is the retrieval and encoding of the phonological elements of target words. It has been proposed that phonological encoding is supported by multiple regions in the left frontal, temporal and parietal cortices and their underlying white matter, especially the left arcuate fasciculus (AF) or superior longitudinal fasciculus (SLF). It is unclear, however, whether the effects of the AF/SLF are indeed related to phonological encoding for output and whether other white matter tracts also contribute to this process. We comprehensively investigated the anatomical connectivity supporting phonological encoding in production by studying the relationship between the integrity of all major white matter tracts across the entire brain and phonological encoding deficits in a group of 69 patients with brain damage. The integrity of each white matter tract was measured both by the percentage of damaged voxels (structural imaging) and by the mean fractional anisotropy value (diffusion tensor imaging). Phonological encoding deficits were assessed with measures from two oral production tasks that involve phonological encoding: the percentage of nonword (phonological) errors in oral picture naming, and the accuracy of word reading aloud with word comprehension ability regressed out. We found that the integrity of the left SLF, in both the structural and diffusion tensor imaging measures, consistently predicted the severity of phonological encoding impairment in the two phonological production tasks. These effects of the left SLF on phonological production remained significant when a range of potential confounding factors was controlled for through partial correlation, including total lesion volume, demographic factors, lesions in phonology-relevant grey matter regions, and effects originating from phonological perception or semantic processes.
Our results therefore conclusively demonstrate the central role of the left SLF in phonological encoding in speech production.
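The tract-behavior analysis described above relies on partial correlation: relating tract integrity to deficit severity after regressing out confounds such as total lesion volume. A minimal sketch of that statistic follows; the variable names and toy data are illustrative assumptions, not the study's actual pipeline or data.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Correlate x and y after removing the linear effect of covariates.
    x, y: 1-D arrays of shape (n,); covariates: 2-D array of shape (n, k)."""
    Z = np.column_stack([np.ones(len(x)), covariates])  # intercept + covariates
    # residualize x and y against the covariates via least squares
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# toy data (hypothetical): tract FA vs. error rate, controlling for lesion volume
rng = np.random.default_rng(0)
lesion = rng.normal(size=69)
fa = -0.5 * lesion + rng.normal(size=69)
errors = 0.6 * lesion + rng.normal(size=69)
r, p = partial_corr(fa, errors, lesion[:, None])
```

A partial r that survives such residualization indicates that the tract-behavior association is not carried by the covariate alone.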
The placement and development of the visual word form area (VWFA) have commonly been assumed to depend, in part, on its connections with language regions. In this study, we specifically examined the effects of auditory speech experience deprivation in shaping the VWFA by investigating its location distribution, activation strength, and functional connectivity pattern in congenitally deaf participants. We found that the location and activation strength of the VWFA in congenitally deaf participants were highly comparable with those of hearing controls. Furthermore, while the congenitally deaf group showed reduced resting-state functional connectivity between the VWFA and the auditory speech area in the left anterior superior temporal gyrus, its intrinsic functional connectivity pattern between the VWFA and a fronto-parietal network was similar to that of hearing controls. Taken together, these results suggest that auditory speech experience has consequences for aspects of the word form-speech sound correspondence network, but that such experience does not significantly modulate the VWFA's placement or response strength. This is consistent with the view that the role of the VWFA might be to provide a representation that is suitable for mapping visual word forms onto language-specific gestures without the need to construct an aural representation.
Knowledge of object shape is primarily acquired through the visual modality but can also be acquired through other sensory modalities. In the present study, we investigated the representation of object shape in humans without visual experience. Congenitally blind and sighted participants rated the shape similarity of pairs of 33 familiar objects, referred to by their names. The resulting shape similarity matrices were highly similar for the two groups, indicating that knowledge of the objects' shapes was largely independent of visual experience. Using fMRI, we tested for brain regions that represented object shape knowledge in blind and sighted participants. Multivoxel activity patterns were established for each of the 33 aurally presented object names. Sighted participants additionally viewed pictures of these objects. Using representational similarity analysis, neural similarity matrices were related to the behavioral shape similarity matrices. Results showed that activity patterns in occipitotemporal cortex (OTC) regions, including inferior temporal (IT) cortex and functionally defined object-selective cortex (OSC), reflected the behavioral shape similarity ratings in both blind and sighted groups, even when controlling for the objects' tactile and semantic similarity. Furthermore, neural similarity matrices of IT and OSC showed similarities across blind and sighted groups (within the auditory modality) and across modality (within the sighted group), but not across both modality and group (blind auditory-sighted visual). Together, these findings provide evidence that OTC not only represents objects visually (requiring visual experience) but also represents objects nonvisually, reflecting knowledge of object shape independently of the modality through which this knowledge was acquired.
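The representational similarity analysis described above can be sketched generically: build an item-by-item neural similarity matrix from voxel patterns, then rank-correlate its off-diagonal cells with the behavioral shape similarity matrix. The code below is a minimal illustration with made-up data, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

def rsa_correlation(patterns, behavioral_sim):
    """Relate neural and behavioral similarity structure.
    patterns: (n_items, n_voxels) activity patterns, one row per object.
    behavioral_sim: (n_items, n_items) rated similarity matrix.
    Returns Spearman rho/p over the lower-triangular (off-diagonal) cells."""
    neural_sim = np.corrcoef(patterns)            # pattern-to-pattern Pearson r
    tri = np.tril_indices_from(neural_sim, k=-1)  # skip the trivial diagonal
    return stats.spearmanr(neural_sim[tri], behavioral_sim[tri])

# toy example: objects 0 and 1 have similar patterns, object 2 differs
v = np.array([1.0, 2.0, 3.0, 4.0])
patterns = np.vstack([v, v + 0.1, -v])
behavioral_sim = np.array([[1.0, 0.9, 0.1],
                           [0.9, 1.0, 0.1],
                           [0.1, 0.1, 1.0]])
rho, p = rsa_correlation(patterns, behavioral_sim)
```

A positive rho indicates that items rated as similar in shape also evoke similar activity patterns in the region.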
In human lateral temporal cortex, some regions show specific sensitivity to human motion. Here we examine whether such effects reflect a general biological-nonbiological organizational principle or a process specific to human-agent processing by comparing the processing of human, animal, and tool motion in a functional magnetic resonance imaging (fMRI) experiment with healthy participants and a voxel-based lesion-symptom mapping (VLSM) study of 77 stroke patients. The fMRI experiment revealed that, in the lateral temporal cortex, the posterior superior temporal sulcus shows a preference for human and animal motion, whereas the middle part of the right superior temporal sulcus/gyrus (mSTS/STG) shows a preference for human and functional tool motion. VLSM analyses likewise revealed that damage to this right mSTS/STG region led to more severe impairment in the recognition of human and functional tool motion relative to animal motion, indicating a causal role of this brain area in human-agent motion processing. The findings for the right mSTS/STG cannot be reduced to a preference for articulated motion or to the processing of social variables, since neither factor is involved in functional tool motion recognition. We conclude that a unidimensional biological-nonbiological distinction cannot fully explain the visual motion effects in lateral temporal cortex. Instead, the results suggest the existence of distinct components in right posterior temporal cortex and mSTS/STG that are associated, respectively, with biological motion and human-agent motion processing.
Widely distributed brain regions in temporal, parietal and frontal cortex have been found to be involved in semantic processing, but the anatomical connections supporting the semantic system are not well understood. In a group of 76 right-handed brain-damaged patients, we tested the relationship between the integrity of major white matter tracts and the presence of semantic deficits. The integrity of white matter tracts was measured by percentage of lesion voxels obtained in structural imaging and mean fractional anisotropy values obtained in diffusion tensor imaging. Semantic deficits were assessed by jointly considering the performance on three semantic tasks that vary in the modalities of input (visual and auditory stimuli) and output (oral naming and associative judgement). We found that the lesion volume and fractional anisotropy value of the left inferior fronto-occipital fasciculus, left anterior thalamic radiation, and left uncinate fasciculus significantly correlated with severity of impairment in all three semantic tasks. These associations remained significant even when we controlled for a wide range of potential confounding variables, including overall cognitive state, whole lesion volume, or type of brain damage. The effects of these three white matter tracts could not be explained by potential involvement of relevant grey matter, and were (relatively) specific to object semantic processing, as no correlation with performance on non-object semantic control tasks (oral repetition and number processing tasks) was observed. These results underscore the causal role of left inferior fronto-occipital fasciculus, left anterior thalamic radiation, and left uncinate fasciculus in semantic processing, providing direct evidence for (part of) the anatomical skeleton of the semantic network.
The principles that determine the organization of object representations in ventral temporal cortex (VTC) remain elusive. Here, we focus on the parahippocampal place area (PPA), a region in medial VTC that has been shown to respond selectively to pictures of scenes. Recent studies further observed that this region also shows a preference for large nonmanipulable objects relative to other objects, which might reflect the suitability of large objects for navigation. The mechanisms underlying this selectivity remain poorly understood. We examined the extent to which PPA selectivity requires visual experience. Fourteen congenitally blind and matched sighted participants were tested on an auditory size judgment experiment involving large nonmanipulable objects, small objects (tools), and animals. Sighted participants additionally participated in a picture-viewing experiment. Replicating previous work, we found that the PPA responded selectively to large nonmanipulable objects, relative to tools and animals, in the sighted group viewing pictures. Importantly, this selectivity was also observed in the auditory experiment in both sighted and congenitally blind groups. In both groups, selectivity for large nonmanipulable objects was additionally observed in the retrosplenial complex (RSC) and the transverse occipital sulcus (TOS), regions previously implicated in scene perception and navigation. Finally, in both groups the PPA showed resting-state functional connectivity with TOS and RSC. These results provide new evidence that large object selectivity in PPA, and the intrinsic connectivity between PPA and other navigation-relevant regions, do not require visual experience. More generally, they show that the organization of object representations in VTC can develop, at least partly, without visual experience.
Knowledge of the physical attributes of objects is commonly assumed to be distributed near the respective modality-specific brain regions. The exact neural correlates of such knowledge, especially how it is maintained in the resting state, are largely unknown. In the current study, we explored the intrinsic neural basis of a specific type of object knowledge, color, by investigating the relationship between spontaneous brain activity and behavioral performance on color knowledge tasks. We correlated the regional amplitude of spontaneous low-frequency fluctuations (ALFF, a resting-state fMRI parameter) with healthy participants' performance on two object color knowledge tasks (object color verification and color attribute judgment). We found that ALFF in the bilateral lingual and fusiform gyri and the right inferior occipital gyrus reliably predicted participants' color knowledge performance (correlation coefficients = 0.55-0.70), and that calcarine cortex showed a similar, though less stable, trend. Furthermore, the ALFF-behavior correlations for other types of object knowledge (i.e. form, motion and sound) in these regions were minimal and significantly lower than those for color knowledge, suggesting that the effects in the observed regions were not merely due to general object processing. In addition, we showed that the functional connectivity strengths of the lingual/fusiform and inferior occipital regions are significantly associated with color knowledge performance, indicating that they work as a network to support the processing or acquisition of color knowledge. Our findings show the critical role of ventral medial occipito-temporal regions in processing or acquiring color knowledge and highlight the behavioral significance of spontaneous brain activity in the resting state.
This study examined whether the degree of complexity of a grammatical component in a language impacts its representation in the brain, by identifying the neural correlates of grammatical morpheme processing associated with nouns and verbs in Chinese. In particular, the processing of Chinese nominal classifiers and verbal aspect markers was investigated in a sentence completion task and a grammaticality judgment task to look for converging evidence. Chinese constitutes a special case because it has no inflectional morphology per se and a larger inventory of classifiers than of aspect markers, contrary to the pattern of greater verbal than nominal paradigmatic complexity in most European languages. The functional imaging results showed that BA47, the left supplementary motor area, and the superior medial frontal gyrus were more strongly activated for classifier processing, and that the left posterior middle temporal gyrus was more responsive to aspect marker processing. We attributed the activation in the left prefrontal cortex to greater processing complexity during classifier selection, analogous to the accounts put forth for European languages, and the activation in the left posterior middle temporal gyrus to more demanding verb semantic processing. The overall findings contribute cross-linguistic observations on the neural substrates underlying the processing of grammatical morphemes in an analytic, classifier language, and thereby deepen our understanding of the neurobiology of human language.
We report an individual with a massive left-hemisphere lesion, who showed reversed patterns of dissociation between word and number processing in two modalities (auditory comprehension and written production). His performance in auditory comprehension was perfect for words, but severely impaired for numbers. In written production, he performed significantly better at writing numbers (both Arabic numerals and number words) than at writing words. His visual comprehension fell within the normal range for both words and numbers, while his oral production was at floor for both. This case profile adds further evidence for the functional/neural segregation of word and number processing systems.
Numerous studies using various techniques and methodologies have demonstrated distinctive responses to nouns and verbs at both the behavioral and the neural levels. However, since the great majority of these studies involved tasks employing pictorial stimuli and languages with rich inflectional morphology, it is not clear whether word class effects resulted from semantic differences between objects and actions or from the different inflectional operations associated with the two word classes. This study addressed these shortcomings by using a language with impoverished inflectional morphology, Chinese. Both concrete and abstract words were included, while controlling for nuisance variables between the two word classes, including imageability, word frequency, age of acquisition, and number of strokes. Participants were asked to judge the semantic relatedness of noun or verb pairs by pressing different buttons. The results revealed specific neural correlates of verb class in left lateral temporal and inferior frontal regions. Furthermore, the patterns of neural distribution of nouns and verbs were consistent with observations from Indo-European languages. Plausible accounts for the neural separation of word classes are considered.
Two specific areas within the posterior lateral temporal cortex (PLTC), the posterior superior temporal sulcus (pSTS) and the posterior middle temporal gyrus (pMTG), have been proposed to store different types of conceptual properties of motion: the pSTS encodes knowledge of articulated, biological motion, and the pMTG encodes knowledge of unarticulated, mechanical motion. We examined this hypothesis by comparing activation patterns evoked by verbs denoting biological motion (e.g., walk), mechanical motion (e.g., rotate), and low-motion events (e.g., ferment). Classical noun categories with different motion types (animals, tools, and buildings) were also tested and compared with previous findings of categorical effects in PLTC. Replicating previous findings for different noun types, we observed stronger activation for animals than for tools in the pSTS and stronger activation for tools than for other noun types in the pMTG. However, such motion-type-specific activation patterns only partly extended to verbs. Whereas the pSTS showed preferences for biological-motion verbs, no region within the pMTG was sensitive to verbs denoting mechanical motion. We speculate that the pMTG preference for tools is driven by properties other than mechanical motion, such as strong mappings between visual form and motor-related representations.
Neuropsychological and neuroimaging studies have indicated that motor knowledge is one potential dimension along which concepts are organized. Here we present further direct evidence for the effects of motor knowledge in accounting for categorical patterns across object domains (living vs. nonliving) and grammatical domains (nouns vs. verbs), as well as for the integrity of other modality-specific knowledge (e.g., visual). We present a Chinese case, XRK, who suffered from semantic dementia with left temporal lobe atrophy. In naming and comprehension tasks, he performed better on nonliving items than on living items, and better on verbs than on nouns. Critically, multiple regression analyses revealed that both categorical effects could be accounted for by the charade rating, a continuous measure of the significance of motor knowledge for a concept or a semantic feature. Furthermore, charade rating also predicted the frequency with which he generated semantic features of various modalities. These findings consolidate the significance of motor knowledge in conceptual organization and further highlight the interactions between different types of semantic knowledge.
Conduction aphasia is characterized by poor repetition, the hallmark of the syndrome, in contrast with relatively preserved comprehension and spontaneous expression. There are many theories of the repetition impairment in conduction aphasia. The disconnection theory holds that damage to the arcuate fasciculus, which connects Broca's and Wernicke's areas, is the cause of conduction aphasia. In this study, we examined the disconnection theory.
Chinese is a logographic language. Many of its psycholinguistic characteristics differ from those of alphabetic languages. These differences might be expected to entail a different pattern of neural activity underpinning Chinese language processing compared to the processing of alphabetic languages. The aim of the current study was to investigate neural language centers for processing Chinese language information in healthy Chinese speakers using magnetoencephalography (MEG). Overall, we aimed to elucidate language-specific and language-general characteristics of processing across different language scripts.
Various hypotheses about the role of the anterior temporal lobe (ATL) in language processing have been proposed. One hypothesis is that it binds the semantic/conceptual properties of words, functioning as a hub for linking modality-specific conceptual properties of objects. This hypothesis predicts that damage to ATL would give rise to impaired conceptual knowledge of all categories. A related school of hypotheses assumes that the left ATL is critical for lexical retrieval, with different sub-regions potentially important for different categories of items. We examined these hypotheses by studying a case of surgical resection of left ATL due to a low-grade glioma (LGG). Thorough language assessments performed four months after the operation revealed the following profile: the patient showed intact conceptual knowledge for all categories of items tested using both accuracy and response latency measures; he suffered from name retrieval deficits for proper names (people and place names) and artifacts (including tools), but showed no name retrieval difficulties for animate things. This pattern of results challenges both target hypotheses about the role of ATL in language processing tested here.
An important issue in the visual word comprehension literature is whether or not semantic access is mediated by phonological processing. In this paper, we present a Chinese individual, YGA, who provides converging evidence to directly address this issue. YGA sustained damage to the left posterior superior and middle temporal lobe, and shows difficulty in orally naming pictures and reading printed words aloud. He makes phonological errors on these tasks, and also semantic errors in picture naming, indicating a deficit in accessing the phonological representations for output. However, he is intact at understanding the meaning of visually presented words. Such a profile challenges the hypothesis that semantic access in reading is phonologically mediated and provides further evidence for the universal principle of direct semantic access in reading.
A recent debate in the language production literature concerns the influence of a word's orthographic information on spoken word production and the extent to which this influence is modulated by task context. In the present study, Mandarin Chinese participants produced sets of words that shared orthography (O+P-), phonology (O-P+), or both orthography and phonology (O+P+), or were unrelated (O-P-), in the context of a reading, associative naming, or picture naming task. Shared phonology yielded facilitation effects in all three tasks, but only in the reading task was this phonological effect modulated by shared orthography. Shared orthography by itself (O+P-) revealed inhibitory effects in reading, but not in associative naming or picture naming. These results suggest that a word's orthographic information influences spoken word production only in tasks that rely heavily on orthographic information.
A recent hypothesis proposes that reading depends on writing in a logographic language, Chinese. We present a Chinese individual (HLD) with brain damage whose profile challenges this hypothesis. HLD was severely impaired in the whole process of writing. He could not access orthographic knowledge, had poor orthographic awareness, and was poor at delayed- and direct-copying tasks. Nevertheless, he was perfect at visual word-picture matching and reading aloud tasks, indicating his intact ability to access both semantics and phonology in reading. He was also able to distinguish between fine visual features of characters. We conclude that reading does not depend on writing, even in Chinese.
The oral spelling process for a logographic language such as Chinese is intrinsically different from that for alphabetic languages. In Chinese, only a subset of orthographic components are pronounceable, and their phonological identities (i.e., component names) do not always correspond to the sounds of the whole characters. We show that such phonological identities can nevertheless be selectively preserved when visual-motoric compositions are lost. We report a Chinese right-handed dysgraphic individual with left temporal and occipital damage, MZG, who was severely impaired in writing Chinese characters but was able to orally spell the same characters using the names of pronounceable components. MZG's writing deficit arose at the level of processing dedicated to the retrieval of the shapes (allographs) of the writing components. This pattern shows that the phonological identities of components are part of the orthographic representation of Chinese characters, and that the dissociation between oral and written spelling modalities is universal across different script systems. The temporal and occipital lobes in the language-dominant hemisphere are possibly important regions for allographic conversion in writing.
In congenitally blind individuals, many regions of the brain that are typically heavily involved in visual processing are recruited for a variety of nonvisual sensory and cognitive tasks (Rauschecker 1995; Pascual-Leone et al. 2005). This phenomenon, cross-modal plasticity, has been widely documented, but the principles that determine where and how cross-modal changes occur remain poorly understood (Bavelier and Neville 2002). Here, we evaluate the hypothesis that cross-modal plasticity respects the type of computations performed by a region, even as it changes the modality of the inputs over which they are carried out (Pascual-Leone and Hamilton 2001). We compared the fMRI signal in sighted and congenitally blind participants during proprioceptively guided reaching. We show that parietooccipital reach-related regions retain their functional role, encoding of the spatial position of the reach target, even as the dominant modality in this region changes from visual to nonvisual inputs. This suggests that the computational role of a region, independently of the processing modality, codetermines its potential cross-modal recruitment. Our findings demonstrate that preservation of functional properties can serve as a guiding principle for cross-modal plasticity even in visuomotor cortical regions, i.e. beyond the early visual cortex and other traditional visual areas.
Resting-state functional connectivity (RSFC) offers a novel approach to revealing the temporal synchronization of functionally related brain regions. Recent studies have identified several RSFCs whose strength is associated with reading competence in alphabetic languages. In the present study, we examined the role of intrinsic functional relations in reading a non-alphabetic language, Chinese, by correlating RSFC maps of nine Chinese reading-related seed regions with reaction times in a single-character reading task. We found that Chinese reading efficiency was positively correlated with the connection between the left inferior occipital gyrus and the left superior parietal lobule, between the right posterior fusiform gyrus and the right superior parietal lobule, and between the left inferior temporal gyrus and the left inferior parietal lobule. These results could not be attributed to inter-individual differences arising from the peripheral processes of the reading task, such as visual input detection and articulation. The observed RSFC-reading correlations are discussed in the framework of Chinese character reading, including visuospatial analyses and semantic/phonological processes.
This paper reports a conjunction analysis between semantic relatedness judgment and semantic associate generation of Chinese nouns and verbs with concrete or abstract meanings. The results revealed a verb-specific, task-independent region in the left posterior superior/middle temporal gyri (LpSTG&MTG), together with task-dependent activation in a left frontal region in semantic judgment and in the left SMG in semantic associate production. The observed word class effects converge with Yu, Law, Han, Zhu, and Bi (2011), but contrast with null findings in previous reports using a lexical decision task. While word class effects in the left posterior temporal cortices have been described in previous studies of languages with rich inflectional morphology, the significance of this study lies in its demonstration of the effects in these regions in a language known to have little inflectional morphology. In other words, differential neural responses to nouns and verbs can be observed without confounds from morphosyntactic operations or from contrasts between actions and objects.
Conceptual processing is a crucial brain function in humans. Past research using neuropsychological and task-based functional brain-imaging paradigms indicates that widely distributed brain regions are involved in conceptual processing. Here, we explore the potential contribution of intrinsic or spontaneous brain activity to conceptual processing by examining whether resting-state functional magnetic resonance imaging (rs-fMRI) signals can account for individual differences in the conceptual processing efficiency of healthy individuals. We acquired rs-fMRI and behavioral data on object conceptual processing tasks. We found that the regional amplitude of spontaneous low-frequency fluctuations in the blood oxygen level-dependent signal in the left (posterior) middle temporal gyrus (LMTG) was highly correlated with participants' semantic processing efficiency. Furthermore, the strength of the functional connectivity between the LMTG and a series of brain regions (the left inferior frontal gyrus, bilateral anterior temporal lobe, bilateral medial temporal lobe, posterior cingulate gyrus, and ventromedial and dorsomedial prefrontal cortices) also significantly predicted conceptual behavior. The regional amplitude of low-frequency fluctuations and the functionally relevant connectivity strengths of the LMTG together accounted for 74% of the individual variance in object conceptual performance. This semantic network, with the LMTG as its core component, largely overlaps with the regions reported in previous conceptual/semantic task-based fMRI studies. We conclude that the intrinsic or spontaneous activity of the human brain reflects the processing efficiency of the semantic system.
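A "percentage of variance explained" figure of the kind reported above is the R-squared of a multiple regression of behavioral scores on the imaging predictors (here, regional amplitude plus connectivity strengths). The sketch below shows the generic computation with toy data; it is an assumption-laden illustration, not the study's actual model or data.

```python
import numpy as np

def variance_explained(X, y):
    """R^2 of an ordinary least-squares fit of y on the columns of X.
    X: (n, k) predictor matrix; y: (n,) behavioral scores."""
    Z = np.column_stack([np.ones(len(y)), X])      # add an intercept term
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)   # OLS coefficients
    resid = y - Z @ beta
    return 1.0 - resid.var() / y.var()             # proportion of variance fit

# toy data (hypothetical): two predictors, e.g. an ALFF value and a
# connectivity strength, jointly predicting a behavioral score
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=50)
r2 = variance_explained(X, y)
```

With a perfectly linear relationship R^2 approaches 1; with noise added, it reports the fraction of behavioral variance the predictors jointly capture.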