Graphical information, such as illustrations, graphs, and diagrams, is an essential complement to text for conveying knowledge about the world. Although graphics can be communicated well via the visual modality, conveying this information via touch has proven challenging. The lack of easily comprehensible tactile graphics poses a problem for the blind. In this paper, we advance a hypothesis for the limited effectiveness of tactile graphics. The hypothesis contends that conventional graphics, which rely upon embossing on two-dimensional surfaces, do not allow the deployment of tactile exploratory procedures that are crucial for assessing global shape. Besides potentially accounting for some of the shortcomings of current approaches, this hypothesis also serves a prescriptive purpose by suggesting a different strategy for conveying graphical information via touch, one based on cutouts. We describe experiments demonstrating the greater effectiveness of this approach for conveying shape and identity information. These results hold the potential for creating more comprehensible tactile drawings for the visually impaired while also providing insights into shape estimation processes in the tactile modality.
In the so-called McGurk illusion, when the synchronized presentation of the visual stimulus /ga/ is paired with the auditory stimulus /ba/, people generally hear /da/. The multisensory integration processing underlying this illusion seems to occur within the Superior Temporal Sulcus (STS). Herein, we present evidence demonstrating that bilateral cathodal transcranial direct current stimulation (tDCS) of this area can decrease McGurk illusion-type responses. Additionally, we show that the manipulation of this audio-visual integrated output occurs irrespective of the number of eye fixations on the mouth of the speaker. Bilateral anodal tDCS of the Parietal Cortex also modulates the illusion, but in the opposite manner, inducing more illusion-type responses. This is the first demonstration of using non-invasive brain stimulation to modulate multisensory speech perception in an illusory context (i.e., both increasing and decreasing illusion-type responses to a verbal audio-visual integration task). These findings provide clear evidence that both the superior temporal and parietal areas contribute to multisensory integration processing related to speech perception. Specifically, the STS seems fundamental for the temporal synchronization and integration of auditory and visual inputs. For its part, the posterior parietal cortex (PPC) may adjust the arrival of incoming audio and visual information to the STS, thereby enhancing their interaction in this latter area.
To explain the biological foundations of art appreciation is to explain one of our species' distinctive traits. Previous neuroimaging and electrophysiological studies have pointed to the prefrontal and the parietal cortex as two critical regions mediating esthetic appreciation of visual art. In this study, we applied transcranial magnetic stimulation (TMS) over the left prefrontal cortex and the right posterior parietal cortex while participants were evaluating whether they liked, and by how much, a particular painting. By depolarizing cell membranes in the targeted regions, TMS transiently interferes with the activity of specific cortical areas, which helps clarify their role in a given task. Our results show that both regions play a fundamental role in mediating esthetic appreciation. Critically though, the effects of TMS varied depending on the type of art considered (i.e., representational vs. abstract) and on participants' a priori inclination toward one or the other.
The International Neuromodulation Society (INS) has identified a need for evaluation and analysis of the practice of neurostimulation of the brain and extracranial nerves of the head to treat chronic pain.
Symmetry is an organizational principle that is ubiquitous throughout the visual world. However, this property can also be detected through non-visual modalities such as touch. The role of prior visual experience in detecting tactile patterns containing symmetry remains unclear. We compared the behavioral performance of early blind and sighted (blindfolded) controls on a tactile symmetry detection task. The tactile patterns used were similar in design and complexity to those used in previous visual perceptual studies. The neural correlates associated with this behavioral task were identified with functional magnetic resonance imaging (fMRI). In line with growing evidence demonstrating enhanced tactile processing abilities in the blind, we found that early blind individuals showed significantly superior performance in detecting tactile symmetric patterns compared to sighted controls. Furthermore, comparing patterns of activation between these two groups identified common areas of activation (e.g. superior parietal cortex), but key differences also emerged. In particular, tactile symmetry detection in the early blind was also associated with activation that included peri-calcarine cortex, lateral occipital (LO) and middle temporal (MT) cortex, as well as inferior temporal and fusiform cortex. These results add to the growing evidence for superior behavioral abilities in the blind and help characterize the neural correlates associated with crossmodal neuroplasticity following visual deprivation.
Cortical (cerebral) visual impairment (CVI) is characterized by visual dysfunction associated with damage to the optic radiations and/or visual cortex. Typically, it results from pre- or perinatal hypoxic damage to postchiasmal visual structures and pathways. The neuroanatomical basis of this condition remains poorly understood, particularly with regard to how the resulting maldevelopment of visual processing pathways relates to observations in the clinical setting. We report our investigation of 2 young adults diagnosed with CVI and visual dysfunction characterized by difficulties related to visually guided attention and visuospatial processing. Using high-angular-resolution diffusion imaging (HARDI), we characterized their individual white matter projections of the extrageniculo-striate visual system and compared them with those of a normally sighted control. Compared to the sighted control, both CVI cases revealed a striking reduction in association fibers, including the inferior frontal-occipital fasciculus as well as the superior and inferior longitudinal fasciculi. This reduction in fibers associated with the major pathways implicated in visual processing may provide a neuroanatomical basis for the visual dysfunctions observed in these patients.
For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based virtual environment called Audio-based Environment Simulator (AbES) that can be explored for the purposes of learning the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout following a standard route and instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the explored environment did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks was carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and shortcuts within the target building, suggesting that a ludic-based training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall.
We conclude that the highly interactive and immersive exploration of the virtual environment greatly engages a blind user, fostering skills consistent with positive near transfer of learning. Learning through a game-play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
Consistent evidence suggests that pitch height may be represented in a spatial format, having both a vertical and a horizontal representation. The spatial representation of pitch height results in response compatibility effects whereby high pitch tones are preferentially associated with up-right responses and low pitch tones with down-left responses (i.e., the Spatial-Musical Association of Response Codes (SMARC) effect), with the strength of these associations depending on individuals' musical skills. In this study, we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low and high pitch tones affected the bisection performance of musicians differently, both when pitch was relevant and irrelevant to the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data also show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.
For individuals who are blind, navigating independently in an unfamiliar environment represents a considerable challenge. Inspired by the rising popularity of video games, we have developed a novel approach to train navigation and spatial cognition skills in adolescents who are blind. Audio-based Environment Simulator (AbES) is a software application that allows for the virtual exploration of an existing building set in an action video game metaphor. Using this ludic-based approach to learning, we investigated the ability of adolescents with early onset blindness to acquire spatial information gained from the exploration of a target virtual indoor environment. Following game play, participants were assessed on their ability to transfer and mentally manipulate acquired spatial information on a set of navigation tasks carried out in the real environment. Success in transfer of navigation skill performance was markedly high, suggesting that interacting with AbES leads to the generation of an accurate spatial mental representation. Furthermore, there was a positive correlation between success in game play and navigation task performance. The role of virtual environments and gaming in the development of mental spatial representations is also discussed. We conclude that this game-based learning approach can facilitate the transfer of spatial knowledge and, further, can be used by individuals who are blind for the purposes of navigation in real-world environments.
Visual stimuli that exhibit vertical symmetry are easier to remember than stimuli symmetric along other axes, an advantage that extends to the haptic modality as well. Critically, the vertical symmetry memory advantage has not been found in early blind individuals, despite their overall superior memory, as compared with sighted individuals, and the presence of an overall advantage for identifying symmetric over asymmetric patterns. The absence of the vertical axis memory advantage in the early blind may depend on their total lack of visual experience or on the effect of prolonged visual deprivation. To disentangle this issue, in this study, we measured the ability of late blind individuals to remember tactile spatial patterns that were either vertically or horizontally symmetric or asymmetric. Late blind participants showed better memory performance for symmetric patterns. An additional advantage for the vertical axis of symmetry over the horizontal one was reported, but only for patterns presented in the frontal plane. In the horizontal plane, no difference was observed between vertical and horizontal symmetric patterns, due to the latter being recalled particularly well. These results are discussed in terms of the influence of the spatial reference frame adopted during exploration. Overall, our data suggest that prior visual experience is sufficient to drive the vertical symmetry memory advantage, at least when an external reference frame based on geocentric cues (i.e., gravity) is adopted.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning.
The ability to identify faces is of critical importance for normal social interactions. Previous evidence suggests that early visual deprivation may impair certain aspects of face recognition. The effects of strabismic amblyopia on face processing have not been investigated previously. In this study, a group of individuals with amblyopia were administered two tasks known to selectively measure face detection based on a Gestalt representation of a face (Mooney faces task) and featural and relational processing of faces (Jane faces task). Our data show that, when relying on their amblyopic eye only, strabismic amblyopes perform as well as normally sighted individuals in face detection and in recognition on the basis of single features. However, they are significantly impaired in discriminating among different faces on the basis of the spacing of their single features (i.e., configural processing of relational information). Our findings are the first to demonstrate that strabismic amblyopia may cause specific deficits in face recognition, and they add to previous reports characterizing the visual perceptual deficits associated with amblyopia as involving high-level and not only low-level processing.
Transcutaneous electrical stimulation has been proven to modulate nervous system activity, leading to changes in pain perception, via the peripheral sensory system in a bottom-up approach. We tested whether different sensory behavioral tasks induce significant effects on pain processing and whether these changes correlate with cortical plasticity.
Once the topic of folklore and science fiction, the notion of restoring vision to the blind is now approaching a tractable reality. Technological advances have inspired numerous multidisciplinary groups worldwide to develop visual neuroprosthetic devices that could potentially provide useful vision and improve the quality of life of profoundly blind individuals. While a variety of approaches and designs are being pursued, they all share a common principle of creating visual percepts through the stimulation of visual neural elements using appropriate patterns of electrical stimulation. Human clinical trials are now well underway and initial results have been met with a balance of excitement and cautious optimism. As remaining technical and surgical challenges continue to be solved and clinical trials move forward, we now enter a phase of development that requires careful consideration of a new set of issues. Establishing appropriate patient selection criteria, methods of evaluating long-term performance and effectiveness, and strategies to rehabilitate implanted patients will all need to be considered in order to achieve optimal outcomes and establish these devices as viable therapeutic options.
To standardize a protocol for promoting visual rehabilitative outcomes in post-stroke hemianopia by combining occipital cortical transcranial direct current stimulation (tDCS) with Vision Restoration Therapy (VRT).
Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity direct current to cortical areas, facilitating or inhibiting spontaneous neuronal activity. In the past 10 years, the physiologic mechanisms of action of tDCS have been intensively investigated, providing support for the investigation of its applications in clinical neuropsychiatry and rehabilitation. However, new methodologic, ethical, and regulatory issues emerge when translating the findings of preclinical and phase I studies into phase II and III clinical studies. The aim of this comprehensive review is to discuss the key challenges of this process and possible methods to address them.
Multisensory integration of information from different sensory modalities is an essential component of perception. Neurophysiological studies have revealed that audiovisual interactions occur early in time and even within sensory cortical areas believed to be modality-specific. Here we investigated the effect of auditory stimuli on visual perception of phosphenes induced by transcranial magnetic stimulation (TMS) delivered to the occipital visual cortex. TMS applied at subthreshold intensity led to the perception of phosphenes when coupled with an auditory stimulus presented within close spatiotemporal congruency at the expected retinotopic location of the phosphene percept. The effect was maximal when the auditory stimulus preceded the occipital TMS pulse by 40 ms. Follow-up experiments confirmed a high degree of temporal and spatial specificity of this facilitatory effect. Furthermore, audiovisual facilitation was only present at subthreshold TMS intensity for the phosphenes, suggesting that suboptimal levels of excitability within unisensory cortices may be better suited for enhanced crossmodal interactions. Overall, our findings reveal early auditory-visual interactions due to the enhancement of visual cortical excitability by auditory stimuli. These interactions may reflect an underlying anatomical connectivity between unisensory cortices.
A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal language areas. The cortical areas activated in this deaf-blind subject are consistent with characteristic cortical regions previously implicated in language. Finally, the resilience of language function within the context of early and combined visual and auditory deprivation may be related to enhanced connectivity between the relevant cortical areas.
There is growing evidence that sensory deprivation is associated with crossmodal neuroplastic changes in the brain. After visual or auditory deprivation, brain areas that are normally associated with the lost sense are recruited by spared sensory modalities. These changes underlie adaptive and compensatory behaviours in blind and deaf individuals. Although there are differences between these populations owing to the nature of the deprived sensory modality, there seem to be common principles regarding how the brain copes with sensory loss and the factors that influence neuroplastic changes. Here, we discuss crossmodal neuroplasticity with regard to behavioural adaptation after sensory deprivation and highlight the possibility of maladaptive consequences within the context of rehabilitation.
Visual field defects often result from stroke and brain injury. The resulting visual impairment can be debilitating for patients, impeding daily activities such as reading and mobility. Historically, it was believed that there was little opportunity for restoration of function following visual system damage. However, the development of various visual rehabilitative strategies suggests that visual field defects are partially repairable and a certain degree of function can be improved. While this provides hope for patients, many of these strategies have been met with skepticism within the clinical and scientific communities. Further development of these strategies through carefully designed studies could validate their efficacy and reveal underlying mechanisms. Novel techniques, aimed at enhancing the effect of these rehabilitative strategies, are also discussed.
Individuals using a visual-to-auditory sensory substitution device (SSD) called The vOICe can identify objects in their environment through images encoded by sound. We have shown that identifying objects with this SSD is associated with activation of occipital visual areas. Here, we show that repetitive transcranial magnetic stimulation (rTMS) delivered to a specific area of occipital cortex (identified by functional MRI) profoundly impairs a blind user's ability to identify objects. rTMS delivered to the same site had no effect on a visual imagery task. The task- and site-specific disruptive effect of rTMS in this individual suggests that the cross-modal recruitment of occipital visual areas is functional in nature and critical to the patient's ability to process and decode the image sounds using this SSD.
Current neuropsychological evidence demonstrates that damage to sensory-specific and heteromodal areas of the brain not only disrupts the ability to combine sensory information from multiple sources, but can also cause altered multisensory experiences. On the other hand, there is also evidence of behavioural benefits induced by spared multisensory mechanisms. Thus, crossmodal plasticity can be viewed in both an adaptive and a maladaptive context. The emerging view is that different crossmodal plastic changes can result following damage to sensory-specific and heteromodal areas, with post-injury crossmodal plasticity representing an attempt of the multisensory system to reconnect the various senses and bypass injured areas. Changes can be considered adaptive when they compensate for the lesion-induced sensory impairment. Conversely, plasticity may prove maladaptive when atypical or even illusory multisensory experiences are generated as a result of rearranged multisensory networks. This theoretical framework poses intriguing new questions for neuropsychological research and places greater emphasis on the study of multisensory phenomena within the context of damage to large-scale brain networks, rather than just focal damage alone.
Computer-based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills to the blind, we have developed Audio-based Environment Simulator (AbES), a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio-based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal-directed gaming strategy allowed for the mental manipulation of spatial information, as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.
Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity currents facilitating or inhibiting spontaneous neuronal activity. tDCS is attractive since dose is readily adjustable by simply changing electrode number, position, size, shape, and current. Recently, computational models have been developed with increasing precision with the goal of helping to customize tDCS dose. The aim of this review is to discuss the incorporation of high-resolution patient-specific computer modeling to guide and optimize tDCS.
We have previously reported that transcranial direct current stimulation (tDCS) delivered to the occipital cortex enhances visual functional recovery when combined with three months of computer-based rehabilitative training in patients with hemianopia. The principal objective of this study was to evaluate the temporal sequence of effects of tDCS on visual recovery as they appear over the course of training and across different indicators of visual function.
Vision Restoration Therapy (VRT) aims to improve visual field function by systematically training regions of residual vision associated with the activity of suboptimal firing neurons within the occipital cortex. Transcranial direct current stimulation (tDCS) has been shown to modulate cortical excitability.
The human capacity to discriminate among different faces relies on distinct parallel subprocesses, based either on the analysis of configural aspects or on the sequential analysis of the single elements of a face. A particular type of configural processing consists of considering whether two faces differ in terms of the internal spacing among their features, referred to as second-order relations processing. Findings from electrophysiological, neuroimaging, and lesion studies suggest that, overall, configural processes rely more on the right hemisphere, whereas the analysis of single features involves the left hemisphere more. However, results are not always consistent, and behavioral evidence for a right-hemisphere specialization in second-order relations processing is lacking. Here, we used divided visual field presentation to investigate the possible different contributions of the two hemispheres to face discrimination based on relational versus featural processing. Our data indicate a right-hemispheric specialization in relational processing of upright (but not inverted) faces. Furthermore, we provide evidence regarding the involvement of both the right and left hemispheres in the processing of faces differing in inner features, suggesting that both analytical and configural modes of processing are at play.