JoVE Visualize
PubMed Article
Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.
Published: 01-01-2014
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
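The abstract does not spell out the connectivity computation; as a minimal illustrative sketch only, assuming a simple seed-based analysis in which preprocessed time courses have already been extracted from the two STS regions (the study's actual pipeline may differ, e.g. psychophysiological-interaction models), condition-wise coupling could be quantified like this in Python:

import numpy as np

# Hypothetical ROI time courses (one condition, one subject); in a real
# analysis these would come from preprocessed fMRI data, not random numbers.
rng = np.random.default_rng(0)
ts_posterior_sts = rng.standard_normal(200)
ts_anterior_sts = 0.6 * ts_posterior_sts + rng.standard_normal(200)

# Functional connectivity as the Pearson correlation of the two time courses.
connectivity = np.corrcoef(ts_posterior_sts, ts_anterior_sts)[0, 1]
print(f"pSTS-aSTS connectivity: {connectivity:.2f}")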
Related JoVE Video
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Published: 06-14-2014
Transcranial magnetic stimulation (TMS) has proven to be a useful tool for investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases while listening to speech and while viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency, sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the lip motor representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
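As a rough sketch of the MEP quantification step (peak-to-peak amplitude in a fixed window after the TMS pulse), with the sampling rate, window bounds, and variable names all assumed rather than taken from the protocol:

import numpy as np

FS = 5000                   # EMG sampling rate in Hz (assumed)
WIN = (0.010, 0.040)        # 10-40 ms post-pulse window (typical, assumed)

def mep_amplitude(emg_sweep, pulse_sample, fs=FS):
    """Peak-to-peak MEP amplitude from one lip-EMG sweep."""
    start = pulse_sample + int(WIN[0] * fs)
    stop = pulse_sample + int(WIN[1] * fs)
    segment = emg_sweep[start:stop]
    return segment.max() - segment.min()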
15 Related JoVE Articles!
A Protocol for Comprehensive Assessment of Bulbar Dysfunction in Amyotrophic Lateral Sclerosis (ALS)
Authors: Yana Yunusova, Jordan R. Green, Jun Wang, Gary Pattee, Lorne Zinman.
Institutions: University of Toronto, Sunnybrook Health Science Centre, University of Nebraska-Lincoln, University of Nebraska Medical Center.
Improved methods for assessing bulbar impairment are necessary for expediting diagnosis of bulbar dysfunction in ALS, for predicting disease progression across speech subsystems, and for addressing the critical need for sensitive outcome measures for ongoing experimental treatment trials. To address this need, we are obtaining longitudinal profiles of bulbar impairment in 100 individuals based on a comprehensive instrumentation-based assessment that yields objective measures. Using instrumental approaches to quantify speech-related behaviors is very important in a field that has primarily relied on subjective, auditory-perceptual forms of speech assessment1. Our assessment protocol measures performance across all of the speech subsystems: respiratory, phonatory (laryngeal), resonatory (velopharyngeal), and articulatory. The articulatory subsystem is divided into the facial components (jaw and lip) and the tongue. Prior research has suggested that each speech subsystem responds differently to neurological diseases such as ALS. The current protocol is designed to test the performance of each speech subsystem as independently from the others as possible. The speech subsystems are also evaluated in the context of more global changes to speech performance, indexed by system-level variables such as speaking rate and speech intelligibility. The protocol requires specialized instrumentation as well as commercial and custom software. The respiratory, phonatory, and resonatory subsystems are evaluated using pressure-flow (aerodynamic) and acoustic methods. The articulatory subsystem is assessed using 3D motion tracking techniques. The objective measures that are used to quantify bulbar impairment have been well established in the speech literature and show sensitivity to changes in bulbar function with disease progression. The result of the assessment is a comprehensive, across-subsystem performance profile for each participant. The profile, when compared to the same measures obtained from healthy controls, is used for diagnostic purposes. Currently, we are testing the sensitivity and specificity of these measures for diagnosis of ALS and for predicting the rate of disease progression. In the long term, the more refined endophenotype of bulbar ALS derived from this work is expected to strengthen future efforts to identify the genetic loci of ALS and to improve diagnostic and treatment specificity of the disease as a whole. The objective assessment demonstrated in this video may be used to assess a broad range of speech motor impairments, including those related to stroke, traumatic brain injury, multiple sclerosis, and Parkinson disease.
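The system-level measures reduce to simple ratios; as a trivial illustration, speaking rate can be computed as syllables per second from a timed passage reading (numbers invented):

syllables = 110          # syllable count of the reading passage (hypothetical)
duration_s = 35.2        # time taken to read it aloud, in seconds (hypothetical)

speaking_rate = syllables / duration_s
print(f"speaking rate: {speaking_rate:.1f} syllables/s")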
Medicine, Issue 48, speech, assessment, subsystems, bulbar function, amyotrophic lateral sclerosis
Mapping the After-effects of Theta Burst Stimulation on the Human Auditory Cortex with Functional Imaging
Authors: Jamila Andoh, Robert J. Zatorre.
Institutions: McGill University.
The auditory cortex underlies the processing of sound, which is at the basis of speech- and music-related processing1. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function2. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions3. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea is that fMRI provides an index of changes in brain activity associated with TMS. Thus, fMRI gives an independent means of assessing which areas are affected by TMS and how they are modulated4. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only for measuring the net activity modulation induced by TMS in given locations, but also for measuring the degree to which network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS5-7. However, this online combination has many technical problems, including the static artifacts resulting from the presence of the TMS coil in the scanner room, and the effects of TMS pulses on the process of MR image formation. More importantly, the loud acoustic noise induced by TMS (increased compared with standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing. This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex8,9, premotor cortex10, primary somatosensory cortex11,12 and language-related areas13, but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details concerning the protocol and considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing. Previously we showed that repetitive TMS (rTMS) at high and low frequencies (10 Hz and 1 Hz, respectively) applied over the auditory cortex modulated response time (RT) in a melody discrimination task2. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the higher the facilitatory effect (i.e., decreased RT) observed with rTMS. However, those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS. Combining fMRI and TMS applied over the auditory cortex should enable a better understanding of the brain mechanisms of auditory processing, providing physiological information about the functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.
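As a hedged illustration of the correlational analysis described above (relating interhemispheric auditory connectivity to the rTMS-induced RT change across participants), with invented placeholder data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
connectivity = rng.uniform(0.2, 0.9, size=16)         # left-right auditory coupling
rt_facilitation = 40 * connectivity + rng.normal(0, 5, size=16)  # ms RT decrease

r, p = stats.pearsonr(connectivity, rt_facilitation)
print(f"r = {r:.2f}, p = {p:.3f}")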
Neuroscience, Issue 67, Physiology, Physics, Theta burst stimulation, functional magnetic resonance imaging, MRI, auditory cortex, frameless stereotaxy, sound, transcranial magnetic stimulation
A Lightweight, Headphones-based System for Manipulating Auditory Feedback in Songbirds
Authors: Lukas A. Hoffmann, Conor W. Kelly, David A. Nicholson, Samuel J. Sober.
Institutions: Emory University.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity1. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements2,3 and auditory feedback is used to modify speech production4-7. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output. Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity8,9. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior9-12. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors13. The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction14. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
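The actual apparatus performs the shift online in dedicated sound-processing hardware; purely to illustrate the acoustic manipulation itself, an offline one-semitone pitch shift can be sketched in Python (the file name is hypothetical):

import librosa
import soundfile as sf

# Load a recorded song bout and shift its fundamental frequency upward.
y, sr = librosa.load("song_bout.wav", sr=None)
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)  # +1 semitone
sf.write("song_bout_shifted.wav", y_shifted, sr)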
Neuroscience, Issue 69, Anatomy, Physiology, Zoology, Behavior, Songbird, psychophysics, auditory feedback, biology, sensorimotor learning
Quantitative Assessment of Cortical Auditory-tactile Processing in Children with Disabilities
Authors: Nathalie L. Maitre, Alexandra P. Key.
Institutions: Vanderbilt University.
Objective and easy measurement of sensory processing is extremely difficult in nonverbal or vulnerable pediatric patients. We developed a new methodology to quantitatively assess children's cortical processing of light touch, speech sounds, and the multisensory processing of the two stimuli, without requiring active subject participation or causing children discomfort. To accomplish this, we developed a dual-channel, time- and strength-calibrated air-puff stimulator that allows both tactile stimulation and sham control. We combined this with event-related potential methodology to allow high temporal resolution of signals from the primary and secondary somatosensory cortices as well as higher-order processing. This methodology also allowed us to measure a multisensory response to auditory-tactile stimulation.
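A minimal sketch of the core event-related potential computation (epoch around each stimulus onset, baseline-correct, average across trials); the channel count, sampling rate, and window lengths are assumptions, not the authors' parameters:

import numpy as np

def compute_erp(eeg, onsets, fs=1000, pre=0.1, post=0.5):
    """eeg: (n_channels, n_samples); onsets: stimulus sample indices."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[:, t - n_pre : t + n_post] for t in onsets])
    epochs -= epochs[:, :, :n_pre].mean(axis=2, keepdims=True)  # baseline
    return epochs.mean(axis=0)   # trial-averaged ERP, (n_channels, n_times)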
Behavior, Issue 83, somatosensory, event related potential, auditory-tactile, multisensory, cortical response, child
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior so that trials can be sorted by outcome, allowing us to examine the brain signals related to correct performance. Conversely, if subjects do not perform the task correctly and these error trials are included in the same analysis as the correct trials, the analysis no longer reflects correct performance alone. Moreover, in many cases the errors can themselves be analyzed by correlating brain activity with them. We describe two complementary tasks that are used in our lab to examine the brain during suppression of automatic responses: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between two simultaneously competing processes: word reading and facial expression recognition. Our urge to read a word aloud creates strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3-6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must then be made to the mirror-opposite position. Here again we measure behavior by recording participants' eye movements, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the distinct behaviors of correct and error trials, which are indicative of different cognitive processes, and to pinpoint the different neural networks involved.
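The trial-sorting logic is simple enough to sketch; the variable names and values below are illustrative, not from the authors' code:

import numpy as np

trial_onsets = np.array([12.0, 30.5, 48.0, 66.5, 84.0])   # seconds (hypothetical)
is_correct = np.array([True, True, False, True, False])   # scored from eye data

correct_onsets = trial_onsets[is_correct]   # events for the 'correct' regressor
error_onsets = trial_onsets[~is_correct]    # separate 'error' regressor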
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
Assessment of Cerebral Lateralization in Children using Functional Transcranial Doppler Ultrasound (fTCD)
Authors: Dorothy V. M. Bishop, Nicholas A. Badcock, Georgina Holt.
Institutions: University of Oxford.
There are many unanswered questions about cerebral lateralization. In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of the laterality of cognitive functions1. Other methods, such as fMRI, are expensive for large-scale studies and not always feasible with children2. Here we describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive, and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our approach builds on the work of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe, and their colleagues at the University of Münster, who pioneered simultaneous measurement of blood flow in the left and right middle cerebral arteries and devised a method of correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3. The middle cerebral artery has a very wide vascular territory (see Figure 1), and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is word generation (e.g., think of as many words as you can that begin with the letter 'B')4, but this is not suitable for young children and others with limited literacy skills. Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, so we are able to get a reliable result from tasks that involve describing pictures aloud5,6. Accordingly, we have developed a child-friendly task that involves looking at video clips that tell a story and then describing what was seen.
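A sketch of the standard fTCD laterality index (the mean left-minus-right difference in heart-cycle-corrected, percent-normalized blood flow velocity during a period of interest); the arrays and window are placeholders rather than the authors' exact parameters:

import numpy as np

def laterality_index(v_left, v_right, period_of_interest):
    """v_left, v_right: epoch-averaged velocity curves (% signal change);
    period_of_interest: slice over samples following cue onset."""
    diff = v_left - v_right
    return diff[period_of_interest].mean()   # positive = left-lateralized

# Example call with a placeholder window:
# li = laterality_index(v_left, v_right, slice(4000, 14000))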
Neuroscience, Issue 43, functional transcranial Doppler ultrasound, cerebral lateralization, language, child
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
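In practice this kind of source reconstruction is done with dedicated packages (e.g. MNE-Python) and individual head models; the underlying L2-minimum-norm step reduces to one line of linear algebra, sketched here with random placeholder matrices:

import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((128, 5000))   # gain matrix: 128 channels x 5000 sources
m = rng.standard_normal(128)           # EEG measurement at one time sample
lam = 0.1                              # regularization parameter (assumed)

# Minimum-norm estimate: j = G^T (G G^T + lam * I)^-1 m
j_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(G.shape[0]), m)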
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing, and thinking about letters, words, and numbers. We describe our method for investigating the extent to which nonsynesthetes can learn synesthetic associations between letters and colors by reading in color. Reading in color is a distinctive training method in that the associations are learned implicitly, while the reader reads text as he or she normally would, and it does not require explicit, computer-directed training. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading-experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
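The learning measure from the modified Stroop task is typically a congruency effect: slower responses when a letter appears in a color that clashes with its trained pairing. A toy sketch with invented reaction times:

import numpy as np

rt_congruent = np.array([520, 540, 510, 535])     # ms, trained letter-color pairs
rt_incongruent = np.array([560, 585, 555, 570])   # ms, clashing colors

effect = rt_incongruent.mean() - rt_congruent.mean()
print(f"congruency effect: {effect:.0f} ms")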
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Making Sense of Listening: The IMAP Test Battery
Authors: Johanna G. Barry, Melanie A. Ferguson, David R. Moore.
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays are hotly debated. When a child with normal hearing is referred to a health professional with unexplained difficulties in listening, or with associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests measuring attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized on 1500 normally-hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
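Child psychoacoustic tests of this kind usually track threshold with an adaptive staircase; as a generic illustration only (a 3-down/1-up rule converging near 79% correct, not necessarily the rule IMAP uses):

def staircase(run_trial, start_level=40.0, step=2.0, n_trials=60):
    """run_trial(level) -> True if the child responded correctly."""
    level, n_correct, track = start_level, 0, []
    for _ in range(n_trials):
        track.append(level)
        if run_trial(level):
            n_correct += 1
            if n_correct == 3:       # three in a row correct -> make it harder
                level -= step
                n_correct = 0
        else:                        # one error -> make it easier
            level += step
            n_correct = 0
    return track                     # reversals of this track estimate threshold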
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Extracting Visual Evoked Potentials from EEG Data Recorded During fMRI-guided Transcranial Magnetic Stimulation
Authors: Boaz Sadeh, Galit Yovel.
Institutions: Tel-Aviv University.
Transcranial Magnetic Stimulation (TMS) is an effective method for establishing a causal link between a cortical area and cognitive/neurophysiological effects. Specifically, by creating a transient interference with the normal activity of a target region and measuring changes in an electrophysiological signal, we can establish a causal link between the stimulated brain area or network and the electrophysiological signal that we record. If target brain areas are functionally defined with a prior fMRI scan, TMS can be used to link the fMRI activations with the recorded evoked potentials. However, conducting such experiments presents significant technical challenges, given the high-amplitude artifacts introduced into the EEG signal by the magnetic pulse and the difficulty of accurately targeting areas that were functionally defined by fMRI. Here we describe a methodology for combining these three common tools: TMS, EEG, and fMRI. We explain how to guide the stimulator's coil to the desired target area using anatomical or functional MRI data, how to record EEG during concurrent TMS, how to design an ERP study suitable for the EEG-TMS combination, and how to extract reliable ERPs from the recorded data. We provide representative results from a previously published study, in which fMRI-guided TMS was used concurrently with EEG to show that the face-selective N1 and the body-selective N1 components of the ERP are associated with distinct neural networks in extrastriate cortex. This method allows us to combine the high spatial resolution of fMRI with the high temporal resolution of TMS and EEG and thereby obtain a comprehensive understanding of the neural basis of various cognitive processes.
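One common way to make the EEG usable around the pulse is to excise a short window and interpolate across it; the window lengths below are assumptions, not the published values:

import numpy as np

def interpolate_pulse_artifact(eeg, pulse_sample, fs, pre_ms=2.0, post_ms=10.0):
    """eeg: (n_channels, n_samples); linearly bridges the artifact span."""
    a = pulse_sample - int(pre_ms * fs / 1000)
    b = pulse_sample + int(post_ms * fs / 1000)
    ramp = np.linspace(0.0, 1.0, b - a)
    eeg[:, a:b] = eeg[:, [a]] + (eeg[:, [b]] - eeg[:, [a]]) * ramp
    return eeg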
Neuroscience, Issue 87, Transcranial Magnetic Stimulation, Neuroimaging, Neuronavigation, Visual Perception, Evoked Potentials, Electroencephalography, Event-related potential, fMRI, Combined Neuroimaging Methods, Face perception, Body Perception
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
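Chronometric designs like the one described vary the pulse onset relative to stimulus presentation; a sketch of how such a trial list might be constructed (the onsets and stimuli are illustrative, not the study's values):

import random

stimulus_onset_asynchronies = [40, 80, 120, 160, 200]    # ms after word onset
words = ["cat", "dog", "pen"]                            # hypothetical stimuli

trials = [(word, soa) for word in words
          for soa in stimulus_onset_asynchronies]
random.shuffle(trials)   # randomize TMS timing across trials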
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
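Categorical perception along a morph continuum is commonly tested by fitting a sigmoid to identification responses and looking for a sharp category boundary; a sketch with invented data points:

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_level = np.linspace(0, 1, 11)                  # avatar -> human
p_human = np.array([0.00, 0.02, 0.05, 0.10, 0.20, 0.50,
                    0.85, 0.95, 0.98, 1.00, 1.00])   # proportion 'human' responses

(x0, k), _ = curve_fit(sigmoid, morph_level, p_human, p0=[0.5, 10.0])
print(f"category boundary at morph level {x0:.2f}, slope {k:.1f}")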
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Optogenetic Stimulation of the Auditory Nerve
Authors: Victor H. Hernandez, Anna Gehrt, Zhizi Jing, Gerhard Hoch, Marcus Jeschke, Nicola Strenzke, Tobias Moser.
Institutions: University Medical Center Goettingen, University of Goettingen, University of Guanajuato.
Direct electrical stimulation of spiral ganglion neurons (SGNs) by cochlear implants (CIs) enables open speech comprehension in the majority of implanted deaf subjects1-6. Nonetheless, sound coding with current CIs has poor frequency and intensity resolution due to broad current spread from each electrode contact, which activates a large number of SGNs along the tonotopic axis of the cochlea7-9. Optical stimulation has been proposed as an alternative to electrical stimulation that promises spatially more confined activation of SGNs and, hence, higher frequency resolution of coding. In recent years, direct infrared illumination of the cochlea has been used to evoke responses in the auditory nerve10. Nevertheless, it requires higher energies than electrical stimulation10,11, and uncertainty remains as to the underlying mechanism12. Here we describe a method based on optogenetics to stimulate SGNs with low-intensity blue light, using transgenic mice with neuronal expression of channelrhodopsin-2 (ChR2)13 or virus-mediated expression of the ChR2 variant CatCh14. We used micro-light-emitting diodes (µLEDs) and fiber-coupled lasers to stimulate ChR2-expressing SGNs through a small artificial opening (cochleostomy) or the round window. We assayed the responses by scalp recordings of light-evoked potentials (optogenetic auditory brainstem response: oABR) or by microelectrode recordings from the auditory pathway and compared them with responses to acoustic and electrical stimulation.
Neuroscience, Issue 92, hearing, cochlear implant, optogenetics, channelrhodopsin, optical stimulation, deafness
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
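The decoding logic generalizes a classifier across conditions: train on patterns evoked by one category pair, test on held-out patterns. A minimal sketch with random placeholder voxel patterns (the studies' actual features, classifier, and cross-validation scheme may differ):

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
X_train = rng.standard_normal((40, 300))   # 40 trials x 300 auditory-cortex voxels
y_train = np.repeat([0, 1], 20)            # two sound-implying video categories
X_test = rng.standard_normal((20, 300))    # patterns from a held-out run
y_test = np.repeat([0, 1], 10)

clf = LinearSVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))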
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in the four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epilepsy and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization, for deriving tissue boundaries for forward modeling, and for cortical location and orientation constraints for the minimum-norm estimates.
Neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
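JoVE has not published the details of its matching algorithm; purely as an illustration of one standard approach to this kind of abstract-to-video matching (TF-IDF vectors ranked by cosine similarity, with made-up text snippets):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_texts = ["TMS of the lip motor cortex during speech perception",
               "manipulating auditory feedback in singing birds"]
abstract = ["functional connectivity during auditory-only speech perception"]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(video_texts + abstract)
scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
ranked_videos = scores.argsort()[::-1]     # best-matching videos first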

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.