Preattentive extraction of abstract auditory rules in speech sound stream: a mismatch negativity study using lexical tones.
Extraction of linguistically relevant auditory features is critical for speech comprehension in complex auditory environments, in which the relationships between acoustic stimuli are often abstract and constant while the stimuli themselves vary. These relationships are referred to as abstract auditory rules in speech, and their underlying neural mechanisms have been investigated at the attentive stage. However, whether a sensory intelligence exists that automatically encodes abstract auditory rules in speech at a preattentive stage has not yet been thoroughly addressed.
Authors: Johanna G. Barry, Melanie A. Ferguson, David R. Moore.
Published: 10-11-2010
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child with normal hearing is referred to a health professional with unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component; poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on language-free stimuli. We present the IMAP test battery, developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1,500 normally hearing children, aged 6-11 years, from across the UK.
Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
16 Related JoVE Articles!
Stimulating the Lip Motor Cortex with Transcranial Magnetic Stimulation
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Institutions: University of Oxford.
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
Behavior, Issue 88, electromyography, motor cortex, motor evoked potential, motor excitability, speech, repetitive TMS, rTMS, virtual lesion, transcranial magnetic stimulation
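The stimulation-intensity step mentioned at the end of the abstract can be sketched numerically. A common rule of thumb in the TMS literature (an assumption here, not a value taken from this article) defines resting motor threshold as the lowest intensity at which at least 5 of 10 pulses evoke MEPs of at least 50 µV peak-to-peak:

```python
def at_or_above_rmt(mep_amplitudes_uv, criterion_uv=50.0, min_hits=5):
    """Return True if enough trials at a given stimulator intensity
    evoke MEPs of at least `criterion_uv` microvolts (peak-to-peak).
    Common convention: >= 5 of 10 trials."""
    hits = sum(a >= criterion_uv for a in mep_amplitudes_uv)
    return hits >= min_hits

# Example: 6 of 10 trials exceed 50 uV -> intensity is at/above threshold.
trials = [12, 55, 80, 30, 60, 71, 49, 52, 90, 10]
print(at_or_above_rmt(trials))  # True
```

In practice the experimenter would step the stimulator output down until this criterion first fails, and take the last passing intensity as threshold.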
Quantitative Assessment of Cortical Auditory-tactile Processing in Children with Disabilities
Authors: Nathalie L. Maitre, Alexandra P. Key.
Institutions: Vanderbilt University.
Objective and easy measurement of sensory processing is extremely difficult in nonverbal or vulnerable pediatric patients. We developed a new methodology to quantitatively assess children's cortical processing of light touch, speech sounds, and the multisensory processing of the two stimuli, without requiring active subject participation or causing children discomfort. To accomplish this, we developed a dual-channel, time- and strength-calibrated air puff stimulator that allows both tactile stimulation and sham control. We combined this with event-related potential methodology to allow for high temporal resolution of signals from the primary and secondary somatosensory cortices as well as higher-order processing. This methodology also allowed us to measure a multisensory response to auditory-tactile stimulation.
Behavior, Issue 83, somatosensory, event related potential, auditory-tactile, multisensory, cortical response, child
Optogenetic Stimulation of the Auditory Nerve
Authors: Victor H. Hernandez, Anna Gehrt, Zhizi Jing, Gerhard Hoch, Marcus Jeschke, Nicola Strenzke, Tobias Moser.
Institutions: University Medical Center Goettingen, University of Goettingen, University of Guanajuato.
Direct electrical stimulation of spiral ganglion neurons (SGNs) by cochlear implants (CIs) enables open speech comprehension in the majority of implanted deaf subjects1-6. Nonetheless, sound coding with current CIs has poor frequency and intensity resolution due to broad current spread from each electrode contact activating a large number of SGNs along the tonotopic axis of the cochlea7-9. Optical stimulation is proposed as an alternative to electrical stimulation that promises spatially more confined activation of SGNs and, hence, higher frequency resolution of coding. In recent years, direct infrared illumination of the cochlea has been used to evoke responses in the auditory nerve10. Nevertheless it requires higher energies than electrical stimulation10,11 and uncertainty remains as to the underlying mechanism12. Here we describe a method based on optogenetics to stimulate SGNs with low intensity blue light, using transgenic mice with neuronal expression of channelrhodopsin 2 (ChR2)13 or virus-mediated expression of the ChR2-variant CatCh14. We used micro-light emitting diodes (µLEDs) and fiber-coupled lasers to stimulate ChR2-expressing SGNs through a small artificial opening (cochleostomy) or the round window. We assayed the responses by scalp recordings of light-evoked potentials (optogenetic auditory brainstem response: oABR) or by microelectrode recordings from the auditory pathway and compared them with acoustic and electrical stimulation.
Neuroscience, Issue 92, hearing, cochlear implant, optogenetics, channelrhodopsin, optical stimulation, deafness
A Lightweight, Headphones-based System for Manipulating Auditory Feedback in Songbirds
Authors: Lukas A. Hoffmann, Conor W. Kelly, David A. Nicholson, Samuel J. Sober.
Institutions: Emory University.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity1. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements2,3 and auditory feedback is used to modify speech production4-7. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output. Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity8,9. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior9-12. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors13. The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. 
Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction14. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
Neuroscience, Issue 69, Anatomy, Physiology, Zoology, Behavior, Songbird, psychophysics, auditory feedback, biology, sensorimotor learning
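The fundamental-frequency perturbations described above are conventionally specified in cents (100 cents = 1 semitone), and the conversion to a frequency ratio is standard. A minimal sketch; the 440 Hz example tone is illustrative, not a stimulus from this protocol:

```python
import numpy as np

def pitch_shift_ratio(cents):
    """Frequency ratio corresponding to a pitch shift in cents
    (1200 cents = 1 octave, 100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

# Shift a 440 Hz tone up one semitone (+100 cents).
fs = 44100
t = np.arange(0, 0.01, 1.0 / fs)
f0 = 440.0
shifted = np.sin(2 * np.pi * f0 * pitch_shift_ratio(100) * t)
print(round(f0 * pitch_shift_ratio(100), 1))  # 466.2
```

Real-time implementations apply this ratio with a pitch-shifting algorithm (e.g., a phase vocoder) rather than by resynthesizing a tone, but the target frequency is computed the same way.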
P50 Sensory Gating in Infants
Authors: Anne Spencer Ross, Sharon Kay Hunter, Mark A Groth, Randal Glenn Ross.
Institutions: University of Colorado School of Medicine, Colorado State University.
Attentional deficits are common in a variety of neuropsychiatric disorders including attention deficit-hyperactivity disorder, autism, bipolar mood disorder, and schizophrenia. There has been increasing interest in the neurodevelopmental components of these attentional deficits; neurodevelopmental here meaning that while the deficits become clinically prominent in childhood or adulthood, they result from problems in brain development that begin in infancy or even prenatally. Despite this interest, there are few methods for assessing attention very early in infancy. This report focuses on one method, infant auditory P50 sensory gating. Attention has several components. One of the earliest, termed sensory gating, allows the brain to tune out repetitive, noninformative sensory information. Auditory P50 sensory gating refers to one task designed to measure sensory gating using changes in EEG. When identical auditory stimuli are presented 500 ms apart, the evoked response (the change in the EEG associated with processing the click) to the second stimulus is generally reduced relative to the response to the first stimulus (i.e., the response is "gated"). When the response to the second stimulus is not reduced, this is considered poor sensory gating; it reflects impaired cerebral inhibition and is correlated with attentional deficits. Because the auditory P50 sensory gating task is passive, it is of potential utility in the study of young infants and may provide a window into the developmental time course of attentional deficits in a variety of neuropsychiatric disorders. The goal of this presentation is to describe the methodology for assessing infant auditory P50 sensory gating, adapted from the methodologies used in studies of adult populations.
Behavior, Issue 82, Child Development, Psychophysiology, Attention Deficit and Disruptive Behavior Disorders, Evoked Potentials, Auditory, auditory evoked potential, sensory gating, infant, attention, electrophysiology, endophenotype, P50
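The suppression described above is commonly summarized as an S2/S1 amplitude ratio, with smaller values indicating stronger gating. A minimal sketch, assuming a simple peak-amplitude measure in a 40-80 ms post-stimulus window; the Gaussian waveforms are synthetic stand-ins for averaged evoked responses:

```python
import numpy as np

def p50_amplitude(evoked, times, window=(0.04, 0.08)):
    """Peak amplitude of the evoked response within the given
    post-stimulus window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(np.max(evoked[mask]))

def gating_ratio(evoked_s1, evoked_s2, times):
    """S2/S1 ratio: values well below 1 indicate gating
    (a suppressed response to the second click)."""
    return p50_amplitude(evoked_s2, times) / p50_amplitude(evoked_s1, times)

# Synthetic example: the S2 response is half the S1 amplitude.
times = np.linspace(-0.1, 0.3, 401)
s1 = 4.0 * np.exp(-((times - 0.05) ** 2) / (2 * 0.01 ** 2))
s2 = 2.0 * np.exp(-((times - 0.05) ** 2) / (2 * 0.01 ** 2))
print(round(gating_ratio(s1, s2, times), 2))  # 0.5
```

A ratio near 1 on this measure would correspond to the "not reduced" case the abstract associates with impaired gating.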
Functional Imaging of Auditory Cortex in Adult Cats using High-field fMRI
Authors: Trecia A. Brown, Joseph S. Gati, Sarah M. Hughes, Pam L. Nixon, Ravi S. Menon, Stephen G. Lomber.
Institutions: University of Western Ontario.
Current knowledge of sensory processing in the mammalian auditory system is mainly derived from electrophysiological studies in a variety of animal models, including monkeys, ferrets, bats, rodents, and cats. In order to draw suitable parallels between human and animal models of auditory function, it is important to establish a bridge between human functional imaging studies and animal electrophysiological studies. Functional magnetic resonance imaging (fMRI) is an established, minimally invasive method of measuring broad patterns of hemodynamic activity across different regions of the cerebral cortex. This technique is widely used to probe sensory function in the human brain, is a useful tool in linking studies of auditory processing in both humans and animals and has been successfully used to investigate auditory function in monkeys and rodents. The following protocol describes an experimental procedure for investigating auditory function in anesthetized adult cats by measuring stimulus-evoked hemodynamic changes in auditory cortex using fMRI. This method facilitates comparison of the hemodynamic responses across different models of auditory function thus leading to a better understanding of species-independent features of the mammalian auditory cortex.
Neuroscience, Issue 84, Central Nervous System, Ear, Animal Experimentation, Models, Animal, Functional Neuroimaging, Brain Mapping, Nervous System, Sense Organs, auditory cortex, BOLD signal change, hemodynamic response, hearing, acoustic stimuli
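Stimulus-evoked BOLD responses like those measured here are typically modeled by convolving the stimulus timing with a canonical hemodynamic response function. A sketch using the conventional double-gamma form; the parameter values are common defaults in the fMRI literature, not values taken from this article:

```python
import math
import numpy as np

def double_gamma_hrf(t, a1=6.0, a2=16.0, ratio=1 / 6.0):
    """Canonical double-gamma HRF: a positive peak (~5 s post-stimulus)
    minus a scaled, delayed undershoot term."""
    peak = t ** (a1 - 1) * np.exp(-t) / math.gamma(a1)
    undershoot = t ** (a2 - 1) * np.exp(-t) / math.gamma(a2)
    return peak - ratio * undershoot

# Predicted BOLD response to a 2 s acoustic stimulus block.
dt = 0.1
t = np.arange(0, 30, dt)
hrf = double_gamma_hrf(t)
stimulus = (t < 2.0).astype(float)            # 2 s boxcar at onset
predicted = np.convolve(stimulus, hrf)[:len(t)] * dt
print(round(t[int(np.argmax(hrf))], 1))  # 5.0
```

The delayed, sluggish shape of this kernel is why block designs with sufficiently long stimulation and rest periods are standard in auditory fMRI.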
Mapping the After-effects of Theta Burst Stimulation on the Human Auditory Cortex with Functional Imaging
Authors: Jamila Andoh, Robert J. Zatorre.
Institutions: McGill University.
The auditory cortex underlies the processing of sound, which is the basis of speech- and music-related processing1. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function2. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions3. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea here is that fMRI provides an index of changes in brain activity associated with TMS. Thus, fMRI gives an independent means of assessing which areas are affected by TMS and how they are modulated4. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies have interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS5-7.
However, this online combination has many technical problems, including the static artifacts resulting from the presence of the TMS coil in the scanner room, and the effects of TMS pulses on the process of MR image formation. More importantly, the loud acoustic noise induced by TMS (increased compared with standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing. This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex8,9, premotor cortex10, primary somatosensory cortex11,12 and language-related areas13, but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details concerning the protocol and considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing. Previously, we showed that repetitive TMS (rTMS) at high and low frequencies (10 Hz and 1 Hz, respectively) applied over the auditory cortex modulated response time (RT) in a melody discrimination task2. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the higher the facilitatory effect (i.e., decreased RT) observed with rTMS. However, those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS.
Combined fMRI and TMS applied over the auditory cortex should enable a better understanding of brain mechanisms of auditory processing, providing physiological information about functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.
Neuroscience, Issue 67, Physiology, Physics, Theta burst stimulation, functional magnetic resonance imaging, MRI, auditory cortex, frameless stereotaxy, sound, transcranial magnetic stimulation
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Behavioral Determination of Stimulus Pair Discrimination of Auditory Acoustic and Electrical Stimuli Using a Classical Conditioning and Heart-rate Approach
Authors: Simeon J. Morgan, Antonio G. Paolini.
Institutions: La Trobe University.
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants1-3 and auditory midbrain implants4,5. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices6,7. Several techniques such as reward-based operant conditioning6-8, conditioned avoidance9-11, or classical fear conditioning12 have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measures across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding of freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown.
Subsequent implantation of brain electrodes into the Cochlear Nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode into place for chronic use is likewise shown.
Neuroscience, Issue 64, Physiology, auditory, hearing, brainstem, stimulation, rat, abi
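The heart-rate measure described above reduces, at its core, to inter-beat (R-R) intervals extracted from the ECG telemetry. A minimal sketch; the 10 bpm response criterion and the beat timings are hypothetical illustrations, not the authors' values:

```python
import numpy as np

def heart_rate_bpm(r_peak_times_s):
    """Mean heart rate in beats/min from ECG R-peak timestamps (s)."""
    rr = np.diff(r_peak_times_s)          # inter-beat (R-R) intervals
    return 60.0 / float(np.mean(rr))

def conditioned_response(pre_peaks, post_peaks, threshold_bpm=10.0):
    """Score a trial as a fear response if heart rate changes by at
    least `threshold_bpm` after stimulus onset (hypothetical rule)."""
    delta = heart_rate_bpm(post_peaks) - heart_rate_bpm(pre_peaks)
    return delta, abs(delta) >= threshold_bpm

# 120 bpm before the stimulus, 100 bpm after -> change of -20 bpm.
pre = np.arange(0.0, 5.0, 0.5)            # one beat every 0.5 s
post = np.arange(5.0, 11.0, 0.6)          # one beat every 0.6 s
delta, is_response = conditioned_response(pre, post)
print(round(delta), is_response)  # -20 True
```

In practice the per-trial statistic would be compared across stimulus and sham trials rather than against a fixed threshold, but the underlying quantity is the same R-R-derived rate change.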
Functional Magnetic Resonance Imaging (fMRI) with Auditory Stimulation in Songbirds
Authors: Lisbeth Van Ruijssevelt, Geert De Groof, Anne Van der Kant, Colline Poirier, Johan Van Audekerke, Marleen Verhoye, Annemie Van der Linden.
Institutions: University of Antwerp.
The neurobiology of birdsong, as a model for human speech, is a prominent area of research in behavioral neuroscience. Whereas electrophysiology and molecular approaches allow the investigation of either different stimuli in a few neurons or one stimulus across large parts of the brain, blood oxygenation level dependent (BOLD) functional Magnetic Resonance Imaging (fMRI) combines both advantages, i.e., comparing the neural activation induced by different stimuli in the entire brain at once. fMRI in songbirds is challenging because of the small size of their brains and because their bones, and especially their skull, comprise numerous air cavities, inducing substantial susceptibility artifacts. Gradient-echo (GE) BOLD fMRI has been successfully applied to songbirds1-5 (for a review, see 6). These studies focused on the primary and secondary auditory brain areas, which are regions free of susceptibility artifacts. However, because processes of interest may occur beyond these regions, whole-brain BOLD fMRI is required, using an MRI sequence less vulnerable to these artifacts. This can be achieved by using spin-echo (SE) BOLD fMRI7,8. In this article, we describe how to use this technique in zebra finches (Taeniopygia guttata), small songbirds with a bodyweight of 15-25 g extensively studied in the behavioral neuroscience of birdsong. The main topic of fMRI studies on songbirds is song perception and song learning. The auditory nature of the stimuli, combined with the weak BOLD sensitivity of SE-based (compared to GE-based) fMRI sequences, makes the implementation of this technique very challenging.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Medicine, Biophysics, Physiology, Anatomy, Functional MRI, fMRI, Magnetic Resonance Imaging, MRI, blood oxygenation level dependent fMRI, BOLD fMRI, Brain, Songbird, zebra finches, Taeniopygia guttata, Auditory Stimulation, stimuli, animal model, imaging
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex onto the M/EEG sensors.
Next, we will step through the processes necessary to denoise the sensor data of environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
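The minimum-norm estimate at the heart of this procedure has a compact closed form. The toy NumPy implementation below (identity noise covariance, hypothetical sensor and source counts) shows the regularized inverse that packages such as MNE implement at scale:

```python
import numpy as np

def minimum_norm_estimate(gain, data, snr=3.0):
    """Regularized minimum-norm inverse:
        X = G^T (G G^T + lambda^2 I)^(-1) M,
    with lambda^2 = 1 / SNR^2 (whitened data, identity noise
    covariance assumed for this sketch)."""
    lam2 = 1.0 / snr ** 2
    n_sensors = gain.shape[0]
    gram = gain @ gain.T + lam2 * np.eye(n_sensors)
    return gain.T @ np.linalg.solve(gram, data)

# Toy example: 10 sensors, 50 candidate cortical sources, one sample.
rng = np.random.default_rng(0)
gain = rng.standard_normal((10, 50))        # forward model G
true_sources = np.zeros((50, 1))
true_sources[7] = 1.0                       # a single active source
data = gain @ true_sources                  # simulated sensor data M
estimate = minimum_norm_estimate(gain, data)
print(estimate.shape)  # (50, 1)
```

Because there are far fewer sensors than sources, the problem is underdetermined; the minimum-norm criterion picks the source distribution with the smallest overall power consistent with the data, which is why the estimate is spatially smeared rather than focal.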
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the reconstruction of content-specific neural activity patterns in early sensory cortices.
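The cross-modal decoding logic can be sketched with synthetic data. The classifier below is a simple nearest-centroid correlation classifier, not the specific algorithm used in the cited studies, and the voxel patterns and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "auditory cortex" patterns: 40 voxels, two sound categories
# (shattering glass vs. howling dog), 20 trials each. Purely synthetic.
n_voxels, n_trials = 40, 20
proto = {"glass": rng.standard_normal(n_voxels),
         "dog": rng.standard_normal(n_voxels)}

def make_trials(noise):
    X, y = [], []
    for label, p in proto.items():
        X.append(p + noise * rng.standard_normal((n_trials, n_voxels)))
        y += [label] * n_trials
    return np.vstack(X), y

# Train on patterns evoked by the sounds themselves...
X_train, y_train = make_trials(0.5)
# ...test on patterns evoked by the *silent* videos: a weaker, noisier
# re-instatement of the same category-specific patterns.
X_test, y_test = make_trials(1.0)

# Nearest-centroid classification on pattern correlation.
centroids = {c: X_train[[l == c for l in y_train]].mean(axis=0)
             for c in proto}

def predict(x):
    return max(centroids, key=lambda c: np.corrcoef(x, centroids[c])[0, 1])

acc = np.mean([predict(x) == t for x, t in zip(X_test, y_test)])
```

The key feature mirrored here is that training and test sets come from different modalities, so above-chance accuracy indicates content-specific reinstatement of the pattern rather than simple stimulus repetition.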
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We will use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in the four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epilepsy and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization, for deriving the tissue boundaries needed for forward modeling, and for the cortical location and orientation constraints of the minimum-norm estimates.
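The equivalent current dipole model mentioned above can be illustrated with a toy least-squares fit. The gain matrix and the active source below are synthetic; a real fit scans dipole locations and orientations inside a head model derived from the anatomical MRI.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy gain matrix: each column is the field pattern a unit dipole at one
# candidate location produces at 30 sensors (in practice derived from
# the boundary-element forward model).
n_sensors, n_dipoles = 30, 200
G = rng.standard_normal((n_sensors, n_dipoles))

# Measured evoked field: dipole 57 active with amplitude 2.5, plus noise.
m = 2.5 * G[:, 57] + 0.05 * rng.standard_normal(n_sensors)

# Single equivalent-current-dipole fit by exhaustive scan: for each
# candidate location the best least-squares amplitude is (g.m)/(g.g);
# keep the location with the smallest residual (highest goodness of fit).
def fit_ecd(G, m):
    amps = (G.T @ m) / np.sum(G**2, axis=0)
    resid = np.linalg.norm(m[:, None] - G * amps, axis=0)
    best = int(np.argmin(resid))
    gof = 1.0 - resid[best]**2 / np.sum(m**2)   # fraction of field explained
    return best, float(amps[best]), gof

loc, amp, gof = fit_ecd(G, m)
```

A single-dipole fit like this suits the focal early sensory responses targeted here, whereas the minimum-norm estimates also demonstrated in the protocol relax the assumption of one focal source.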
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging
A Fully Automated and Highly Versatile System for Testing Multi-cognitive Functions and Recording Neuronal Activities in Rodents
Authors: Weimin Zheng, Edgar A. Ycu.
Institutions: The Neurosciences Institute, San Diego, CA.
We have developed a fully automated system for operant behavior testing and neuronal activity recording with which multiple cognitive brain functions can be investigated in a single task sequence. The unique feature of this system is a custom-made, acoustically transparent chamber that eliminates many of the issues associated with auditory cue control in most commercially available chambers. The ease with which operant devices can be added or replaced makes this system quite versatile, allowing for the implementation of a variety of auditory, visual, and olfactory behavioral tasks. Automation of the system allows fine temporal control (10 ms resolution) and precise time-stamping of each event in a predesigned behavioral sequence. When combined with a multi-channel electrophysiology recording system, multiple cognitive brain functions, such as motivation, attention, decision-making, patience, and reward processing, can be examined sequentially or independently.
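The event time-stamping idea can be sketched in a few lines. The class and event labels below are hypothetical and are not the system's actual software interface; they only illustrate logging each trial event at 10 ms resolution.

```python
import time

# Minimal sketch of time-stamped event logging for one behavioral trial.
# All names (TrialLogger, the event labels) are illustrative inventions.
class TrialLogger:
    def __init__(self):
        self.t0 = time.monotonic()      # monotonic clock: immune to wall-clock jumps
        self.events = []                # list of (timestamp_ms, label) pairs

    def log(self, label):
        # Round to the nearest 10 ms to mirror the system's temporal resolution.
        t_ms = round((time.monotonic() - self.t0) * 1000 / 10) * 10
        self.events.append((t_ms, label))

log = TrialLogger()
log.log("trial_start")
log.log("cue_tone_on")        # e.g. low vs. high pitch in a two-alternative choice
log.log("nose_poke_left")
log.log("reward_delivered")
```

Aligning such event timestamps with the clock of the multi-channel recording system is what lets neuronal activity be attributed to specific steps of the behavioral sequence.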
Neuroscience, Issue 63, auditory behavioral task, acoustic chamber, cognition test, multi-channel recording, electrophysiology, attention, motivation, decision, patience, rat, two-alternative choice pitch discrimination task, behavior
A Low Cost Setup for Behavioral Audiometry in Rodents
Authors: Konstantin Tziridis, Sönke Ahlf, Holger Schulze.
Institutions: University of Erlangen-Nuremberg.
In auditory animal research it is crucial to have precise information about the basic hearing parameters of the animal subjects involved in the experiments. Such parameters may be physiological response characteristics of the auditory pathway, obtained e.g. via brainstem evoked response audiometry (BERA). But these methods allow only indirect and uncertain extrapolations about the auditory percept that corresponds to these physiological parameters. To assess the perceptual level of hearing, behavioral methods have to be used. A potential problem with the use of behavioral methods for the description of perception in animal models is the fact that most of these methods involve some kind of learning paradigm before the subjects can be tested, e.g. animals may have to learn to press a lever in response to a sound. As these learning paradigms change perception itself [1,2], they will consequently influence any result about perception obtained with these methods and therefore have to be interpreted with caution. Exceptions are paradigms that make use of reflex responses, because no learning has to take place prior to perceptual testing. One such reflex response is the acoustic startle response (ASR), which can be elicited highly reproducibly with unexpected loud sounds in naïve animals. The ASR can in turn be influenced by a preceding sound, depending on the perceptibility of that stimulus: sounds well above hearing threshold strongly inhibit the amplitude of the ASR, whereas sounds close to threshold inhibit it only slightly. This phenomenon is called pre-pulse inhibition (PPI) [3,4], and the amount of PPI of the ASR depends gradually on the perceptibility of the pre-pulse. PPI of the ASR is therefore well suited for determining behavioral audiograms in naïve, untrained animals, for characterizing hearing impairments, and even for detecting possible subjective tinnitus percepts in these animals.
In this paper we demonstrate the use of this method in a rodent model (cf. also ref. 5), the Mongolian gerbil (Meriones unguiculatus), a well-known model species for startle response research within the normal human hearing range (e.g. [6]).
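The relationship between pre-pulse level and PPI described above can be made concrete with a small calculation. The amplitudes, pre-pulse levels, and the 20% threshold criterion below are invented for illustration only.

```python
import numpy as np

# Toy ASR amplitudes (arbitrary units) for startle-alone trials and for
# trials preceded by pre-pulses at increasing sound levels (dB SPL).
asr_alone = np.array([102., 98., 105., 95., 100.])
prepulse_levels = [10, 20, 30, 40, 50]
asr_prepulse = {10: [99., 101., 97.],   # near/below threshold: little inhibition
                20: [90., 88., 92.],
                30: [70., 68., 72.],
                40: [45., 44., 46.],
                50: [30., 29., 31.]}    # well above threshold: strong inhibition

# PPI(%) = (1 - mean ASR with pre-pulse / mean ASR alone) * 100
baseline = np.mean(asr_alone)
ppi = {lvl: (1 - np.mean(a) / baseline) * 100 for lvl, a in asr_prepulse.items()}

# Behavioral threshold estimate: lowest pre-pulse level whose PPI exceeds
# a criterion (20% here; the criterion value is an illustrative choice).
threshold = min(l for l in prepulse_levels if ppi[l] > 20)
```

Repeating this calculation for pre-pulse tones at different frequencies yields the behavioral audiogram, with no prior training of the animal required.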
Neuroscience, Issue 68, Physiology, Anatomy, Medicine, otolaryngology, behavior, auditory startle response, pre-pulse inhibition, audiogram, tinnitus, hearing loss
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
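JoVE does not describe its matching algorithm in detail here; one generic way such abstract-to-video matching can work is term-weighted text similarity. The sketch below ranks toy video descriptions against an abstract by TF-IDF cosine similarity; all titles, keyword strings, and the query are invented for illustration.

```python
import math
from collections import Counter

# Toy corpus: a few video descriptions and one PubMed-style query.
videos = {
    "MEG/EEG inverse imaging": "meg eeg cortical source estimates auditory attention",
    "Behavioral audiometry": "startle response prepulse inhibition audiogram rodent",
    "Cross-modal MVPA": "fmri multivariate pattern analysis sensory cortices",
}
abstract = "preattentive auditory mismatch negativity eeg attention"

docs = {name: text.split() for name, text in videos.items()}
docs["_query"] = abstract.split()
n_docs = len(docs)

# Inverse document frequency: rare terms weigh more than common ones.
idf = {t: math.log(n_docs / sum(t in d for d in docs.values()))
       for d in docs.values() for t in d}

def tfidf(tokens):
    tf = Counter(tokens)
    return {t: tf[t] * idf[t] for t in tf}

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0.0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

q = tfidf(docs["_query"])
ranking = sorted(((cosine(q, tfidf(d)), name)
                  for name, d in docs.items() if name != "_query"),
                 reverse=True)
```

When no video shares meaningful vocabulary with an abstract, every similarity score is low, which is why some displayed matches are only loosely related.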

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library contains no content relevant to the topic of a given abstract. In these cases, our algorithms still display the most closely related videos available, which can sometimes result in matches with only a slight relation to the abstract.