Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
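As a hypothetical illustration of the MEP measurement described above, the sketch below computes a peak-to-peak MEP amplitude from an EMG trace time-locked to the TMS pulse. The search window, sampling rate, and synthetic trace are assumptions for illustration, not values from the protocol.

```python
import numpy as np

def mep_peak_to_peak(emg, fs, window=(0.015, 0.045)):
    """Peak-to-peak MEP amplitude (mV) within a post-pulse search window.

    emg:    1-D EMG trace (mV) aligned so the TMS pulse occurs at t = 0.
    fs:     sampling rate in Hz.
    window: (start, stop) of the MEP search window in seconds after the
            pulse; 15-45 ms is an assumed, adjustable range.
    """
    start, stop = int(window[0] * fs), int(window[1] * fs)
    segment = emg[start:stop]
    return float(segment.max() - segment.min())

# Synthetic EMG: a brief oscillatory deflection around 20 ms post-pulse.
fs = 5000
t = np.arange(0, 0.1, 1 / fs)
emg = 0.05 * np.sin(2 * np.pi * 80 * t) * np.exp(-((t - 0.02) / 0.005) ** 2)
amplitude = mep_peak_to_peak(emg, fs)
```

In practice such amplitudes would be averaged over many pulses, since single-trial MEPs are highly variable.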
The Crossmodal Congruency Task as a Means to Obtain an Objective Behavioral Measure in the Rubber Hand Illusion Paradigm
Institutions: Macquarie University.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time then this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm.
The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e., same finger) on some trials and incongruent (i.e., different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand and by the synchrony of viewed and felt touch, both of which are crucial factors for the RHI.
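For concreteness, the CCE can be computed as the difference between mean performance (here, reaction time) on incongruent and congruent trials. The numbers below are invented for illustration.

```python
import statistics

def crossmodal_congruency_effect(trials):
    """CCE = mean RT on incongruent trials minus mean RT on congruent trials.

    trials: iterable of (rt_ms, congruent) pairs, where congruent is True
    when target and distractor appeared on the same finger.
    """
    congruent = [rt for rt, c in trials if c]
    incongruent = [rt for rt, c in trials if not c]
    return statistics.mean(incongruent) - statistics.mean(congruent)

# Hypothetical reaction times (ms): responses are slower when the visual
# distractor appears on a different finger than the tactile target.
trials = [(420, True), (435, True), (410, True),
          (480, False), (495, False), (470, False)]
cce = crossmodal_congruency_effect(trials)  # ≈ 60 ms
```

A larger CCE while viewing the rubber hand with synchronous stroking would index stronger multisensory binding, consistent with the RHI.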
The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows in particular for the investigation of multisensory processes which are critical for modulations of body representations as in the RHI.
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Psychology, Behavior and Behavior Mechanisms, Psychological Phenomena and Processes, Behavioral Sciences, rubber hand illusion, crossmodal congruency task, crossmodal congruency effect, multisensory processing, body ownership, peripersonal space, clinical techniques
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopy data obtained from a living animal's brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Empty Value, Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Corticospinal Excitability Modulation During Action Observation
Institutions: Università degli Studi di Padova.
This study used the transcranial magnetic stimulation/motor evoked potential (TMS/MEP) technique to pinpoint when the automatic tendency to mirror someone else's action becomes anticipatory simulation of a complementary act. TMS was delivered over the hand region of the left primary motor cortex to induce the highest level of MEP activity in the abductor digiti minimi (ADM; the muscle serving little-finger abduction) as well as the first dorsal interosseus (FDI; the muscle serving index-finger flexion/extension). A neuronavigation system was used to maintain the position of the TMS coil, and electromyographic (EMG) activity was recorded from the right ADM and FDI muscles. By producing original data on motor resonance, the combined TMS/MEP technique has taken research on the perception-action coupling mechanism a step further. Specifically, it has answered the questions of how and when observing another person's actions produces motor facilitation in an onlooker's corresponding muscles, and of how corticospinal excitability is modulated in social contexts.
Behavior, Issue 82, action observation, transcranial magnetic stimulation, motor evoked potentials, corticospinal excitability
Quantitative Assessment of Cortical Auditory-tactile Processing in Children with Disabilities
Institutions: Vanderbilt University.
Objective and easy measurement of sensory processing is extremely difficult in nonverbal or vulnerable pediatric patients. We developed a new methodology to quantitatively assess children's cortical processing of light touch, speech sounds and the multisensory processing of the 2 stimuli, without requiring active subject participation or causing children discomfort. To accomplish this we developed a dual channel, time and strength calibrated air puff stimulator that allows both tactile stimulation and sham control. We combined this with the use of event-related potential methodology to allow for high temporal resolution of signals from the primary and secondary somatosensory cortices as well as higher order processing. This methodology also allowed us to measure a multisensory response to auditory-tactile stimulation.
Behavior, Issue 83, somatosensory, event related potential, auditory-tactile, multisensory, cortical response, child
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides.
We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from a binary yes/no to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when using the procedure for the first time.
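The response measures mentioned above (binary response, latency, duration) can be aggregated per conditioning session with a few lines of code. This is a hypothetical sketch, not the authors' analysis; the field names and example values are invented.

```python
def summarize_per(trials):
    """Summarize PER conditioning trials.

    trials: list of dicts with keys 'extended' (bool, did the proboscis
    extend?), 'latency_s' and 'duration_s' (None when no extension occurred).
    """
    responders = [t for t in trials if t["extended"]]
    return {
        "response_rate": len(responders) / len(trials),
        "mean_latency_s": (sum(t["latency_s"] for t in responders) / len(responders)
                           if responders else None),
        "mean_duration_s": (sum(t["duration_s"] for t in responders) / len(responders)
                            if responders else None),
    }

# Invented example: three of four trials show proboscis extension.
trials = [
    {"extended": True,  "latency_s": 1.2,  "duration_s": 3.0},
    {"extended": False, "latency_s": None, "duration_s": None},
    {"extended": True,  "latency_s": 0.8,  "duration_s": 2.5},
    {"extended": True,  "latency_s": 1.0,  "duration_s": 2.0},
]
summary = summarize_per(trials)  # response_rate 0.75
```

Keeping latency and duration alongside the binary score allows graded effects of a treatment to be detected even when the response rate is unchanged.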
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Development of a Virtual Reality Assessment of Everyday Living Skills
Institutions: NeuroCog Trials, Inc., Duke-NUS Graduate Medical Center, Duke University Medical Center, Fox Evaluation and Consulting, PLLC, University of Miami Miller School of Medicine.
Cognitive impairments affect the majority of patients with schizophrenia and these impairments predict poor long term psychosocial outcomes. Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements. Measures of “functional capacity” index the extent to which individuals have the potential to perform skills required for real world functioning. Current data do not support the recommendation of any single instrument for measurement of functional capacity. The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT’s sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
Behavior, Issue 86, Virtual Reality, Cognitive Assessment, Functional Capacity, Computer Based Assessment, Schizophrenia, Neuropsychology, Aging, Dementia
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR-compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunction after stroke, which consists of bilateral stimulation of the primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
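The spatial-averaging idea behind the EOD timing measurement can be sketched as follows: averaging across electrodes attenuates movement-related distortions, after which pulse times are taken at threshold crossings. This is an illustrative sketch under assumed parameters (channel gains, threshold, synthetic pulses), not the authors' implementation.

```python
import numpy as np

def eod_times(electrodes, fs, threshold=0.5):
    """Estimate EOD pulse times (s) from multi-electrode recordings.

    electrodes: 2-D array (n_channels, n_samples). Spatial averaging across
    channels reduces distortions caused by the fish's body movements; pulse
    times are then taken at upward threshold crossings of the normalized
    average signal.
    """
    avg = electrodes.mean(axis=0)
    avg = avg / np.max(np.abs(avg))                      # normalize to [-1, 1]
    above = avg > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / fs

# Synthetic example: two brief pulses seen with different gains on 3 channels.
fs = 50000
t = np.arange(0, 0.01, 1 / fs)
pulse = (np.exp(-((t - 0.002) / 0.0001) ** 2)
         + np.exp(-((t - 0.007) / 0.0001) ** 2))
channels = np.vstack([pulse * g for g in (0.8, 1.0, 1.2)])
times = eod_times(channels, fs)
```

A real-time implementation would additionally need temporal filtering and artifact rejection, as the abstract notes.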
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
Two-photon Calcium Imaging in Mice Navigating a Virtual Reality Environment
Institutions: Friedrich Miescher Institute for Biomedical Research, Max Planck Institute of Neurobiology, ETH Zurich.
In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior [1-6]. Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems addressed in this experimental setup are: minimizing brain-motion-related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to perform pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.
Behavior, Issue 84, Two-photon imaging, Virtual Reality, mouse behavior, adeno-associated virus, genetically encoded calcium indicators
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Institutions: University of Illinois at Chicago, Rehabilitation Institute of Chicago.
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, adding unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be swapped for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper-class approach that can elicit nearly identical responses regardless of the robot used. The result can allow researchers across the globe to perform similar experiments using shared code, so that modularly "switching out" one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
Adaptation of a Haptic Robot in a 3T fMRI
Institutions: University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal [1], with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data [2], and nearly real-time analyses [3]. Haptic robots provide precise measurement and control of the position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported in the center, to the robot [4], allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field of the fMRI machine (Figure 1).
The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments [5,6], but it is inherently non-MR safe, introduces significant noise into the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal-to-noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, and to ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot. The effect of the handle on the robot's kinematics is minimal, since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite and well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space; when combined with virtual reality, it allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification [5,6].
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
Utilizing Transcranial Magnetic Stimulation to Study the Human Neuromuscular System
Institutions: Ohio University.
Transcranial magnetic stimulation (TMS) has been in use for more than 20 years [1], and has grown exponentially in popularity over the past decade. While the use of TMS has expanded to the study of many systems and processes during this time, the original application and perhaps one of the most common uses of TMS involves studying the physiology, plasticity and function of the human neuromuscular system. Single-pulse TMS applied to the motor cortex excites pyramidal neurons transsynaptically [2] (Figure 1) and results in a measurable electromyographic response that can be used to study and evaluate the integrity and excitability of the corticospinal tract in humans [3]. Additionally, recent advances in magnetic stimulation now allow for partitioning of cortical versus spinal excitability [4,5]. For example, paired-pulse TMS can be used to assess intracortical facilitatory and inhibitory properties by combining a conditioning stimulus and a test stimulus at different interstimulus intervals [3,4,6-8]. In this video article we will demonstrate the methodological and technical aspects of these techniques. Specifically, we will demonstrate single-pulse and paired-pulse TMS techniques as applied to the flexor carpi radialis (FCR) muscle as well as the erector spinae (ES) musculature. Our laboratory studies the FCR muscle because it is of interest to our research on the effects of wrist-hand cast immobilization on reduced muscle performance [6,9], and we study the ES muscles due to their clinical relevance to low back pain [8]. With this stated, we should note that TMS has been used to study many muscles of the hand, arm and legs, and we should reiterate that our demonstrations in the FCR and ES muscle groups are only selected examples of TMS being used to study the human neuromuscular system.
Medicine, Issue 59, neuroscience, muscle, electromyography, physiology, TMS, strength, motor control, sarcopenia, dynapenia, lumbar
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can perform path integration based exclusively on visual [2-3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic ones - seem to dominate [6-7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain.
Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8-9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed.
Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen.
Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone; human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating the angle one has moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
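The pointing task has a well-defined geometric ground truth: after a two-segment path, the correct homing direction follows from vector addition. The sketch below uses the segment lengths from the protocol; the function name and angle convention (signed angle relative to the final heading) are my own assumptions.

```python
import math

def homing_angle(l1, l2, turn_deg):
    """Signed angle (deg, relative to the final heading, in [-180, 180))
    that points back to the origin after walking l1, turning by turn_deg,
    then walking l2 in the plane of motion."""
    turn = math.radians(turn_deg)
    # Final position, taking the first segment along +x.
    x = l1 + l2 * math.cos(turn)
    y = l2 * math.sin(turn)
    # Direction from the final position back to the origin...
    back = math.degrees(math.atan2(-y, -x))
    # ...expressed relative to the final heading.
    rel = back - turn_deg
    return ((rel + 180.0) % 360.0) - 180.0

# Segment lengths from the protocol (0.4 m then 1 m) with a 90° turn:
angle = homing_angle(0.4, 1.0, 90.0)  # ≈ 158.2°
```

Comparing each observer's pointed angle against this ground truth yields the over/underestimation biases reported per plane.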
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
3D-Neuronavigation In Vivo Through a Patient's Brain During a Spontaneous Migraine Headache
Institutions: University of Michigan School of Dentistry, University of Michigan.
A growing body of research, generated primarily from MRI-based studies, shows that migraine appears to occur, and possibly endure, due to the alteration of specific neural processes in the central nervous system. However, information is lacking on the molecular impact of these changes, especially on the endogenous opioid system during migraine headaches, and neuronavigation through these changes has never been done. This study aimed to investigate, using a novel 3D immersive and interactive neuronavigation (3D-IIN) approach, endogenous µ-opioid transmission in the brain during a migraine headache attack in vivo. This is arguably one of the most central neuromechanisms associated with pain regulation, affecting multiple elements of the pain experience and analgesia. A 36-year-old female, who had been suffering from migraine for 10 years, was scanned in the typical headache (ictal) and nonheadache (interictal) migraine phases using Positron Emission Tomography (PET) with the selective radiotracer [11C]carfentanil, which allowed us to measure µ-opioid receptor availability in the brain (non-displaceable binding potential, µOR BPND). The short-lived radiotracer was produced by a cyclotron and chemical synthesis apparatus located on campus in close proximity to the imaging facility. Both PET scans, interictal and ictal, were scheduled during separate mid-late follicular phases of the patient's menstrual cycle. During the ictal PET session her spontaneous headache attack reached severe intensity levels, progressing to nausea and vomiting at the end of the scan session. There were reductions in µOR BPND in the pain-modulatory regions of the endogenous µ-opioid system during the ictal phase, including the cingulate cortex, nucleus accumbens (NAcc), thalamus (Thal), and periaqueductal gray matter (PAG), indicating that µORs were already occupied by endogenous opioids released in response to the ongoing pain. To our knowledge, this is the first time that changes in µOR BPND during a migraine headache attack have been neuronavigated using a novel 3D approach. This method allows for interactive research and educational exploration of a migraine attack in an actual patient's neuroimaging dataset.
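The quantity reported above, the non-displaceable binding potential, relates receptor availability in a target region to a reference region with no specific binding via BPND = DVR − 1, where DVR is the distribution volume ratio. The toy sketch below illustrates only this relation with invented time-activity curves; real [11C]carfentanil analyses use full kinetic modelling (e.g. graphical reference-tissue methods), not a simple late-frame ratio.

```python
import numpy as np

# Invented time-activity curves (not patient data): a receptor-rich
# target region and a reference region devoid of specific binding.
t = np.linspace(0, 90, 19)            # frame mid-times (min), illustrative
target = 12.0 * np.exp(-t / 60.0)     # e.g. thalamus (toy numbers)
reference = 6.0 * np.exp(-t / 60.0)   # reference region (toy numbers)

# At pseudo-equilibrium, DVR is approximated here by the ratio of
# mean activities over the late frames (a deliberate simplification).
late = t >= 40
dvr = target[late].mean() / reference[late].mean()
bp_nd = dvr - 1.0                     # non-displaceable binding potential
```

A lower BPND during the ictal scan, as reported above, would then reflect fewer receptors available for the radiotracer because endogenous opioids already occupy them.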
Medicine, Issue 88, μ-opioid, opiate, migraine, headache, pain, Positron Emission Tomography, molecular neuroimaging, 3D, neuronavigation
Electrophysiological Measurements and Analysis of Nociception in Human Infants
Institutions: University College London, Great Ormond Street Hospital, University College Hospital, University of Oxford.
Pain is an unpleasant sensory and emotional experience. Since infants cannot verbally report their experiences, current methods of pain assessment are based on behavioural and physiological body reactions, such as crying, body movements or changes in facial expression. While these measures demonstrate that infants mount a response following noxious stimulation, they are limited: they are based on the activation of subcortical somatic and autonomic motor pathways that may not be reliably linked to central sensory processing in the brain. Knowledge of how the central nervous system responds to noxious events could provide insight into how nociceptive information and pain are processed in newborns.
The heel lancing procedure used to extract blood from hospitalised infants offers a unique opportunity to study pain in infancy. In this video we describe how electroencephalography (EEG) and electromyography (EMG) time-locked to this procedure can be used to investigate nociceptive activity in the brain and spinal cord.
This integrative approach to the measurement of infant pain has the potential to pave the way for an effective and sensitive clinical measurement tool.
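The core of the time-locked analysis described above is cutting the continuous recording into epochs around each event marker and averaging across trials so the stimulus-locked response emerges from background activity. The sketch below illustrates this on synthetic single-channel data; the sampling rate, window lengths, and data are assumptions for the example, not the article's recording parameters.

```python
import numpy as np

FS = 500                                    # sampling rate (Hz), assumed
PRE, POST = 0.1, 0.5                        # epoch window around event (s)

rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 60)          # 60 s of synthetic EEG
events = np.array([5.0, 15.0, 25.0, 35.0])  # event-marker times (s)

def epoch(signal, event_times, fs, pre, post):
    """Cut baseline-corrected epochs time-locked to each event."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for ev in event_times:
        i = int(ev * fs)
        seg = signal[i - n_pre : i + n_post]
        seg = seg - seg[:n_pre].mean()      # subtract pre-event baseline
        epochs.append(seg)
    return np.stack(epochs)

epochs = epoch(eeg, events, FS, PRE, POST)
evoked = epochs.mean(axis=0)                # average event-related response
```

The same epoching logic applies to the EMG channel, yielding a time-locked spinal (reflex) measure alongside the cortical one.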
Neuroscience, Issue 58, pain, infant, electrophysiology, human development
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data.
In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, the cues that form the environmental configuration include not only visual elements, but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat, 2D computer screen and do not replicate the complexity of real-world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and the associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world while at the same time allowing experimenters to constrain fear conditioning and extinction parameters, yielding empirical data that can suggest better treatment options and/or test mechanistic hypotheses.
In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct 3D virtual reality contexts in which participants underwent fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the conditioned stimulus (CS) in order to further evoke orienting responses and a feeling of "presence" in subjects2. The skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction.
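A skin conductance response is conventionally scored as the baseline-to-peak rise within a short window after stimulus onset. The sketch below illustrates this scoring on a synthetic trace; the sampling rate, window bounds, and data are assumptions for the example, not the study's actual criteria.

```python
import numpy as np

FS = 10                                  # skin conductance sampled at 10 Hz (assumed)
t = np.arange(0, 8, 1 / FS)              # 8 s trial, CS onset at t = 0
# Synthetic trace (µS): tonic level plus one phasic response near 2.5 s.
scl = 5.0 + 0.6 * np.exp(-((t - 2.5) ** 2) / 0.8)

def scr_amplitude(trace, fs, win=(1.0, 4.0)):
    """Baseline-to-peak SCR amplitude (µS) in a post-onset window."""
    baseline = trace[: int(win[0] * fs)].mean()   # pre-window mean
    window = trace[int(win[0] * fs) : int(win[1] * fs)]
    return max(window.max() - baseline, 0.0)      # no negative SCRs

amp = scr_amplitude(scl, FS)
```

Comparing such amplitudes on CS-present versus CS-absent trials, across acquisition, retention, and extinction sessions, gives the dependent measure referred to above.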
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
The use of Biofeedback in Clinical Virtual Reality: The INTREPID Project
Institutions: Istituto Auxologico Italiano, Università Cattolica del Sacro Cuore.
Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point to the need for new, efficient strategies to treat it. Together with cognitive-behavioral treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation of being hard to learn. The INTREPID project aims to implement a new instrument to treat anxiety-related disorders and to test its clinical efficacy in reducing anxiety-related symptoms. The innovation of this approach is the combination of virtual reality and biofeedback, such that the former is directly modified by the output of the latter. In this way, the patient is made aware of his or her reactions through the real-time modification of certain features of the VR environment. Using mental exercises, the patient learns to control these physiological parameters, and the feedback provided by the virtual environment lets him or her gauge this success. The supplemental use of portable devices, such as PDAs or smartphones, allows the patient to perform the same exercises experienced in the therapist's office at home, individually and autonomously. The goal is to anchor the learned protocol in a real-life context, thus enhancing the patients' ability to deal with their symptoms. The expected result is better and faster learning of relaxation techniques, and thus increased effectiveness of the treatment compared with traditional clinical protocols.
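The biofeedback-to-VR coupling described above amounts to a mapping from a physiological arousal signal onto a visible feature of the virtual scene, so the patient can watch relaxation succeed in real time. The sketch below is a hypothetical illustration of such a mapping: the sensor, the "campfire turbulence" parameter, and the heart-rate bounds are all invented for the example and are not the INTREPID implementation.

```python
def arousal_to_flame(heart_rate, hr_rest=60.0, hr_max=120.0):
    """Map heart rate (bpm) onto a 0-1 flame-turbulence parameter.

    hr_rest and hr_max are assumed per-patient calibration bounds.
    """
    level = (heart_rate - hr_rest) / (hr_max - hr_rest)
    return min(max(level, 0.0), 1.0)      # clamp to the valid range

# One tick of the (hypothetical) real-time loop:
# hr = sensor.read_heart_rate()                       # wearable device
# vr_scene.set("campfire.turbulence", arousal_to_flame(hr))
```

As the patient relaxes and heart rate falls toward the resting bound, the mapped parameter falls toward zero and the virtual scene visibly calms, closing the feedback loop.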
Neuroscience, Issue 33, virtual reality, biofeedback, generalized anxiety disorder, Intrepid, cybertherapy, cyberpsychology