JoVE Visualize
Pubmed Article
Extending body space in immersive virtual reality: a very long arm illusion.
PLoS ONE
Recent studies have shown that a fake body part can be incorporated into the human body representation through synchronous multisensory stimulation of the fake and corresponding real body part - the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display, a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, with no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length; the illusion did, however, decline with the length of the virtual arm. In the C2-C4 conditions, although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms for ownership and drift. Overall, these findings extend and enrich previous results showing that multisensory and sensorimotor information can reconstruct our perception of body shape, size and symmetry even when this is not consistent with normal body proportions.
ABSTRACT
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while their own hidden hand is touched. If the viewed and felt touches are applied at the same time, this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm. The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. on the same finger) on some trials and incongruent (i.e. on different fingers) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand and by the synchrony of viewed and felt touch, both of which are crucial factors for the RHI. The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure that can be repeated many times and that can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows, in particular, investigation of the multisensory processes that are critical for modulations of body representations as in the RHI.
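The CCE itself is just the incongruent-minus-congruent performance difference. A minimal sketch of that computation (the trial data and field names below are hypothetical, not from the article):

    # Minimal sketch (not the authors' code): computing a crossmodal
    # congruency effect (CCE) from per-trial reaction times.
    import statistics

    trials = [
        # (condition, reaction_time_ms) -- hypothetical data
        ("congruent", 520), ("incongruent", 585),
        ("congruent", 540), ("incongruent", 610),
    ]

    def mean_rt(condition):
        # Average reaction time over all trials of one condition.
        return statistics.mean(rt for cond, rt in trials if cond == condition)

    # CCE = incongruent minus congruent performance; larger values index
    # stronger visuo-tactile interaction.
    cce = mean_rt("incongruent") - mean_rt("congruent")
    print(f"Crossmodal congruency effect: {cce:.1f} ms")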
20 Related JoVE Articles!
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Authors: Michael Barnett-Cowan, Tobias Meilinger, Manuel Vidal, Harald Teufel, Heinrich H. Bülthoff.
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can perform path integration based exclusively on visual [2,3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate [6,7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8,9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimation and overestimation of the angle moved through in the horizontal and vertical planes, respectively, suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
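For readers wanting the geometry behind the pointing task, the sketch below (an illustration, not the authors' analysis code) computes the correct homing direction after two movement segments separated by a turn, using the segment lengths from the abstract:

    # Hypothetical sketch of the pointing-task geometry: given two segments
    # and the turn angle between them, compute the direction back to the
    # origin relative to the observer's final heading.
    import math

    def homing_angle(d1, d2, turn_deg):
        heading = math.radians(90.0)           # start facing +y
        x = d1 * math.cos(heading)             # first segment
        y = d1 * math.sin(heading)
        heading -= math.radians(turn_deg)      # e.g. a rightward turn
        x += d2 * math.cos(heading)            # second segment
        y += d2 * math.sin(heading)
        back = math.atan2(-y, -x)              # endpoint-to-origin direction
        rel = math.degrees(back - heading)     # relative to current heading
        return (rel + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

    print(homing_angle(0.4, 1.0, 90.0))        # segment lengths from the abstract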
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Authors: James J. Jun, André Longtin, Leonard Maler.
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions are best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharges (EODs). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely controlled the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method for freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long-term observation of the spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals to exploratory or social behaviors.
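The abstract does not include the signal-processing code; the following hedged sketch illustrates one simple version of the idea of spatial averaging followed by pulse-time detection (electrode count, threshold, and sampling rate are assumptions, not the authors' parameters):

    # Hedged sketch: estimate EOD pulse times from multi-electrode recordings
    # by averaging across electrodes, then finding threshold crossings.
    import numpy as np

    def eod_pulse_times(signals, fs, threshold):
        # signals: (n_electrodes, n_samples) tank-electrode voltages.
        # Averaging across electrodes reduces position-dependent distortion.
        pooled = np.abs(signals).mean(axis=0)
        above = pooled > threshold
        onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
        return onsets / fs                                    # seconds

    rng = np.random.default_rng(0)
    fake = rng.normal(0.0, 0.01, size=(4, 10000))
    fake[:, 2000:2005] += 1.0                                 # one fake pulse
    print(eod_pulse_times(fake, fs=20000.0, threshold=0.5))   # ~[0.1]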
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior in order to sort correct trials, where the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if trials on which subjects performed incorrectly were included in the same analysis as correct trials, we would introduce activity not related solely to correct performance. In many cases these error trials can themselves be correlated with brain activity. We describe two complementary tasks that are used in our lab to examine the brain during suppression of an automatic response: the Stroop [1] and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed across the affective faces or the facial 'expression' of the face stimuli [1,2]. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial-expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task [3-6], an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Again we measure behavior by recording the eye movements of participants, which allows the behavioral responses to be sorted into correct and error trials [7] that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials that are indicative of different cognitive processes and to pinpoint the different neural networks involved.
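A trivial sketch of the trial-sorting step described above (the trial records and field names are invented for illustration):

    # Illustrative only: sort behavioral trials into correct and error lists
    # so each can serve as a separate event regressor in an fMRI analysis.
    trials = [
        {"onset_s": 12.0, "task": "antisaccade", "correct": True},
        {"onset_s": 24.0, "task": "antisaccade", "correct": False},
        {"onset_s": 36.0, "task": "stroop", "correct": True},
    ]

    correct_onsets = [t["onset_s"] for t in trials if t["correct"]]
    error_onsets = [t["onset_s"] for t in trials if not t["correct"]]

    # Each onset list would then be convolved with a hemodynamic response
    # function and entered as a separate regressor in the GLM.
    print(correct_onsets, error_onsets)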
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
Using an EEG-Based Brain-Computer Interface for Virtual Cursor Movement with BCI2000
Authors: J. Adam Wilson, Gerwin Schalk, Léo M. Walton, Justin C. Williams.
Institutions: University of Wisconsin-Madison, New York State Dept. of Health.
A brain-computer interface (BCI) functions by translating a neural signal, such as the electroencephalogram (EEG), into a signal that can be used to control a computer or other device. The amplitude of the EEG signal in selected frequency bins is measured and translated into a device command, in this case the horizontal and vertical velocity of a computer cursor. First, the EEG electrodes are applied to the user's scalp using a cap to record brain activity. Next, a calibration procedure is used to find the EEG electrodes and features that the user will learn to voluntarily modulate to use the BCI. In humans, the power in the mu (8-12 Hz) and beta (18-28 Hz) frequency bands decreases in amplitude during a real or imagined movement. These changes can be detected in the EEG in real time and used to control a BCI ([1],[2]). Therefore, during a screening test, the user is asked to make several different imagined movements with their hands and feet to determine the unique EEG features that change with each imagined movement. The results of this calibration show the best channels to use, which are configured so that amplitude changes in the mu and beta frequency bands move the cursor either horizontally or vertically. In this experiment, the general-purpose BCI system BCI2000 is used to control signal acquisition, signal processing, and feedback to the user [3].
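This is not BCI2000 itself, but a minimal sketch of the translation step it performs: estimating mu-band power from a short EEG window and mapping it linearly to a cursor velocity (the channel choice, gain, and baseline stand in for the calibration results):

    # Minimal sketch, assuming a single pre-selected channel and a baseline
    # mu power obtained during calibration.
    import numpy as np

    def band_power(x, fs, lo, hi):
        # Power in [lo, hi] Hz from one windowed channel, via the FFT.
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        return psd[(freqs >= lo) & (freqs <= hi)].sum()

    def cursor_velocity(window, fs, baseline_mu, gain=1e-3):
        # Imagined movement desynchronizes the mu rhythm (power drops below
        # the resting baseline), which here drives the cursor.
        mu = band_power(window, fs, 8.0, 12.0)
        return gain * (baseline_mu - mu)

    fs = 256.0
    rng = np.random.default_rng(1)
    rest = rng.normal(size=int(fs))          # 1 s of fake resting EEG
    imag = 0.5 * rng.normal(size=int(fs))    # fake desynchronized window
    base = band_power(rest, fs, 8.0, 12.0)
    print(cursor_velocity(imag, fs, base))   # positive -> cursor moves one way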
Neuroscience, Issue 29, BCI, EEG, brain-computer interface, BCI2000
Stimulating the Lip Motor Cortex with Transcranial Magnetic Stimulation
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Institutions: University of Oxford.
Transcranial magnetic stimulation (TMS) has proven to be a useful tool for investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases while listening to speech as well as while viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency, sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the lip motor representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
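MEPs are commonly quantified as the peak-to-peak EMG amplitude in a fixed post-pulse window; a hedged sketch (window bounds, sampling rate, and units are assumptions, not the authors' settings):

    # Hedged example: peak-to-peak MEP amplitude from lip EMG after a pulse.
    import numpy as np

    def mep_amplitude(emg, fs, pulse_idx, start_ms=10.0, end_ms=40.0):
        # MEPs in facial muscles typically appear within tens of ms of the
        # pulse; search a fixed post-pulse window and take max minus min.
        i0 = pulse_idx + int(fs * start_ms / 1000.0)
        i1 = pulse_idx + int(fs * end_ms / 1000.0)
        segment = emg[i0:i1]
        return segment.max() - segment.min()

    fs = 5000.0
    emg = np.zeros(int(fs))          # 1 s of fake EMG
    emg[600:610] = [0, 0.2, 0.5, 0.9, 0.4, -0.3, -0.7, -0.2, 0.1, 0]
    print(mep_amplitude(emg, fs, pulse_idx=500))   # 1.6 (arbitrary units)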
Behavior, Issue 88, electromyography, motor cortex, motor evoked potential, motor excitability, speech, repetitive TMS, rTMS, virtual lesion, transcranial magnetic stimulation
Automated Interactive Video Playback for Studies of Animal Communication
Authors: Trisha Butkowski, Wei Yan, Aaron M. Gray, Rongfeng Cui, Machteld N. Verzijden, Gil G. Rosenthal.
Institutions: Texas A&M University (TAMU).
Video playback is a widely-used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.
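The closed-loop idea can be sketched in a few lines (a toy stand-in, not the authors' software; get_female_x() is a hypothetical placeholder for the video-tracking input):

    # Simplified sketch of the closed loop: the animated male's horizontal
    # position is updated each frame toward the female's tracked position.
    import random

    def get_female_x():
        # Stand-in for a real-time tracker reporting the female's x (cm).
        return random.uniform(0.0, 50.0)

    male_x = 25.0
    gain = 0.2                        # fraction of the gap closed per frame

    for frame in range(5):            # per-frame update loop
        female_x = get_female_x()
        male_x += gain * (female_x - male_x)   # smooth pursuit of the female
        print(f"frame {frame}: male at {male_x:.1f} cm")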
Neuroscience, Issue 48, Computer animation, visual communication, mate choice, Xiphophorus birchmanni, tracking
Development of a Virtual Reality Assessment of Everyday Living Skills
Authors: Stacy A. Ruse, Vicki G. Davis, Alexandra S. Atkins, K. Ranga R. Krishnan, Kolleen H. Fox, Philip D. Harvey, Richard S.E. Keefe.
Institutions: NeuroCog Trials, Inc., Duke-NUS Graduate Medical Center, Duke University Medical Center, Fox Evaluation and Consulting, PLLC, University of Miami Miller School of Medicine.
Cognitive impairments affect the majority of patients with schizophrenia and these impairments predict poor long term psychosocial outcomes.  Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements.  Measures of “functional capacity” index the extent to which individuals have the potential to perform skills required for real world functioning.  Current data do not support the recommendation of any single instrument for measurement of functional capacity.  The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT’s sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
Behavior, Issue 86, Virtual Reality, Cognitive Assessment, Functional Capacity, Computer Based Assessment, Schizophrenia, Neuropsychology, Aging, Dementia
3D-Neuronavigation In Vivo Through a Patient's Brain During a Spontaneous Migraine Headache
Authors: Alexandre F. DaSilva, Thiago D. Nascimento, Tiffany Love, Marcos F. DosSantos, Ilkka K. Martikainen, Chelsea M. Cummiford, Misty DeBoer, Sarah R. Lucas, MaryCatherine A. Bender, Robert A. Koeppe, Theodore Hall, Sean Petty, Eric Maslowski, Yolanda R. Smith, Jon-Kar Zubieta.
Institutions: University of Michigan School of Dentistry, University of Michigan.
A growing body of research, generated primarily from MRI-based studies, shows that migraine appears to occur, and possibly endure, due to the alteration of specific neural processes in the central nervous system. However, information is lacking on the molecular impact of these changes, especially on the endogenous opioid system during migraine headaches, and neuronavigation through these changes has never been done. This study aimed to investigate, using a novel 3D immersive and interactive neuronavigation (3D-IIN) approach, endogenous µ-opioid transmission in the brain during a migraine headache attack in vivo. This is arguably one of the most central neuromechanisms associated with pain regulation, affecting multiple elements of the pain experience and analgesia. A 36-year-old female who had suffered from migraine for 10 years was scanned in the typical headache (ictal) and nonheadache (interictal) migraine phases using Positron Emission Tomography (PET) with the selective radiotracer [11C]carfentanil, which allowed us to measure µ-opioid receptor availability in the brain (non-displaceable binding potential, µOR BPND). The short-lived radiotracer was produced by an on-campus cyclotron and chemical synthesis apparatus located in close proximity to the imaging facility. Both PET scans, interictal and ictal, were scheduled during separate mid-late follicular phases of the patient's menstrual cycle. During the ictal PET session her spontaneous headache attack reached severe intensity levels, progressing to nausea and vomiting at the end of the scan session. There were reductions in µOR BPND in the pain-modulatory regions of the endogenous µ-opioid system during the ictal phase, including the cingulate cortex, nucleus accumbens (NAcc), thalamus (Thal), and periaqueductal gray matter (PAG), indicating that µORs were already occupied by endogenous opioids released in response to the ongoing pain. To our knowledge, this is the first time that changes in µOR BPND during a migraine headache attack have been neuronavigated using a novel 3D approach. This method allows for interactive research and educational exploration of a migraine attack in an actual patient's neuroimaging dataset.
Medicine, Issue 88, μ-opioid, opiate, migraine, headache, pain, Positron Emission Tomography, molecular neuroimaging, 3D, neuronavigation
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR-compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We describe the impact of a protocol that has shown great promise for the treatment of motor dysfunction after stroke, which consists of bilateral stimulation of the primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Authors: Mikhail Kislin, Ekaterina Mugantseva, Dmitry Molotkov, Natalia Kulesskaya, Stanislav Khirug, Ilya Kirilkin, Evgeny Pryazhnikov, Julia Kolikova, Dmytro Toptunov, Mikhail Yuryev, Rashid Giniatullin, Vootele Voikar, Claudio Rivera, Heikki Rauvala, Leonard Khiroug.
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopy data obtained from a living animal's brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that allow stable recordings from non-anesthetized, behaving mice are expected to advance the fields of cellular and cognitive neuroscience. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video article describes the use of the awake-animal head-fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Authors: Jillian Nguyen, Thomas V. Papathomas, Jay H. Ravaliya, Elizabeth B. Torres.
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
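For readers unfamiliar with oscillation techniques, the generic rigid-pendulum relations they rest on can be applied as follows (the numbers are hypothetical and this is the standard physics, not the authors' exact protocol):

    # Worked sketch: moment of inertia from a measured oscillation period.
    import math

    m = 1.8        # prosthesis mass (kg), hypothetical
    d = 0.20       # pivot-to-center-of-mass distance, e.g. from a reaction
                   # board measurement (m), hypothetical
    T = 1.05       # measured small-amplitude oscillation period (s)
    g = 9.81       # gravitational acceleration (m/s^2)

    # Rigid pendulum: T = 2*pi*sqrt(I_pivot / (m*g*d))  =>  solve for I_pivot.
    I_pivot = m * g * d * T**2 / (4.0 * math.pi**2)
    # Parallel-axis theorem shifts the result to the center of mass.
    I_com = I_pivot - m * d**2

    print(f"I about pivot: {I_pivot:.4f} kg m^2, about COM: {I_com:.4f} kg m^2")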
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago, Rehabilitation Institute of Chicago.
Recent research testing interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper-class approach that can elicit nearly identical responses regardless of the robot used. The result allows researchers across the globe to perform similar experiments using shared code, so that modular "switching out" of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot within the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
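To make the wrapper-class idea concrete, here is a conceptual sketch in Python (H3DAPI itself is a C++ library; all class and method names below are illustrative, not the actual H3DAPI interface):

    # One interface, multiple robot back ends: experiment code never changes.
    from abc import ABC, abstractmethod

    class HapticRobot(ABC):
        # Common interface every robot wrapper must implement.
        @abstractmethod
        def set_force(self, fx, fy, fz): ...
        @abstractmethod
        def get_position(self): ...

    class SimulatedRobot(HapticRobot):
        # Stand-in back end; a real wrapper would call the vendor's API here.
        def __init__(self):
            self._pos = (0.0, 0.0, 0.0)
        def set_force(self, fx, fy, fz):
            print(f"force command: ({fx}, {fy}, {fz}) N")
        def get_position(self):
            return self._pos

    def run_trial(robot: HapticRobot):
        # Depends only on the interface, so "switching out" one robot for
        # another requires no changes here.
        x, y, z = robot.get_position()
        robot.set_force(-0.5 * x, -0.5 * y, -0.5 * z)   # simple spring field

    run_trial(SimulatedRobot())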
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jäncke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). Studies of the affective valence of subjective responses to variously realistic non-human characters, however, have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Using the protocol presented in the video as an example, the article that accompanies the video discusses methodological issues surrounding the protocol and the use of stimuli drawn from morph continua to represent the DHL in "uncanny" research. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
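A minimal sketch of the pairing step (the letters, colors, and preference rule below are illustrative assumptions, not the study's materials):

    # Assign each participant a unique set of four letter-color pairs based
    # on their pre-rated preferences for colored letters.
    letters = ["e", "t", "a", "n"]            # four high-frequency letters
    colors = ["red", "green", "blue", "orange"]

    def assign_pairs(preference):
        # preference[letter] is a color list ranked best-first for that
        # participant; each color is used at most once.
        taken, pairs = set(), {}
        for letter in letters:
            choice = next(c for c in preference[letter] if c not in taken)
            taken.add(choice)
            pairs[letter] = choice
        return pairs

    prefs = {l: colors for l in letters}       # flat (tied) preferences
    print(assign_pairs(prefs))                 # e->red, t->green, a->blue, n->orange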
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Two-photon Calcium Imaging in Mice Navigating a Virtual Reality Environment
Authors: Marcus Leinweber, Pawel Zmarz, Peter Buchmann, Paul Argast, Mark Hübener, Tobias Bonhoeffer, Georg B. Keller.
Institutions: Friedrich Miescher Institute for Biomedical Research, Max Planck Institute of Neurobiology, ETH Zurich.
In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior [1-6]. Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems that arise in this experimental setup, which we address here, are: minimizing brain-motion-related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to perform pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.
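The authors provide their own sample software; purely as an illustration of what pupil tracking involves, here is a minimal dark-pupil sketch (the threshold and image format are assumptions):

    # Threshold the grayscale eye image, then take the centroid of the
    # dark pixels as the pupil center.
    import numpy as np

    def pupil_center(frame, threshold=50):
        # frame: 2D grayscale eye image; the pupil is the darkest region.
        mask = frame < threshold
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None                      # e.g. blink: no dark pixels
        return float(xs.mean()), float(ys.mean())

    img = np.full((120, 160), 200, dtype=np.uint8)
    img[40:60, 70:90] = 10                   # fake pupil blob
    print(pupil_center(img))                 # ~ (79.5, 49.5)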
Behavior, Issue 84, Two-photon imaging, Virtual Reality, mouse behavior, adeno-associated virus, genetically encoded calcium indicators
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Authors: Nicole C. Huff, David J. Zielinski, Matthew E. Fecteau, Rachael Brady, Kevin S. LaBar.
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved in conducting fear conditioning in a fully immersive, 6-sided VR environment and present fear conditioning data. In the real world, traumatic events occur in complex environments made up of many cues that engage all of our sensory modalities. Cues that form the environmental configuration include not only visual elements but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. Standard laboratory tests of fear conditioning in humans, by contrast, are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real-world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. The experimenters are thus left without information about the duration of exposure, the true nature of the stimulus, and the associated background cues in the environment [1]. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world while allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or test mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct 3D virtual reality contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the conditioned stimulus (CS) in order to further evoke orienting responses and a feeling of "presence" in subjects [2]. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction.
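Skin conductance responses are often scored as a baseline-to-peak rise in a window after stimulus onset; a hedged sketch (the window choices are conventional assumptions, not the authors' exact scoring):

    # Score an SCR as the rise above pre-stimulus baseline within a
    # post-onset window.
    import numpy as np

    def scr_amplitude(scl, fs, onset_s, base_s=1.0, win_s=(1.0, 5.0)):
        # scl: skin conductance trace in microsiemens; typical SCRs peak
        # roughly 1-5 s after stimulus onset.
        i_on = int(onset_s * fs)
        baseline = scl[i_on - int(base_s * fs):i_on].mean()
        seg = scl[i_on + int(win_s[0] * fs):i_on + int(win_s[1] * fs)]
        return max(seg.max() - baseline, 0.0)   # negative deflections -> 0

    fs = 100.0
    t = np.arange(0, 10, 1.0 / fs)
    scl = 5.0 + 0.3 * np.exp(-((t - 4.0) ** 2) / 0.5)   # fake response at 4 s
    print(scr_amplitude(scl, fs, onset_s=2.5))           # ~0.3 uS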
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
The use of Biofeedback in Clinical Virtual Reality: The INTREPID Project
Authors: Claudia Repetto, Alessandra Gorini, Cinzia Vigna, Davide Algeri, Federica Pallavicini, Giuseppe Riva.
Institutions: Istituto Auxologico Italiano, Università Cattolica del Sacro Cuore.
Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant, unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point to the necessity of finding new, efficient strategies to treat it. Together with cognitive-behavioral treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to learn. The INTREPID project aims to implement a new instrument to treat anxiety-related disorders and to test its clinical efficacy in reducing anxiety-related symptoms. The innovation of this approach is the combination of virtual reality and biofeedback, such that the virtual environment is directly modified by the biofeedback output. In this way, the patient is made aware of his or her reactions through the real-time modification of some features of the VR environment. Using mental exercises, the patient learns to control these physiological parameters and, using the feedback provided by the virtual environment, is able to gauge his or her success. The supplemental use of portable devices, such as PDAs or smartphones, allows the patient to perform at home, individually and autonomously, the same exercises experienced in the therapist's office. The goal is to anchor the learned protocol in a real-life context, thus enhancing patients' ability to deal with their symptoms. The expected result is better and faster learning of relaxation techniques, and thus increased effectiveness of the treatment compared with traditional clinical protocols.
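The VR-biofeedback coupling can be sketched abstractly as a loop in which a physiological reading drives a scene parameter on each update (all names and mappings below are invented for illustration, not the INTREPID implementation):

    # Conceptual biofeedback loop: calmer patient, clearer scene.
    import random, time

    def read_heart_rate():
        # Stand-in for a real biofeedback sensor (bpm).
        return random.uniform(60, 100)

    def update_scene(hr, calm_hr=65.0, max_hr=100.0):
        # Map heart rate to, e.g., fog density in the virtual environment.
        fog = min(max((hr - calm_hr) / (max_hr - calm_hr), 0.0), 1.0)
        print(f"heart rate {hr:.0f} bpm -> fog density {fog:.2f}")

    for _ in range(3):                # per-update feedback loop
        update_scene(read_heart_rate())
        time.sleep(0.1)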
Neuroscience, Issue 33, virtual reality, biofeedback, generalized anxiety disorder, Intrepid, cybertherapy, cyberpsychology

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
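JoVE does not disclose its matching algorithm; purely as an illustration, one common approach is bag-of-words cosine similarity between an abstract and each video description:

    # Generic text-matching sketch (not JoVE's actual algorithm).
    from collections import Counter
    import math

    def cosine(a: Counter, b: Counter):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def tokens(text):
        return Counter(text.lower().split())

    abstract = "virtual reality arm ownership illusion"
    videos = {
        "RHI crossmodal congruency": "rubber hand illusion crossmodal touch",
        "Two-photon VR imaging": "two-photon imaging virtual reality mouse",
    }
    ranked = sorted(videos,
                    key=lambda k: cosine(tokens(abstract), tokens(videos[k])),
                    reverse=True)
    print(ranked)   # most similar video titles first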

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library relevant to the topic of a given abstract. In these cases, our algorithm does its best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.