JoVE Visualize
PubMed Article
Virtual reality therapy for adults post-stroke: a systematic review and meta-analysis exploring virtual environments and commercial games in therapy.
Published: 01-01-2014
The objective of this analysis was to systematically review the evidence for virtual reality (VR) therapy in an adult post-stroke population in both custom-built virtual environments (VE) and commercially available gaming systems (CG).
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Published: 03-27-2013
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
18 Related JoVE Articles
Development of a Virtual Reality Assessment of Everyday Living Skills
Authors: Stacy A. Ruse, Vicki G. Davis, Alexandra S. Atkins, K. Ranga R. Krishnan, Kolleen H. Fox, Philip D. Harvey, Richard S.E. Keefe.
Institutions: NeuroCog Trials, Inc., Duke-NUS Graduate Medical Center, Duke University Medical Center, Fox Evaluation and Consulting, PLLC, University of Miami Miller School of Medicine.
Cognitive impairments affect the majority of patients with schizophrenia, and these impairments predict poor long-term psychosocial outcomes. Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements. Measures of “functional capacity” index the extent to which individuals have the potential to perform skills required for real-world functioning. Current data do not support the recommendation of any single instrument for the measurement of functional capacity. The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming-based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT's sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve the validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
Behavior, Issue 86, Virtual Reality, Cognitive Assessment, Functional Capacity, Computer Based Assessment, Schizophrenia, Neuropsychology, Aging, Dementia
Two-photon Calcium Imaging in Mice Navigating a Virtual Reality Environment
Authors: Marcus Leinweber, Pawel Zmarz, Peter Buchmann, Paul Argast, Mark Hübener, Tobias Bonhoeffer, Georg B. Keller.
Institutions: Friedrich Miescher Institute for Biomedical Research, Max Planck Institute of Neurobiology, ETH Zurich.
In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior1-6. Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems that arise in this experimental setup, which we address here, are: minimizing brain motion-related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to perform pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.
Behavior, Issue 84, Two-photon imaging, Virtual Reality, mouse behavior, adeno-associated virus, genetically encoded calcium indicators
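The pupil-tracking software itself is provided by the authors; as a rough illustration of how such tracking commonly works, the sketch below finds the pupil as the largest dark blob in a grayscale eye-camera frame. It assumes OpenCV, and the threshold value and morphology kernel are illustrative choices, not taken from the authors' code.

```python
# Minimal pupil-tracking sketch (not the authors' sample software):
# locate the pupil as the largest dark blob in a grayscale eye-camera frame.
import cv2
import numpy as np

def pupil_center(frame_gray, thresh=40):
    """Return (x, y) of the pupil centroid, or None if no blob is found."""
    # Dark pupil: keep pixels below the threshold.
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # Remove speckle noise before blob detection.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # largest dark blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```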
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago, Rehabilitation Institute of Chicago.
Recent research testing interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can allow researchers across the globe to perform similar experiments using shared code, so that modular "switching out" of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
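To illustrate the wrapper-class idea (the actual implementation is a C++ class integrated into H3DAPI), here is a minimal, language-agnostic sketch in Python. The device classes and vendor calls (read_xyz, command_force) are hypothetical placeholders; the point is that experiment code depends only on the common interface, so one robot can be swapped for another without recoding.

```python
# Sketch of the wrapper-class idea: one abstract haptic-device interface,
# with a concrete adapter per robot. Device classes are hypothetical.
from abc import ABC, abstractmethod

class HapticDevice(ABC):
    """Common interface: experiment code depends only on this class."""
    @abstractmethod
    def get_position(self) -> tuple[float, float, float]: ...
    @abstractmethod
    def set_force(self, fx: float, fy: float, fz: float) -> None: ...

class PhantomAdapter(HapticDevice):
    """Adapter wrapping a vendor-specific driver (hypothetical calls)."""
    def __init__(self, driver):
        self._driver = driver
    def get_position(self):
        return self._driver.read_xyz()          # hypothetical vendor call
    def set_force(self, fx, fy, fz):
        self._driver.command_force(fx, fy, fz)  # hypothetical vendor call

def spring_to_origin(dev: HapticDevice, k: float = 50.0):
    """Render a virtual spring; works unchanged with any HapticDevice."""
    x, y, z = dev.get_position()
    dev.set_force(-k * x, -k * y, -k * z)
```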
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article describes the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We describe the impact of a protocol that has shown great promise for the treatment of motor dysfunction after stroke, which consists of bilateral stimulation of the primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal1, with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data2, and nearly real-time analyses3. Haptic robots provide precise measurement and control of the position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported at its center, to the robot4, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field of the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments5,6, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal-to-noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, versus ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot. The effect of the handle on the robot's kinematics is minimal since it is lightweight (~2.6 lb) but extremely stiff 3/4" graphite and well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space; when combined with virtual reality, it allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification5,6.
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
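As a worked illustration of the SNR measure quoted above (mean signal divided by the standard deviation of the noise), here is a minimal sketch, assuming NumPy arrays holding a signal region and a background (noise) region of an fMRI image; the ROI selection itself is left to the user.

```python
# Minimal sketch of the SNR measure described above:
# mean signal divided by the standard deviation of background noise.
import numpy as np

def fmri_snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """SNR = mean(signal ROI) / std(background/noise ROI)."""
    return float(signal_roi.mean() / background_roi.std())

# With this measure the authors report SNR of ~380 with no robot present,
# ~330 with the shielded robot, and ~250 with an unshielded robot.
```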
A Novel Technique of Rescuing Capsulorhexis Radial Tear-out using a Cystotome
Authors: Shah M. R. Karim, Chin T. Ong, Mizanur R. Miah, Tamsin Sleep, Abdul Hanifudin.
Institutions: Hairmyres Hospital, NHS Lanarkshire, Royal Devon and Exeter NHS Foundation Trust, National Institute of Ophthalmology, South Devon Healthcare NHS Trust.
Purpose: To demonstrate a capsulorhexis radial tear-out rescue technique using a cystotome on a virtual reality cataract surgery simulator and in a human eye.
Method: When a capsulorhexis begins to veer radially towards the periphery beyond the pupillary margin, the following steps should be applied without delay. (1) Stop further capsulorhexis manoeuvres and reassess the situation. (2) Fill the anterior chamber with an ophthalmic viscosurgical device (OVD); we recommend mounting the cystotome on a syringe containing OVD so that the anterior chamber can be reinflated rapidly. (3) Leave the capsulorhexis flap unfolded on the lens surface. (4) Tilt the cystotome tip horizontally to avoid cutting or puncturing the flap, and engage it on the flap near the leading edge of the tear, but not too close to the point of the tear. (5) Gently push or pull the leading edge of the tear opposite to the direction of the tear. (6) The leading tearing edge will start to make a 'U-turn'; maintain tension on the flap until the tearing edge returns to the desired trajectory.
Results: Using our technique, a surgeon can respond instantly to a radial tear-out without having to change surgical instruments. Changing surgical instruments at this critical stage risks further radial tearing due to sudden shallowing of the anterior chamber caused by forward pressure from the vitreous. Our technique also has the advantage of reducing corneal wound distortion and subsequent anterior chamber collapse.
Discussion: The EYESi Surgical Simulator is a realistic training platform for surgeons to practice complex capsulorhexis tear-out techniques. Capsulorhexis is the most important and complex part of the phacoemulsification and endocapsular intraocular lens implantation procedure, and a successful cataract surgery depends on achieving a good capsulorhexis. During capsulorhexis, surgeons may face a challenging situation such as a radial tear-out and must learn to tackle the problem promptly without making the situation worse. Other rescue methods using capsulorhexis forceps have been described; however, we believe our method is quicker, more effective, and easier to manipulate, as demonstrated on the EYESi surgical simulator and on a human eye.
Acknowledgments: We would like to thank Dr. Wael El Gendy for the video clip.
Disclosures: We have nothing to disclose.
Medicine, Issue 47, Phacoemulsification surgery, cataract surgery, capsulorhexis, capsulotomy, technique, continuous curvilinear capsulorhexis, cystotome, capsulorhexis radial tear, capsulorhexis complication
A Novel Capsulorhexis Technique Using Shearing Forces with Cystotome
Authors: Shah M. R. Karim, Chin T. Ong, Tamsin J. Sleep.
Institutions: Hairmyres Hospital, NHS Lanarkshire, Department of Ophthalmology, South Devon Healthcare NHS Trust.
Purpose: To demonstrate a capsulorhexis technique using predominantly shearing forces with a cystotome on a virtual reality simulator and on a human eye. Method: Our technique involves creating the initial anterior capsular tear with a cystotome to raise a flap. The flap is left unfolded on the lens surface. The cystotome tip is tilted horizontally and engaged on the flap near the leading edge of the tear. The cystotome is moved in a circular fashion to direct the vector forces. The loose flap is constantly swept towards the centre so that it does not obscure the view of the tearing edge. Results: Our technique has the advantage of reducing corneal wound distortion and subsequent anterior chamber collapse. The capsulorhexis flap is moved away from the leading edge of the tear, allowing better visualisation of the direction of the tear. This technique offers superior control of the capsulorhexis by allowing the surgeon to change the direction of the tear to achieve the desired capsulorhexis size. Conclusions: The EYESi Surgical Simulator is a realistic training platform for surgeons to practice complex capsulorhexis techniques. The shearing forces technique is a suitable alternative and in some cases a far better technique for achieving the desired capsulorhexis.
JoVE Medicine, Issue 39, Phacoemulsification surgery, cataract surgery, capsulorhexis, capsulotomy, technique, Continuous curvilinear capsulorhexis, cystotome
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
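The keywords mention minimum-norm estimation; as a hedged sketch of that source-analysis step, the snippet below uses MNE-Python, a common open-source package, though not necessarily the pipeline used at the London Baby Lab. The evoked data, forward model (built from the individual or age-matched MRI), and noise covariance are assumed to come from earlier preprocessing steps.

```python
# Hedged sketch of minimum-norm source reconstruction with MNE-Python.
# 'evoked', 'fwd' (forward model from the individual MRI), and 'noise_cov'
# are assumed to exist from earlier acquisition and preprocessing steps.
from mne.minimum_norm import make_inverse_operator, apply_inverse

def mne_sources(evoked, fwd, noise_cov, snr=3.0):
    """Return a source estimate of the cortical generators of the evoked data."""
    inv = make_inverse_operator(evoked.info, fwd, noise_cov)
    lambda2 = 1.0 / snr ** 2        # regularization from an assumed SNR
    return apply_inverse(evoked, inv, lambda2, method="MNE")
```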
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded stained samples, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
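Of the four categorical approaches, a simple threshold-based, semi-automated step is the easiest to sketch. The snippet below, a sketch assuming scikit-image and a NumPy volume in which stained features are dark, applies a global Otsu threshold and keeps connected components above a minimum size; it illustrates the general idea, not the authors' custom-designed algorithms.

```python
# Sketch of a simple semi-automated segmentation step for a 3D EM volume:
# global Otsu threshold followed by connected-component labelling.
# Assumes a NumPy volume in which the features of interest are dark (stained).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_dark_features(volume: np.ndarray, min_voxels: int = 100):
    t = threshold_otsu(volume)
    mask = volume < t                    # stained features are darker
    labels = label(mask)                 # 3D connected components
    keep = [r.label for r in regionprops(labels) if r.area >= min_voxels]
    return labels, keep                  # label image + surviving feature ids
```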
Video-rate Scanning Confocal Microscopy and Microendoscopy
Authors: Alexander J. Nichols, Conor L. Evans.
Institutions: Harvard University , Harvard-MIT, Harvard Medical School.
Confocal microscopy has become an invaluable tool in biology and the biomedical sciences, enabling rapid, high-sensitivity, and high-resolution optical sectioning of complex systems. Confocal microscopy is routinely used, for example, to study specific cellular targets1, monitor dynamics in living cells2-4, and visualize the three-dimensional evolution of entire organisms5,6. Extensions of confocal imaging systems, such as confocal microendoscopes, allow for high-resolution imaging in vivo7 and are currently being applied to disease imaging and diagnosis in clinical settings8,9. Confocal microscopy provides three-dimensional resolution by creating so-called "optical sections" using straightforward geometrical optics. In a standard wide-field microscope, fluorescence generated from a sample is collected by an objective lens and relayed directly to a detector. While acceptable for imaging thin samples, thick samples become blurred by fluorescence generated above and below the objective focal plane. In contrast, confocal microscopy enables virtual, optical sectioning of samples, rejecting out-of-focus light to build high-resolution three-dimensional representations of samples. Confocal microscopes achieve this feat by using a confocal aperture in the detection beam path. The fluorescence collected from a sample by the objective is relayed back through the scanning mirrors and through the primary dichroic mirror, a mirror carefully selected to reflect shorter wavelengths such as the laser excitation beam while passing the longer, Stokes-shifted fluorescence emission. This long-wavelength fluorescence signal is then passed to a pair of lenses on either side of a pinhole that is positioned at a plane exactly conjugate with the focal plane of the objective lens. Photons collected from the focal volume of the object are collimated by the objective lens and are focused by the confocal lenses through the pinhole. Fluorescence generated above or below the focal plane will therefore not be collimated properly, and will not pass through the confocal pinhole1, creating an optical section in which only light from the microscope focus is visible (Figure 1). Thus the pinhole effectively acts as a virtual aperture in the focal plane, confining the detected emission to only one limited spatial location. Modern commercial confocal microscopes offer users fully automated operation, making formerly complex imaging procedures relatively straightforward and accessible. Despite the flexibility and power of these systems, commercial confocal microscopes are not well suited for all confocal imaging tasks, such as many in vivo imaging applications. Without the ability to create customized imaging systems to meet their needs, important experiments can remain out of reach for many scientists. In this article, we provide a step-by-step method for the complete construction of a custom, video-rate confocal imaging system from basic components. The upright microscope will be constructed using a resonant galvanometric mirror to provide the fast scanning axis, while a standard-speed galvanometric mirror will scan the slow axis. To create a precisely scanned beam in the objective lens focus, these mirrors will be positioned at the so-called telecentric planes using four relay lenses. Confocal detection will be accomplished using a standard, off-the-shelf photomultiplier tube (PMT), and the images will be captured and displayed using a Matrox framegrabber card and the included software.
Bioengineering, Issue 56, Microscopy, confocal microscopy, microendoscopy, video-rate, fluorescence, scanning, in vivo imaging
3D-Neuronavigation In Vivo Through a Patient's Brain During a Spontaneous Migraine Headache
Authors: Alexandre F. DaSilva, Thiago D. Nascimento, Tiffany Love, Marcos F. DosSantos, Ilkka K. Martikainen, Chelsea M. Cummiford, Misty DeBoer, Sarah R. Lucas, MaryCatherine A. Bender, Robert A. Koeppe, Theodore Hall, Sean Petty, Eric Maslowski, Yolanda R. Smith, Jon-Kar Zubieta.
Institutions: University of Michigan School of Dentistry, University of Michigan School of Dentistry, University of Michigan, University of Michigan, University of Michigan, University of Michigan.
A growing body of research, generated primarily from MRI-based studies, shows that migraine appears to occur, and possibly endure, due to the alteration of specific neural processes in the central nervous system. However, information is lacking on the molecular impact of these changes, especially on the endogenous opioid system during migraine headaches, and neuronavigation through these changes has never been done. This study aimed to investigate, using a novel 3D immersive and interactive neuronavigation (3D-IIN) approach, the endogenous µ-opioid transmission in the brain during a migraine headache attack in vivo. This is arguably one of the most central neuromechanisms associated with pain regulation, affecting multiple elements of the pain experience and analgesia. A 36-year-old female, who had been suffering from migraine for 10 years, was scanned in the typical headache (ictal) and nonheadache (interictal) migraine phases using Positron Emission Tomography (PET) with the selective radiotracer [11C]carfentanil, which allowed us to measure µ-opioid receptor availability in the brain (non-displaceable binding potential, µOR BPND). The short-lived radiotracer was produced by a cyclotron and chemical synthesis apparatus on campus, located in close proximity to the imaging facility. Both PET scans, interictal and ictal, were scheduled during separate mid-late follicular phases of the patient's menstrual cycle. During the ictal PET session her spontaneous headache attack reached severe intensity levels, progressing to nausea and vomiting at the end of the scan session. There were reductions in µOR BPND in the pain-modulatory regions of the endogenous µ-opioid system during the ictal phase, including the cingulate cortex, nucleus accumbens (NAcc), thalamus (Thal), and periaqueductal gray matter (PAG), indicating that µORs were already occupied by endogenous opioids released in response to the ongoing pain. To our knowledge, this is the first time that changes in µOR BPND during a migraine headache attack have been neuronavigated using a novel 3D approach. This method allows for interactive research and educational exploration of a migraine attack in an actual patient's neuroimaging dataset.
Medicine, Issue 88, μ-opioid, opiate, migraine, headache, pain, Positron Emission Tomography, molecular neuroimaging, 3D, neuronavigation
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Authors: Mikhail Kislin, Ekaterina Mugantseva, Dmitry Molotkov, Natalia Kulesskaya, Stanislav Khirug, Ilya Kirilkin, Evgeny Pryazhnikov, Julia Kolikova, Dmytro Toptunov, Mikhail Yuryev, Rashid Giniatullin, Vootele Voikar, Claudio Rivera, Heikki Rauvala, Leonard Khiroug.
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland, University of Helsinki.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopical data obtained from a living animal’s brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Empty Value, Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
The Measurement and Treatment of Suppression in Amblyopia
Authors: Joanna M. Black, Robert F. Hess, Jeremy R. Cooperstock, Long To, Benjamin Thompson.
Institutions: University of Auckland, McGill University , McGill University .
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working-age population. Current estimates put the prevalence of amblyopia at approximately 1-3%1-3, the majority of cases being monocular2. Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains when normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself4,5. Amblyopia is characterized by both monocular and binocular deficits6,7, which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions8. Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia9,10. Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of its degree. Here we describe a technique for measuring amblyopic suppression with a compact, portable device11,12. The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way we present visual stimuli to measure suppression. Stimuli are shown to the amblyopic eye at high contrast, while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is a measure of the "balance point" and is a direct measure of suppression. This technique has been validated psychophysically both in control13,14 and patient6,9,11 populations. In addition to measuring suppression, this technique also forms the basis of a novel form of treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia12,15,16. This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device15.
Medicine, Issue 70, Ophthalmology, Neuroscience, Anatomy, Physiology, Amblyopia, suppression, visual cortex, binocular vision, plasticity, strabismus, anisometropia
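As a rough sketch of how a balance point might be extracted from such data: accuracy on the signal/noise task falls as the contrast shown to the non-amblyopic eye rises, and the balance point is the contrast at which performance crosses the criterion where neither eye dominates. The numbers and the linear interpolation below are illustrative assumptions, not the authors' psychophysical procedure.

```python
# Hedged sketch of estimating a "balance point": the fellow-eye contrast
# at which task accuracy crosses the level where neither eye dominates.
import numpy as np

def balance_point(contrasts, accuracy, criterion=0.5):
    """Linearly interpolate the contrast at which accuracy hits criterion."""
    c = np.asarray(contrasts, dtype=float)
    a = np.asarray(accuracy, dtype=float)
    order = np.argsort(c)
    # Accuracy falls as contrast rises, so reverse to get increasing x for interp.
    return float(np.interp(criterion, a[order][::-1], c[order][::-1]))

# Illustrative numbers only: accuracy drops as the non-amblyopic eye's
# (noise) contrast increases; the crossing here is at contrast ~0.48.
print(balance_point([0.1, 0.2, 0.4, 0.8], [0.9, 0.75, 0.55, 0.3]))
```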
The use of Biofeedback in Clinical Virtual Reality: The INTREPID Project
Authors: Claudia Repetto, Alessandra Gorini, Cinzia Vigna, Davide Algeri, Federica Pallavicini, Giuseppe Riva.
Institutions: Istituto Auxologico Italiano, Università Cattolica del Sacro Cuore.
Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant, unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point to the need for new, efficient strategies to treat it. Together with cognitive-behavioral treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation of being hard to learn. The INTREPID project aims to implement a new instrument to treat anxiety-related disorders and to test its clinical efficacy in reducing anxiety-related symptoms. The innovation of this approach is the combination of virtual reality and biofeedback, such that the former is directly modified by the output of the latter. In this way, the patient is made aware of his or her reactions through the real-time modification of some features of the VR environment. Using mental exercises, the patient learns to control these physiological parameters and, using the feedback provided by the virtual environment, is able to gauge his or her success. The supplemental use of portable devices, such as PDAs or smartphones, allows the patient to perform at home, individually and autonomously, the same exercises experienced in the therapist's office. The goal is to anchor the learned protocol in a real-life context, thus enhancing the patients' ability to deal with their symptoms. The expected result is better and faster learning of relaxation techniques, and thus increased effectiveness of the treatment compared with traditional clinical protocols.
Neuroscience, Issue 33, virtual reality, biofeedback, generalized anxiety disorder, Intrepid, cybertherapy, cyberpsychology
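The core biofeedback loop described above, in which a physiological signal continuously reshapes the virtual environment, can be sketched as follows. The sensor and renderer objects, the heart-rate mapping, and the "fire_intensity" parameter are hypothetical placeholders, not INTREPID code.

```python
# Sketch of a biofeedback loop: a physiological signal continuously
# modulates a feature of the virtual environment. 'sensor' and 'renderer'
# are hypothetical placeholders for the acquisition and VR systems.
import time

def biofeedback_loop(sensor, renderer, baseline_hr=70.0, gain=0.05):
    while renderer.running():
        hr = sensor.read_heart_rate()        # e.g. beats per minute
        # Map arousal above baseline onto a visible VR feature, so the
        # patient can gauge success at relaxing:
        arousal = max(0.0, (hr - baseline_hr) * gain)
        renderer.set_parameter("fire_intensity", min(1.0, arousal))
        time.sleep(0.1)                      # ~10 Hz update rate
```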
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Authors: Nicole C. Huff, David J. Zielinski, Matthew E. Fecteau, Rachael Brady, Kevin S. LaBar.
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive, 6-sided VR environment and present fear conditioning data. In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, the cues that form the environmental configuration include not only visual elements, but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile, and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real-world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and the associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world while at the same time allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or analyze mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the conditioned stimulus (CS) in order to further evoke orienting responses and a feeling of "presence" in subjects2. Skin conductance response served as the dependent measure of fear acquisition, memory retention, and extinction.
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
In situ Quantification of Pancreatic Beta-cell Mass in Mice
Authors: Abraham Kim, German Kilimnik, Manami Hara.
Institutions: University of Chicago.
Tracing changes of specific cell populations in health and disease is an important goal of biomedical research. The process of monitoring pancreatic beta-cell proliferation and islet growth is particularly challenging. We have developed a method to capture the distribution of beta-cells in the intact pancreas of transgenic mice with fluorescence-tagged beta-cells, using a macro written for ImageJ. Following pancreatic dissection and tissue clearing, the entire pancreas is captured as a virtual slice, after which the GFP-tagged beta-cells are examined. The analysis includes the quantification of total beta-cell area, islet number, and size distribution with reference to specific parameters and locations for each islet and for small clusters of beta-cells. The entire distribution of islets can be plotted in three dimensions, and the information on the size and shape of each islet allows a quantitative and qualitative comparison of changes in overall beta-cell area at a glance.
Cellular Biology, Issue 40, beta-cells, islets, mouse, pancreas
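The quantification itself is performed by the authors' ImageJ macro; as an illustrative analogue of the same measurements, the sketch below computes total beta-cell area, islet count, and the islet size distribution from a binary mask of the GFP signal. The pixel-size calibration is an assumed parameter.

```python
# Illustrative Python analogue of the quantification described above:
# total beta-cell area, islet count, and islet size distribution from a
# binary mask of GFP-tagged beta-cells. Pixel size is an assumed calibration.
import numpy as np
from scipy import ndimage

def quantify_islets(gfp_mask: np.ndarray, um_per_px: float = 1.0):
    labels, n_islets = ndimage.label(gfp_mask)        # connected clusters
    sizes_px = np.bincount(labels.ravel())[1:]        # skip background label 0
    sizes_um2 = sizes_px * um_per_px ** 2
    return {
        "total_beta_cell_area_um2": float(sizes_um2.sum()),
        "islet_count": int(n_islets),
        "islet_sizes_um2": np.sort(sizes_um2)[::-1],  # size distribution
    }
```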
What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms display the most relevant videos available, which can sometimes result in matched videos with only a slight relation.
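As an illustration of how such abstract-to-video matching can be done in general (JoVE's actual algorithm is not described here), the sketch below ranks video descriptions against an abstract by TF-IDF cosine similarity, assuming scikit-learn.

```python
# Hedged sketch of one generic way to match abstracts to video descriptions:
# TF-IDF cosine similarity. Not JoVE's actual matching algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_matches(abstract: str, video_texts: list[str], k: int = 10):
    """Return (video index, similarity) pairs for the k best matches."""
    vec = TfidfVectorizer(stop_words="english")
    mat = vec.fit_transform([abstract] + video_texts)
    sims = cosine_similarity(mat[0:1], mat[1:]).ravel()
    ranked = sims.argsort()[::-1][:k]      # indices of best-matching videos
    return [(int(i), float(sims[i])) for i in ranked]
```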