Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Upright Imaging of Drosophila Embryos
Institutions: Case Western Reserve University.
Several well-known morphogenetic gradients and cellular movements occur along the dorsal/ventral axis of the Drosophila embryo. However, the current techniques used to view such processes are somewhat limited. The following protocol describes a new technique for mounting fixed and labeled Drosophila embryos for coronal viewing with confocal imaging. This method consists of embedding embryos between two layers of glycerin jelly mounting media, and imaging jelly strips positioned upright. The first step for sandwiching the embryos is to make a thin bedding of glycerin jelly on a slide. Next, embryos are carefully aligned on this surface and covered with a second layer of jelly. After the second layer is solidified, strips of jelly are cut and flipped upright for imaging. Alternatives are described for visualizing the embryos depending upon the type of microscope stand to be used. Since all cells along the dorsal-ventral axis are imaged within a single confocal Z-plane, our method allows precise measurement and comparison of fluorescent signals without the photobleaching or light scattering common to 3D reconstructions of longitudinally mounted embryos.
Developmental Biology, Issue 43, Drosophila, Confocal imaging, Multiplex in situ hybridization, Embryo, Insect Development
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition is compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally, we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
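The distinction between the two projection types mentioned above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' stimulus code: it rotates 3D points about the vertical axis (a rotation-in-depth) and projects them either orthographically (depth discarded) or perspectively (division by depth, so nearer features appear larger); the `focal_length` parameter is an assumption for the sketch.

```python
import numpy as np

def project(points, focal_length=None):
    """Project Nx3 points (camera at origin, looking down +z) onto the image plane.

    With focal_length=None an orthographic projection is used (x, y kept,
    depth discarded); otherwise a perspective projection divides by depth.
    """
    pts = np.asarray(points, dtype=float)
    if focal_length is None:
        return pts[:, :2].copy()
    return focal_length * pts[:, :2] / pts[:, 2:3]

def rotate_y(points, angle_deg):
    """Rotate Nx3 points about the vertical (y) axis: a rotation-in-depth."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                    [0.0,       1.0, 0.0      ],
                    [-np.sin(a), 0.0, np.cos(a)]])
    return np.asarray(points, dtype=float) @ rot.T
```

Under orthographic projection a rotation-in-depth changes only foreshortening, while under perspective projection it also changes relative feature sizes with depth, which is one reason the two projections can support different viewpoint-sensitivity measurements.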
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can do path integration based exclusively on visual [2-3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate [6-7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain.
Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8-9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed.
Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal, and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen.
Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating the angle one has moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
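The pointing task above has a geometric ground truth: given the two segment lengths and the turn angle, the correct homeward direction can be computed in the plane of movement. The following sketch (my own reconstruction, not the study's analysis code) computes the distance and the pointing angle, expressed relative to the observer's final heading, against which observed over- or underestimates could be scored.

```python
import math

def homing_direction(seg1=0.4, seg2=1.0, turn_deg=90.0):
    """Return (distance, angle) back to the start after a two-segment trajectory.

    The observer moves seg1 meters, turns by turn_deg, then moves seg2 meters.
    The returned angle is relative to the final heading (positive = leftward),
    i.e. the angle an observer should point through to indicate the origin.
    """
    # End position in the plane: first segment along +x, then the turn.
    a = math.radians(turn_deg)
    x = seg1 + seg2 * math.cos(a)
    y = seg2 * math.sin(a)
    # Vector from the end position back to the origin, in world axes.
    bx, by = -x, -y
    # Rotate that vector into the final-heading frame (heading = turn angle).
    hx = bx * math.cos(-a) - by * math.sin(-a)
    hy = bx * math.sin(-a) + by * math.cos(-a)
    distance = math.hypot(x, y)
    angle = math.degrees(math.atan2(hy, hx))
    return distance, angle
```

For the study's segment lengths (0.4 m then 1 m) and a 90° turn, the origin lies mostly behind and slightly to the side of the final position; comparing reported pointing angles to this value quantifies the angular bias described above.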
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopical data obtained from a living animal’s brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation, or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
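The EOD timing stage described above can be illustrated with a minimal pulse detector. This is a simplified stand-in for the authors' real-time method, not their implementation: it finds upward threshold crossings in a voltage trace and refines each event to the local peak; the default half-maximum threshold is an assumption.

```python
import numpy as np

def eod_pulse_times(signal, fs, threshold=None):
    """Estimate EOD pulse times (s) from a recorded voltage trace.

    Detects upward threshold crossings, then refines each event to the
    local peak. fs is the sampling rate in Hz; the default threshold
    (half the maximum absolute amplitude) is a placeholder choice.
    """
    sig = np.asarray(signal, dtype=float)
    if threshold is None:
        threshold = 0.5 * np.max(np.abs(sig))
    above = sig >= threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1
    times = []
    for i in onsets:
        j = i
        while j + 1 < len(sig) and sig[j + 1] > sig[j]:
            j += 1  # walk uphill to the local peak
        times.append(j / fs)
    return np.array(times)
```

In the actual setup, distortions from body movement are handled by spatial averaging across electrodes before a timing stage of this kind; the sketch covers only the single-channel timing step.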
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
Using MazeSuite and Functional Near Infrared Spectroscopy to Study Learning in Spatial Navigation
Institutions: Drexel University.
MazeSuite is a complete toolset to prepare, present, and analyze navigational and spatial experiments [1]. MazeSuite can be used to design and edit adapted virtual 3D environments, track a participant's behavioral performance within the virtual environment, and synchronize with external devices for physiological and neuroimaging measures, including electroencephalography and eye tracking.
Functional near-infrared spectroscopy (fNIR) is an optical brain imaging technique that enables continuous, noninvasive, and portable monitoring of changes in cerebral blood oxygenation related to human brain functions [2-7]. Over the last decade fNIR has been used to effectively monitor cognitive tasks such as attention, working memory, and problem solving [7-11]. fNIR can be implemented in the form of a wearable and minimally intrusive device, and it has the capacity to monitor brain activity in ecologically valid environments.
Cognitive functions assessed through task performance involve patterns of prefrontal cortex (PFC) activation that vary between initial novel task performance, after practice, and during retention [12]. Using positron emission tomography (PET), Van Horn and colleagues found that regional cerebral blood flow in the right frontal lobe increased during the encoding (i.e., initial naïve performance) of spatial navigation of virtual mazes, while there was little to no activation of the frontal regions after practice and during retention tests. Furthermore, the effects of contextual interference, a learning phenomenon related to the organization of practice, are evident when individuals acquire multiple tasks under different practice schedules [13,14]. High contextual interference (a random practice schedule) is created when the tasks to be learned are presented in a non-sequential, unpredictable order. Low contextual interference (a blocked practice schedule) is created when the tasks to be learned are presented in a predictable order.
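The blocked-versus-random distinction above is straightforward to operationalize when generating trial orders. The helper below is an illustrative sketch (the function name and API are my own, not MazeSuite's): both schedules contain the same trials per task, differing only in order predictability.

```python
import random

def practice_schedule(tasks, trials_per_task, interference="low"):
    """Build a trial order for the given tasks.

    "low" contextual interference -> blocked schedule (all trials of one
    task, then the next); "high" -> random schedule with identical trial
    counts but an unpredictable order.
    """
    blocked = [t for t in tasks for _ in range(trials_per_task)]
    if interference == "low":
        return blocked
    if interference == "high":
        shuffled = blocked[:]
        random.shuffle(shuffled)
        return shuffled
    raise ValueError("interference must be 'low' or 'high'")
```

Because the two schedules are matched in content, any performance difference between them can be attributed to practice organization rather than practice amount.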
Our goal here is twofold: first, to illustrate the experimental protocol design process and the use of MazeSuite, and second, to demonstrate the setup and deployment of the fNIR brain activity monitoring system using Cognitive Optical Brain Imaging (COBI) Studio software [15]. To illustrate these goals, a subsample from a study is reported to show the use of both MazeSuite and COBI Studio in a single experiment. The study involves the assessment of cognitive activity in the PFC during the acquisition and learning of computer maze tasks under blocked and random orders. Two right-handed adults (one male, one female) performed 315 acquisition, 30 retention, and 20 transfer trials across four days. The design, implementation, data acquisition, and analysis phases of the study are explained with the intention of providing a guideline for future studies.
Neuroscience, Issue 56, Cognitive, optical, brain, imaging, functional near-infrared spectroscopy, fNIR, spatial, navigation, software
Two-photon Calcium Imaging in Mice Navigating a Virtual Reality Environment
Institutions: Friedrich Miescher Institute for Biomedical Research, Max Planck Institute of Neurobiology, ETH Zurich.
In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior [1-6]. Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems addressed in this experimental setup are: minimizing brain-motion-related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to perform pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.
Behavior, Issue 84, Two-photon imaging, Virtual Reality, mouse behavior, adeno-associated virus, genetically encoded calcium indicators
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified to motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment for measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or drug/toxicant exposure.
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Acquisition of High-Quality Digital Video of Drosophila Larval and Adult Behaviors from a Lateral Perspective
Institutions: Willamette University.
Drosophila is a powerful experimental model system for studying the function of the nervous system. Gene mutations that cause dysfunction of the nervous system often produce viable larvae and adults with locomotion-defective phenotypes that are difficult to adequately describe with text or completely represent with a single photographic image. Current modes of scientific publishing, however, support the submission of digital video media as supplemental material to accompany a manuscript. Here we describe a simple and widely accessible microscopy technique for acquiring high-quality digital video of both Drosophila larval and adult phenotypes from a lateral perspective. Video of larval and adult locomotion from a side view is advantageous because it allows the observation and analysis of subtle distinctions and variations in aberrant locomotive behaviors. We have successfully used the technique to visualize and quantify aberrant crawling behaviors in third instar larvae, in addition to adult mutant phenotypes and behaviors including grooming.
Neuroscience, Issue 92, Drosophila, behavior, coordination, crawling, locomotion, nervous system, neurodegeneration, larva
Extracting Visual Evoked Potentials from EEG Data Recorded During fMRI-guided Transcranial Magnetic Stimulation
Institutions: Tel-Aviv University.
Transcranial Magnetic Stimulation (TMS) is an effective method for establishing a causal link between a cortical area and cognitive/neurophysiological effects. Specifically, by creating a transient interference with the normal activity of a target region and measuring changes in an electrophysiological signal, we can establish a causal link between the stimulated brain area or network and the electrophysiological signal that we record. If target brain areas are functionally defined with a prior fMRI scan, TMS can be used to link the fMRI activations with the recorded evoked potentials. However, conducting such experiments presents significant technical challenges, given the high-amplitude artifacts introduced into the EEG signal by the magnetic pulse and the difficulty of successfully targeting areas that were functionally defined by fMRI. Here we describe a methodology for combining these three common tools: TMS, EEG, and fMRI. We explain how to guide the stimulator's coil to the desired target area using anatomical or functional MRI data, how to record EEG during concurrent TMS, how to design an ERP study suitable for the EEG-TMS combination, and how to extract reliable ERPs from the recorded data. We provide representative results from a previously published study, in which fMRI-guided TMS was used concurrently with EEG to show that the face-selective N1 and the body-selective N1 components of the ERP are associated with distinct neural networks in extrastriate cortex. This method allows us to combine the high spatial resolution of fMRI with the high temporal resolution of TMS and EEG, and therefore obtain a comprehensive understanding of the neural basis of various cognitive processes.
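One standard way to handle the high-amplitude pulse artifact mentioned above is to cut out a short window around each pulse and bridge it by interpolation before further EEG processing. The sketch below illustrates that generic step only (linear interpolation, with assumed pre/post window lengths), not the full artifact-handling pipeline of the study.

```python
import numpy as np

def blank_tms_artifact(eeg, fs, pulse_times, pre_ms=2.0, post_ms=10.0):
    """Replace the TMS pulse artifact with a linear interpolation.

    eeg: channels x samples array; fs: sampling rate (Hz); pulse_times:
    pulse onsets in seconds. A window from pre_ms before to post_ms after
    each pulse is cut out and bridged linearly between its edge samples.
    """
    out = np.array(eeg, dtype=float)
    for t in pulse_times:
        i0 = max(int((t - pre_ms / 1000.0) * fs), 1)
        i1 = min(int((t + post_ms / 1000.0) * fs), out.shape[1] - 2)
        n = i1 - i0 + 1
        w = np.linspace(0.0, 1.0, n + 2)[1:-1]  # interior interpolation weights
        for ch in range(out.shape[0]):
            a, b = out[ch, i0 - 1], out[ch, i1 + 1]
            out[ch, i0:i1 + 1] = a + (b - a) * w
    return out
```

Interpolation avoids the ringing that filtering a step-like artifact would produce, at the cost of discarding the few milliseconds of genuine signal inside the blanked window.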
Neuroscience, Issue 87, Transcranial Magnetic Stimulation, Neuroimaging, Neuronavigation, Visual Perception, Evoked Potentials, Electroencephalography, Event-related potential, fMRI, Combined Neuroimaging Methods, Face perception, Body Perception
A Video Demonstration of Preserved Piloting by Scent Tracking but Impaired Dead Reckoning After Fimbria-Fornix Lesions in the Rat
Institutions: Canadian Centre for Behavioural Neuroscience, University of Lethbridge.
Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O’Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landeau et al., 1984; Etienne, 1987; Gallistel, 1990; Maurer and Séguinot, 1995). Piloting requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, possibly muscle and joint receptors, and efference copies of the commands that generate movement. An animal may also use the flows of visual, auditory, and olfactory stimuli caused by its movements. Using a piloting strategy, an animal can use geometric calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance."
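The integration of self-movement cues that defines dead reckoning can be made concrete with a minimal sketch. This is an idealized model, not an analysis from the study: each step is a (turn, distance) self-motion estimate, as an animal might derive from vestibular and efference-copy signals, and the running sum yields the homeward vector.

```python
import math

def dead_reckon(steps):
    """Integrate self-motion cues into a homing vector.

    steps: sequence of (turn_deg, distance) self-motion estimates.
    Returns the distance and world-frame bearing (deg) from the final
    position back to the start, i.e. the dead-reckoned homeward vector.
    """
    x = y = 0.0
    heading = 0.0  # degrees; 0 = initial facing direction
    for turn_deg, dist in steps:
        heading += turn_deg
        x += dist * math.cos(math.radians(heading))
        y += dist * math.sin(math.radians(heading))
    home_dist = math.hypot(x, y)
    home_bearing = math.degrees(math.atan2(-y, -x))
    return home_dist, home_bearing
```

Note that no external landmark enters the computation: errors in each turn or distance estimate accumulate, which is why dead reckoning degrades over long outward paths while piloting does not.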
Although there is considerable evidence that the hippocampus is involved in piloting (O’Keefe and Nadel, 1978; O’Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O’Mara et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however. Ordinarily, it is difficult to be certain that an animal is using a piloting versus a dead reckoning strategy because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus to the use of these strategies. The rats were trained in a new task in which they followed linear or polygon scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested when blindfolded or under infrared light, a spectral wavelength in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food.
To examine the relative contribution of the hippocampus, fimbria–fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
Neuroscience, Issue 26, Dead reckoning, fimbria-fornix, hippocampus, odor tracking, path integration, spatial learning, spatial navigation, piloting, rat, Canadian Centre for Behavioural Neuroscience
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding this methodology and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD).
Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices, and medial temporal regions.
Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
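The resting-state connectivity evaluation mentioned above is, at its core, seed-based correlation: the mean time series of a seed region is correlated with every voxel's time series. The sketch below shows that generic computation (the seed choice in the docstring is an example, not the protocol's specification).

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Seed-based resting-state connectivity map.

    seed_ts: (T,) mean time series of a seed region (e.g. posterior
    cingulate for the DMN); voxel_ts: (V, T) voxel time series.
    Returns the Pearson correlation of each voxel with the seed.
    """
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    v_std = v.std(axis=1)
    return (v @ s) / (len(s) * v_std)
```

In practice the correlation maps are computed after nuisance regression and filtering, then compared across groups (e.g. PTSD vs. control) or related to symptom severity.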
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
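The minimum-norm estimation named in the keywords below has a compact closed form once a head model provides the leadfield (gain) matrix. The sketch shows the standard L2-regularized formula in its simplest form; real pipelines add noise-covariance whitening and depth weighting, which are omitted here.

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=0.1):
    """L2 minimum-norm inverse: estimate source amplitudes from scalp EEG.

    leadfield: (n_channels, n_sources) gain matrix from a head model;
    eeg: (n_channels,) or (n_channels, n_times) sensor data; lam: Tikhonov
    regularization weight. Returns J = L^T (L L^T + lam I)^-1 V.
    """
    L = np.asarray(leadfield, dtype=float)
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, np.asarray(eeg, dtype=float))
```

The accuracy of the leadfield is exactly where pediatric head models matter: tissue conductivities and geometry taken from adult templates distort `L`, and hence the reconstructed sources, for infant data.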
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Like many aquatic animals, the zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video-tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and groups. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters of social cohesion when testing shoals.
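The water-to-air refraction error that the calibration corrects can be illustrated with Snell's law. The sketch below is a simplified geometric illustration under paraxial assumptions, not the article's actual calibration procedure; the refractive index and camera geometry are assumptions.

```python
import math

N_WATER = 1.333  # refractive index of water relative to air (assumed)

def true_depth(apparent_depth_m, view_angle_deg=0.0):
    """Correct the apparent (optically foreshortened) depth of an underwater
    point viewed by a camera in air.

    For a ray leaving the surface at angle theta_air from the vertical,
    Snell's law gives the in-water angle, and similar triangles yield
        true = apparent * tan(theta_air) / tan(theta_water).
    In the paraxial limit (theta_air -> 0) this reduces to true = n * apparent.
    """
    theta_air = math.radians(view_angle_deg)
    if theta_air == 0.0:
        return N_WATER * apparent_depth_m
    theta_water = math.asin(math.sin(theta_air) / N_WATER)
    return apparent_depth_m * math.tan(theta_air) / math.tan(theta_water)

print(round(true_depth(0.15), 3))  # a fish appearing 0.15 m deep is ~0.2 m deep
```

The correction grows with viewing angle, which is why a naive straight-line reconstruction introduces the "considerable error" the system's calibration must remove.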
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
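The time-stamped event records at the heart of the analysis pipeline can be summarized with a short script. The record format and event codes below are hypothetical, chosen only to illustrate the kind of harvesting step described; the actual system uses MATLAB-based tooling.

```python
from collections import Counter

# Hypothetical time-stamped event records: (timestamp_s, event_code) pairs.
# The format and codes are illustrative, not the article's actual schema.
events = [
    (12.0, "hopper1_entry"),
    (12.4, "pellet"),
    (30.1, "hopper2_entry"),
    (31.0, "hopper2_entry"),
    (31.2, "pellet"),
]

def summarize(records):
    """Count events by code and compute inter-event intervals per code."""
    counts = Counter(code for _, code in records)
    times = {}
    for t, code in records:
        times.setdefault(code, []).append(t)
    intervals = {code: [round(b - a, 3) for a, b in zip(ts, ts[1:])]
                 for code, ts in times.items()}
    return counts, intervals

counts, intervals = summarize(events)
print(counts["pellet"])            # 2
print(intervals["hopper2_entry"])  # [0.9]
```

Keeping the raw event stream and deriving counts and intervals from it on demand mirrors the "full data trail" design: every published statistic can be traced back to time-stamped events.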
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Comprehensive Protocol for Manual Segmentation of the Medial Temporal Lobe Structures
Institutions: University of Illinois Urbana-Champaign, University of Illinois Urbana-Champaign, University of Illinois Urbana-Champaign.
The present paper describes a comprehensive protocol for manual tracing of the set of brain regions comprising the medial temporal lobe (MTL): amygdala, hippocampus, and the associated parahippocampal regions (perirhinal, entorhinal, and parahippocampal proper). Unlike most other tracing protocols available, which typically focus on certain MTL areas (e.g., amygdala and/or hippocampus), the integrative perspective adopted by the present tracing guidelines allows for clear localization of all MTL subregions. By integrating information from a variety of sources, including extant tracing protocols separately targeting various MTL structures, histological reports, and brain atlases, and with the complement of illustrative visual materials, the present protocol provides an accurate, intuitive, and convenient guide for understanding MTL anatomy. The need for such tracing guidelines is also emphasized by illustrating possible differences between automatic and manual segmentation protocols. This knowledge can be applied toward research involving not only structural MRI investigations but also structural-functional colocalization and fMRI signal extraction from anatomically defined ROIs, in healthy and clinical groups alike.
Neuroscience, Issue 89, Anatomy, Segmentation, Medial Temporal Lobe, MRI, Manual Tracing, Amygdala, Hippocampus, Perirhinal Cortex, Entorhinal Cortex, Parahippocampal Cortex
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways: voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
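The fractional anisotropy metric that these analyses compare voxelwise has a standard closed form in terms of the three eigenvalues of the diffusion tensor, sketched below (the example eigenvalues are illustrative, not drawn from the article's data):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues.

    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l1-l3)^2)
                   / sqrt(l1^2 + l2^2 + l3^2)
    """
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0.0:
        return 0.0  # no diffusion at all; FA conventionally set to 0
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2
    return math.sqrt(0.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))            # isotropic voxel: 0.0
print(round(fractional_anisotropy(1.7, 0.3, 0.2), 3))  # elongated tensor, ~0.836
```

FA ranges from 0 (isotropic diffusion, e.g. cerebrospinal fluid) to 1 (diffusion restricted to a single axis), which is why it is sensitive to the integrity of coherently oriented WM tracts.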
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Brain Imaging Investigation of the Memory-Enhancing Effect of Emotion
Institutions: University of Alberta, University of Illinois, Urbana-Champaign, Duke University, University of Illinois, Urbana-Champaign.
Emotional events tend to be better remembered than non-emotional events1,2. One goal of cognitive and affective neuroscientists is to understand the neural mechanisms underlying this enhancing effect of emotion on memory. A method that has proven particularly influential in the investigation of the memory-enhancing effect of emotion is the so-called subsequent memory paradigm (SMP). This method was originally used to investigate the neural correlates of non-emotional memories3, and more recently we and others have also applied it successfully to studies of emotional memory (reviewed in4,5-7).
Here, we describe a protocol that allows investigation of the neural correlates of the memory-enhancing effect of emotion using the SMP in conjunction with event-related functional magnetic resonance imaging (fMRI). An important feature of the SMP is that it allows separation of brain activity specifically associated with memory from more general activity associated with perception. Moreover, in the context of investigating the impact of emotional stimuli, the SMP allows identification of brain regions whose activity is susceptible to emotional modulation of both general/perceptual and memory-specific processing. This protocol can be used in healthy subjects8-15, as well as in clinical patients with alterations in the neural correlates of emotion perception and biases in remembering emotional events, such as those suffering from depression and post-traumatic stress disorder (PTSD)16,17.
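The logic of the subsequent memory paradigm reduces to a simple contrast: responses recorded at encoding are sorted post hoc by later memory outcome, and the "Dm" (difference due to memory) effect is the difference between the two trial classes. The sketch below illustrates that contrast with hypothetical amplitudes; real analyses model this within a GLM on the fMRI time series rather than on raw means.

```python
import statistics

# Hypothetical per-trial response amplitudes (e.g., peak signal change in an
# ROI at encoding), sorted by whether the item was subsequently remembered.
remembered = [0.42, 0.51, 0.38, 0.47]
forgotten = [0.30, 0.28, 0.35, 0.31]

def dm_effect(rem, forg):
    """Dm ("difference due to memory") effect: mean response on subsequently
    remembered trials minus mean response on subsequently forgotten trials."""
    return statistics.mean(rem) - statistics.mean(forg)

print(round(dm_effect(remembered, forgotten), 3))  # 0.135
```

A positive Dm effect in a region indicates that encoding-phase activity there predicts later remembering, which is what lets the paradigm separate memory-specific activity from general perceptual activity.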
Neuroscience, Issue 51, Affect, Recognition, Recollection, Dm Effect, Neuroimaging
Brain Imaging Investigation of the Neural Correlates of Observing Virtual Social Interactions
Institutions: University of Alberta, University of Illinois, University of Alberta, University of Alberta, University of Alberta, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
The ability to gauge social interactions is crucial in the assessment of others' intentions. Factors such as facial expressions and body language affect our decisions in personal and professional life alike1. These "friend or foe" judgements are often based on first impressions, which in turn may affect our decisions to "approach or avoid". Previous studies investigating the neural correlates of social cognition tended to use static facial stimuli2. Here, we illustrate an experimental design in which whole-body animated characters were used in conjunction with functional magnetic resonance imaging (fMRI) recordings. Fifteen participants were presented with short movie clips of guest-host interactions in a business setting while fMRI data were recorded; at the end of each movie, participants also provided ratings of the host's behaviour. This design mimics real-life situations more closely, and hence may contribute to a better understanding of the neural mechanisms of social interactions in healthy behaviour, and to gaining insight into possible causes of deficits in social behaviour in clinical conditions such as social anxiety and autism3.
Neuroscience, Issue 53, Social Perception, Social Knowledge, Social Cognition Network, Non-Verbal Communication, Decision-Making, Event-Related fMRI
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data.
In the real world, traumatic events occur in complex environments made up of many cues, engaging all of our sensory modalities. For example, the cues that form an environmental configuration include not only visual elements but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile, and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat 2D computer screen and do not replicate the complexity of real-world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world, while at the same time allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or test mechanistic hypotheses.
In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct 3D virtual reality contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects2. Skin conductance response served as the dependent measure of fear acquisition, memory retention, and extinction.
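One common way to score a skin conductance response is baseline-to-peak: the maximum conductance in a window after stimulus onset minus the level at onset. The sketch below illustrates that scoring rule; the sample rate, response window, and trace values are illustrative assumptions, not the article's actual acquisition parameters.

```python
def scr_amplitude(samples, sample_rate_hz, onset_s, window_s=(1.0, 4.0)):
    """Baseline-to-peak skin conductance response: the maximum conductance in
    a post-onset response window minus the level at stimulus onset."""
    onset_i = int(onset_s * sample_rate_hz)
    lo = onset_i + int(window_s[0] * sample_rate_hz)
    hi = onset_i + int(window_s[1] * sample_rate_hz)
    return max(samples[lo:hi]) - samples[onset_i]

# Hypothetical 2 Hz conductance trace (microsiemens); CS onset at t = 5 s.
trace = [2.0] * 10 + [2.0, 2.1, 2.5, 2.9, 2.6, 2.3, 2.1, 2.0, 2.0, 2.0]
print(round(scr_amplitude(trace, sample_rate_hz=2, onset_s=5.0), 3))  # 0.9
```

Scoring each CS presentation this way yields the per-trial amplitudes used to track acquisition, retention, and extinction across the experiment.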
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
BioMEMS and Cellular Biology: Perspectives and Applications
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, traditional cell culture methodology, which essentially consists of immersing a large population of cells in a homogeneous fluid medium on a homogeneous flat substrate, has become increasingly limiting from both a fundamental and a practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate and the composition of the medium, as well as the neighboring cell types in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well suited to the development of fast, low-cost in vitro systems that allow for high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview
Using Visual and Narrative Methods to Achieve Fair Process in Clinical Care
Institutions: Brandeis University, Brandeis University.
The Institute of Medicine has targeted patient-centeredness as an important area of quality improvement. A major dimension of patient-centeredness is respect for the patient's values, preferences, and expressed needs. Yet specific approaches to gaining this understanding and translating it to quality care in the clinical setting are lacking. From a patient perspective, quality is not a simple concept but is best understood in terms of five dimensions: technical outcomes; decision-making efficiency; amenities and convenience; information and emotional support; and overall patient satisfaction. Failure to consider quality from this five-pronged perspective results in a focus on medical outcomes, without considering the processes central to quality from the patient's perspective and vital to achieving good outcomes. In this paper, we argue for applying the concept of fair process in clinical settings. Fair process involves using a collaborative approach to exploring diagnostic issues and treatments with patients, explaining the rationale for decisions, setting expectations about roles and responsibilities, and implementing a core plan and ongoing evaluation. Fair process opens the door to bringing patient expertise into the clinical setting and the work of developing health care goals and strategies. This paper provides a step-by-step illustration of an innovative visual approach, called photovoice or photo-elicitation, to achieve fair process in clinical work with acquired brain injury survivors and others living with chronic health conditions. Applying this visual tool and methodology in the clinical setting will enhance patient-provider communication; engage patients as partners in identifying challenges, strengths, goals, and strategies; and support evaluation of progress over time.
Asking patients to bring visuals of their lives into the clinical interaction can help to illuminate gaps in clinical knowledge, forge better therapeutic relationships with patients living with chronic conditions such as brain injury, and identify patient-centered goals and possibilities for healing. The process illustrated here can be used by clinicians (primary care physicians, rehabilitation therapists, neurologists, neuropsychologists, psychologists, and others) working with people living with chronic conditions such as acquired brain injury, mental illness, physical disabilities, HIV/AIDS, substance abuse, or post-traumatic stress, and by leaders of support groups for the types of patients described above and their family members or caregivers.
Medicine, Issue 48, person-centered care, participatory visual methods, photovoice, photo-elicitation, narrative medicine, acquired brain injury, disability, rehabilitation, palliative care