Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O’Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landeau et al., 1984; Etienne, 1987; Gallistel, 1990;
Maurer and Séguinot, 1995). Piloting requires the use of relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, possibly from muscle and joint receptors, and from efference copies of the commands that generate movement. An animal may also use the flow of visual, auditory, and olfactory stimuli caused by its own movements. Using a piloting strategy, an animal can apply geometrical calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance."
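The computation underlying dead reckoning can be made concrete with a minimal sketch (illustrative only; the coordinate conventions and values are our assumptions, not taken from the studies cited): self-movement is accumulated into a running position estimate, and the homeward vector is read off directly from that estimate.

```python
import math

def dead_reckon(steps):
    """Integrate successive self-movement steps (heading in degrees,
    distance) into an estimated position relative to the start."""
    x = y = 0.0
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    return x, y

def home_vector(steps):
    """Bearing (degrees) and distance of the direct path back home."""
    x, y = dead_reckon(steps)
    return math.degrees(math.atan2(-y, -x)) % 360, math.hypot(x, y)

# An outward path of three segments; the home vector points straight back.
bearing, dist = home_vector([(0, 1.0), (90, 1.0), (0, 1.0)])
```

Negating the integrated position gives both the "sense of direction" and the "sense of distance" at once, which is also why errors in either cue stream accumulate over the outward path.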
Although there is considerable evidence that the hippocampus is involved in piloting (O’Keefe and Nadel, 1978; O’Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O’Mara et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however. Ordinarily, it is difficult to be certain whether an animal is using a piloting or a dead reckoning strategy, because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus to the use of these strategies. The rats were trained in a new task in which they followed linear or polygonal scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were the dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested when blindfolded or under infrared light, a spectral range in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food.
To examine the relative contribution of the hippocampus, fimbria–fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
17 Related JoVE Articles
A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision
Institutions: University of Pittsburgh, University of Pittsburgh.
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultra-low vision. Six sighted controls and 36 completely blind but otherwise healthy adults (42 subjects in all: 29 male, 13 female; age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course before and after BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed-effects model on the percent preferred walking speed (PPWS) and percent error data to confirm that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that for both percent error and percent preferred walking speed, each of the three courses is distinct, while within each course the three iterations are equivalent. This allows the courses to be randomized during administration.
preferred walking speed (PWS)
course speed (CS)
percentage preferred walking speed (PPWS)
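How these three measures relate can be sketched in a few lines (the numeric values are invented, but the formula follows the standard definition of PPWS as course speed expressed as a percentage of preferred walking speed):

```python
def ppws(course_distance_m, course_time_s, preferred_speed_m_s):
    """Percentage preferred walking speed: speed achieved on the course
    expressed as a percentage of the subject's preferred walking speed."""
    course_speed = course_distance_m / course_time_s   # CS, in m/s
    return 100.0 * course_speed / preferred_speed_m_s  # PPWS, in percent

# A subject with a preferred speed of 1.2 m/s covers a 20 m course in 25 s,
# i.e. a course speed of 0.8 m/s, or two thirds of the preferred speed.
value = ppws(20.0, 25.0, 1.2)
```

A PPWS near 100% indicates that obstacles barely slowed the subject; lower values indicate greater mobility impairment on that course.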
Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included pathfinding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Irrelevant Stimuli and Action Control: Analyzing the Influence of Ignored Stimuli via the Distractor-Response Binding Paradigm
Institutions: Trier University, Trier University.
Selection tasks in which simple stimuli (e.g., letters) are presented and a target stimulus has to be selected against one or more distractor stimuli are frequently used in research on human action control. One important question in these settings is how distractor stimuli, competing with the target stimulus for a response, influence actions. The distractor-response binding paradigm can be used to investigate this influence. It is particularly useful for separately analyzing response retrieval and distractor inhibition effects. Computer-based experiments are used to collect the data (reaction times and error rates). In a number of sequentially presented pairs of stimulus arrays (prime-probe design), participants respond to targets while ignoring distractor stimuli. Importantly, the factors response relation (repetition vs. change) and distractor relation (repetition vs. change) between the arrays of each pair are varied orthogonally. The repetition of the same distractor then has a different effect depending on the response relation (repetition vs. change) between arrays. This result pattern can be explained by response retrieval due to distractor repetition. In addition, distractor inhibition effects are indicated by a general advantage due to distractor repetition. The described paradigm has proven useful for determining relevant parameters of response retrieval effects on human action.
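The logic of the orthogonal design can be illustrated with a toy analysis of the four prime-probe cells (the reaction times are invented and the variable names are ours, not from the paradigm's software):

```python
# Mean probe reaction times (ms) for the four cells of the 2x2 design:
# (response relation, distractor relation). Values are invented.
rt = {
    ("repeat", "repeat"): 480,  # distractor repetition helps when the response repeats
    ("repeat", "change"): 510,
    ("change", "repeat"): 560,  # ...but hurts when the response changes
    ("change", "change"): 540,
}

# Response retrieval: the interaction of the two factors -- the benefit of
# distractor repetition under response repetition minus its effect under
# response change.
retrieval = ((rt[("repeat", "change")] - rt[("repeat", "repeat")])
             - (rt[("change", "change")] - rt[("change", "repeat")]))

# Distractor inhibition: the main effect of distractor repetition,
# averaged over response relation.
inhibition = ((rt[("repeat", "change")] + rt[("change", "change")]) / 2
              - (rt[("repeat", "repeat")] + rt[("change", "repeat")]) / 2)
```

In this hypothetical pattern, repeating the distractor speeds responses when the response also repeats but slows them when it changes (the retrieval interaction), on top of a small overall benefit of distractor repetition (the inhibition main effect).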
Behavior, Issue 87, stimulus-response binding, distractor-response binding, response retrieval, distractor inhibition, event file, action control, selection task
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents are often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations or drug/toxicant exposure.
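Several of the tracking-derived endpoints mentioned above reduce to simple geometry on the tracked coordinates. A minimal sketch, assuming (t, x, y) samples in seconds and meters and a rest-speed threshold of our own choosing:

```python
import math

def path_metrics(samples, rest_speed=0.02):
    """Distance traveled, mean velocity, and time spent moving/resting
    from a list of (t, x, y) tracking samples (seconds, meters)."""
    distance = moving = resting = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        step = math.hypot(x1 - x0, y1 - y0)
        dt = t1 - t0
        distance += step
        if step / dt > rest_speed:  # instantaneous speed above threshold
            moving += dt
        else:
            resting += dt
    total_time = samples[-1][0] - samples[0][0]
    return distance, distance / total_time, moving, resting

# A short illustrative track: move, pause for 1 s, move again.
track = [(0, 0.0, 0.0), (1, 0.3, 0.0), (2, 0.3, 0.0), (3, 0.3, 0.4)]
dist, vel, mov, rest = path_metrics(track)
```

Commercial tracking packages compute these same quantities, plus quadrant occupancy, from higher-rate samples.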
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Extracting Visual Evoked Potentials from EEG Data Recorded During fMRI-guided Transcranial Magnetic Stimulation
Institutions: Tel-Aviv University, Tel-Aviv University.
Transcranial Magnetic Stimulation (TMS) is an effective method for establishing a causal link between a cortical area and cognitive/neurophysiological effects. Specifically, by creating a transient interference with the normal activity of a target region and measuring changes in an electrophysiological signal, we can establish a causal link between the stimulated brain area or network and the electrophysiological signal that we record. If target brain areas are functionally defined with a prior fMRI scan, TMS can be used to link the fMRI activations with the recorded evoked potentials. However, conducting such experiments presents significant technical challenges, given the high-amplitude artifacts introduced into the EEG signal by the magnetic pulse and the difficulty of successfully targeting areas that were functionally defined by fMRI. Here we describe a methodology for combining these three common tools: TMS, EEG, and fMRI. We explain how to guide the stimulator's coil to the desired target area using anatomical or functional MRI data, how to record EEG during concurrent TMS, how to design an ERP study suitable for the EEG-TMS combination, and how to extract reliable ERPs from the recorded data. We provide representative results from a previously published study, in which fMRI-guided TMS was used concurrently with EEG to show that the face-selective N1 and the body-selective N1 components of the ERP are associated with distinct neural networks in extrastriate cortex. This method allows us to combine the high spatial resolution of fMRI with the high temporal resolution of TMS and EEG and thereby obtain a comprehensive understanding of the neural basis of various cognitive processes.
Neuroscience, Issue 87, Transcranial Magnetic Stimulation, Neuroimaging, Neuronavigation, Visual Perception, Evoked Potentials, Electroencephalography, Event-related potential, fMRI, Combined Neuroimaging Methods, Face perception, Body Perception
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sublethal effects on behavior caused by environmental stressors, toxins, or pesticides.
We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables such as latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
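The per-subject, per-day summaries the system produces can be sketched from a time-stamped event record in a few lines (the tuple layout and event codes here are illustrative assumptions, not the system's actual file format; the real analysis code is MATLAB-based):

```python
from collections import Counter

# Each event: (timestamp_s, subject_id, event_code). Invented sample data.
events = [
    (3600 * 1, "m01", "head_entry"),
    (3600 * 2, "m01", "pellet"),
    (3600 * 26, "m01", "head_entry"),  # next day for subject m01
    (3600 * 5, "m02", "pellet"),
]

def daily_counts(events, code):
    """Number of events of a given code, per subject per day."""
    counts = Counter()
    for t, subject, c in events:
        if c == code:
            counts[(subject, int(t // 86400))] += 1  # 86400 s per day
    return counts

per_day = daily_counts(events, "head_entry")
```

Keeping the raw event record and deriving all summaries from it on each harvest is what preserves the full data trail from raw data to published statistics.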
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
The FlyBar: Administering Alcohol to Flies
Institutions: Florida State University, University of Houston.
Fruit flies (Drosophila melanogaster) are an established model for both alcohol research and circadian biology. Recently, we showed that the circadian clock modulates alcohol sensitivity, but not the formation of tolerance. Here, we describe our protocol in detail. Alcohol is administered to the flies using the FlyBar. In this setup, saturated alcohol vapor is mixed with humidified air in set proportions and administered to the flies in four tubes simultaneously. Flies are reared under standardized conditions in order to minimize variation between replicates. Three-day-old flies of different genotypes or treatments are used for the experiments, preferably matching flies of two different time points (e.g., CT 5 and CT 17) to make direct comparisons possible. During the experiment, flies are exposed for 1 hr to the predetermined percentage of alcohol vapor, and the number of flies that exhibit loss of righting reflex (LoRR) or sedation is counted every 5 min. The data can be analyzed using three different statistical approaches. The first is to determine the time at which 50% of the flies have lost their righting reflex and use an analysis of variance (ANOVA) to determine whether significant differences exist between time points. The second is to determine the percentage of flies that show LoRR after a specified number of minutes, followed by an ANOVA. The last is to analyze the whole time series using multivariate statistics. The protocol can also be used for non-circadian experiments or comparisons between genotypes.
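The first analysis, finding the time at which 50% of flies show LoRR, amounts to linear interpolation between successive 5-min counts. A minimal sketch with invented data:

```python
def lorr50(times_min, pct_lorr):
    """Time at which 50% of flies show LoRR, by linear interpolation
    between successive observation points."""
    for (t0, p0), (t1, p1) in zip(zip(times_min, pct_lorr),
                                  zip(times_min[1:], pct_lorr[1:])):
        if p0 <= 50 <= p1:
            return t0 + (50 - p0) * (t1 - t0) / (p1 - p0)
    return None  # 50% never reached within the exposure

# Counts taken every 5 min (percent of flies sedated); invented values.
t50 = lorr50([0, 5, 10, 15, 20], [0, 10, 35, 60, 80])
```

The resulting per-tube times can then be entered into the ANOVA comparing time points or genotypes.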
Neuroscience, Issue 87, neuroscience, alcohol sensitivity, Drosophila, Circadian, sedation, biological rhythms, undergraduate research
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Remote Magnetic Navigation for Accurate, Real-time Catheter Positioning and Ablation in Cardiac Electrophysiology Procedures
Institutions: La Paz University Hospital, Magnetecs Corp., Geffen School of Medicine at UCLA Los Angeles.
New remote navigation systems have been developed to overcome the limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety, and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrial flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. The system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid (msec) changes in the magnetic field magnitude, together with a very flexible magnetized catheter, allows real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate.
Medicine, Issue 74, Anatomy, Physiology, Biomedical Engineering, Surgery, Cardiology, catheter ablation, remote navigation, magnetic, robotic, catheter, positioning, electrophysiology, clinical techniques
Using MazeSuite and Functional Near Infrared Spectroscopy to Study Learning in Spatial Navigation
Institutions: Drexel University, Drexel University.
MazeSuite is a complete toolset to prepare, present, and analyze navigational and spatial experiments1. MazeSuite can be used to design and edit adapted virtual 3D environments, track a participant's behavioral performance within the virtual environment, and synchronize with external devices for physiological and neuroimaging measures, including electroencephalogram and eye tracking.
Functional near-infrared spectroscopy (fNIR) is an optical brain imaging technique that enables continuous, noninvasive, and portable monitoring of changes in cerebral blood oxygenation related to human brain function2-7. Over the last decade fNIR has been used to effectively monitor cognitive tasks such as attention, working memory, and problem solving7-11. fNIR can be implemented in the form of a wearable and minimally intrusive device; it has the capacity to monitor brain activity in ecologically valid environments.
Cognitive functions assessed through task performance involve patterns of prefrontal cortex (PFC) activation that vary between initial novel task performance, after practice, and during retention12. Using positron emission tomography (PET), Van Horn and colleagues found that regional cerebral blood flow increased in the right frontal lobe during the encoding (i.e., initial naïve performance) of spatial navigation of virtual mazes, while there was little to no activation of the frontal regions after practice and during retention tests. Furthermore, the effects of contextual interference, a learning phenomenon related to the organization of practice, are evident when individuals acquire multiple tasks under different practice schedules13,14. High contextual interference (a random practice schedule) is created when the tasks to be learned are presented in a non-sequential, unpredictable order. Low contextual interference (a blocked practice schedule) is created when the tasks to be learned are presented in a predictable order.
Our goal here is twofold: first, to illustrate the experimental protocol design process and the use of MazeSuite, and second, to demonstrate the setup and deployment of the fNIR brain activity monitoring system using Cognitive Optical Brain Imaging (COBI) Studio software15. To illustrate these goals, a subsample from a study is reported to show the use of both MazeSuite and COBI Studio in a single experiment. The study involves the assessment of cognitive activity of the PFC during the acquisition and learning of computer maze tasks under blocked and random orders. Two right-handed adults (one male, one female) performed 315 acquisition, 30 retention, and 20 transfer trials across four days. The design, implementation, data acquisition, and analysis phases of the study are explained with the intention of providing a guideline for future studies.
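The blocked versus random practice orders referred to above can be sketched in a few lines (maze labels are arbitrary placeholders):

```python
import random

def blocked_schedule(tasks, reps):
    """Low contextual interference: all repetitions of one task
    before moving to the next."""
    return [t for t in tasks for _ in range(reps)]

def random_schedule(tasks, reps, seed=0):
    """High contextual interference: the same trials presented in an
    unpredictable order."""
    trials = blocked_schedule(tasks, reps)
    random.Random(seed).shuffle(trials)
    return trials

blocked = blocked_schedule(["maze_A", "maze_B", "maze_C"], 3)
randomized = random_schedule(["maze_A", "maze_B", "maze_C"], 3)
```

Both schedules contain exactly the same trials; only the order, and therefore the contextual interference, differs.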
Neuroscience, Issue 56, Cognitive, optical, brain, imaging, functional near-infrared spectroscopy, fNIR, spatial, navigation, software
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point1. Humans can do path integration based exclusively on visual2-3, auditory4, or inertial cues5. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate6-7. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones5. Movement through physical space therefore does not seem to be accurately represented by the brain.
Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see 3 for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator8-9 with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed.
Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal, and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s2 peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen.
Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. The pattern of underestimating the angle moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
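The correct pointing response in this task follows from vector addition of the two segments. A sketch for the horizontal plane, using the segment lengths and 90° turn described above (the frame conventions and sign choices are our own assumptions):

```python
import math

def homing_angle(seg1, seg2, turn_deg):
    """Angle (degrees) between the final heading and the direct path
    back to the origin, after two segments joined by a turn."""
    # End position: first segment along +x, then turn and travel seg2.
    x = seg1 + seg2 * math.cos(math.radians(turn_deg))
    y = seg2 * math.sin(math.radians(turn_deg))
    # Direction back to the origin, expressed relative to the final heading.
    back = math.degrees(math.atan2(-y, -x)) - turn_deg
    return (back + 180) % 360 - 180  # wrap into (-180, 180]

# 0.4 m then 1 m with a 90 degree turn, as in the trajectories described.
angle = homing_angle(0.4, 1.0, 90)
```

Systematic over- or underestimation of the turn angle shows up as a constant offset between observers' pointed directions and this geometrically correct angle.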
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2.
Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter5,9,10, and may therefore differ from the variability in natural categories, or be optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
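One common way to "measure the available information" in a stimulus set is to estimate how much a stimulus feature tells you about category membership, for example via mutual information. The sketch below is a minimal, hypothetical illustration of that idea (the function name and sample data are assumptions, not the authors' method):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X; Y) in bits from a list of (feature, category)
    samples -- a toy stand-in for quantifying how informative a
    stimulus feature is about category membership."""
    n = len(pairs)
    pxy = Counter(pairs)                      # joint counts
    px = Counter(x for x, _ in pairs)         # feature marginal
    py = Counter(y for _, y in pairs)         # category marginal
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Hypothetical data: a binary shape feature that perfectly predicts
# which of two categories an object belongs to carries 1 bit.
samples = [(0, 'A'), (0, 'A'), (1, 'B'), (1, 'B')]
print(mutual_information(samples))  # 1.0
```

A feature statistically independent of category would yield 0 bits, so such a measure lets the difficulty of a recognition task be stated, and tuned, in quantitative terms.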
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
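The core of VP, deriving a category as the descendants of a common ancestor under random mutation and selection, can be illustrated with a toy example. Here each "object" is reduced to a vector of shape parameters, and the function names, parameter values, and fitness rule are illustrative assumptions, not the published algorithm:

```python
import random

def mutate(shape, sigma=0.1, rng=random):
    """Perturb each shape parameter slightly (a toy stand-in for
    mutating an embryo's growth parameters)."""
    return [g + rng.gauss(0.0, sigma) for g in shape]

def virtual_phylogenesis(ancestor, generations=5, brood=8, survivors=2,
                         seed=0):
    """Toy VP: repeatedly mutate and select, so that the surviving
    lineage's descendants form a 'category' whose within-category
    variation arises from the process itself rather than from
    experimenter-imposed rules."""
    rng = random.Random(seed)
    population = [ancestor]
    for _ in range(generations):
        offspring = [mutate(p, rng=rng)
                     for p in population for _ in range(brood)]
        # Illustrative selection rule: prefer larger overall size.
        offspring.sort(key=sum, reverse=True)
        population = offspring[:survivors]
    return population

category = virtual_phylogenesis(ancestor=[1.0, 1.0, 1.0])
print(len(category))  # 2 surviving members define the category
```

Because the shape variation is generated by the mutate-and-select loop itself, it is not hand-designed by the experimenter, which is the property the VP method is designed to provide.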
We also describe some implementations of these computational algorithms to help illustrate their potential utility. It is important to distinguish the algorithms from their implementations: the implementations are offered solely as a proof of principle of the underlying algorithms, and, in general, an implementation of a computational algorithm has limitations that the algorithm itself does not.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Low-stress Route Learning Using the Lashley III Maze in Mice
Institutions: Pennsylvania State University, University of California, Los Angeles.
Many behavior tests designed to assess learning and memory in rodents, particularly mice, rely on visual cues, food and/or water deprivation, or other aversive stimuli to motivate task acquisition. As animals age, sensory modalities deteriorate. For example, many strains of mice develop hearing deficits or cataracts. Changes in the sensory systems required to guide mice during task acquisition present potential confounds in interpreting learning changes in aging animals. Moreover, the use of aversive stimuli to motivate animals to learn tasks is potentially confounding when comparing mice with differential sensitivities to stress. To minimize these types of confounding effects, we have implemented a modified version of the Lashley III maze. This maze relies on route learning, whereby mice learn to navigate the maze via repeated exposure under low-stress conditions, e.g., dark phase, no food/water deprivation, until they navigate a path from the start location to a pseudo-home cage with 0 or 1 error(s) on two consecutive trials. We classify this as a low-stress behavior test because it does not rely on aversive stimuli to encourage exploration of the maze and learning of the task. The apparatus consists of a modular start box, a 4-arm maze body, and a goal box. At the end of the goal box is a pseudo-home cage that contains bedding similar to that found in the animal's home cage and is specific to each animal for the duration of maze testing. It has been demonstrated previously that this pseudo-home cage provides sufficient reward to motivate mice to learn to navigate the maze1. Here, we present the visualization of the Lashley III maze procedure in the context of evaluating age-related differences in learning and memory in mice, along with a comparison of learning behavior in two different background strains of mice. We hope that other investigators interested in evaluating the effects of aging or stress vulnerability in mice will consider this maze an attractive alternative to behavioral tests that involve more stressful learning tasks and/or visual cues.
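The acquisition criterion described above (0 or 1 errors on two consecutive trials) is easy to mis-score by hand; a minimal sketch of the check, using hypothetical per-trial error counts, is:

```python
def reached_criterion(errors_per_trial, max_errors=1, run_length=2):
    """Return the 1-based trial on which the mouse meets criterion
    (<= max_errors on run_length consecutive trials), or None if the
    criterion is never met in the session."""
    streak = 0
    for trial, errors in enumerate(errors_per_trial, start=1):
        streak = streak + 1 if errors <= max_errors else 0
        if streak == run_length:
            return trial
    return None

# Hypothetical session in which errors decline across trials:
print(reached_criterion([6, 4, 3, 1, 2, 1, 0]))  # criterion met on trial 7
```

Note that a single high-error trial resets the streak, so trials 4 and 6 in the example do not pair up across the intervening error.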
Neuroscience, Issue 39, mouse, behavior testing, learning, memory, neuroscience, phenotyping, aging
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopical data obtained from a living animal's brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation, or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Institutions: Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data.
In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, the cues that form the environmental configuration include not only visual elements, but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and the associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent. Virtual reality environments address these issues by providing the complexity of the real world while allowing experimenters to constrain fear conditioning and extinction parameters, yielding empirical data that can suggest better treatment options and/or test mechanistic hypotheses.
In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the CS in order to further evoke orienting responses and a feeling of "presence" in subjects2. Skin conductance response served as the dependent measure of fear acquisition, memory retention, and extinction.
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning