When do objects become landmarks? A VR study of the effect of task relevance on spatial memory.
We investigated how objects come to serve as landmarks in spatial memory, and more specifically how they form part of an allocentric cognitive map. Participants performing a virtual driving task incidentally learned the layout of a virtual town and the locations of objects in that town. They were subsequently tested on their spatial and recognition memory for the objects. To assess whether the objects were encoded allocentrically, we examined pointing consistency across tested viewpoints. In three experiments, we found that spatial memory for objects at navigationally relevant locations was more consistent across tested viewpoints, particularly when participants had more limited experience of the environment. When participants' attention was focused on the appearance of objects, the navigational relevance effect was eliminated, whereas when their attention was focused on objects' locations, this effect was enhanced. This supports the hypothesis that when objects are processed in the service of navigation, rather than merely being viewed as objects, they engage qualitatively distinct attentional systems and are incorporated into an allocentric spatial representation. The results are consistent with evidence from the neuroimaging literature that when objects are relevant to navigation, they engage not only the ventral "object processing stream", but also the dorsal stream and medial temporal lobe memory system classically associated with allocentric spatial memory.
Authors: Kelly A. Bennion, Katherine R. Mickley Steinmetz, Elizabeth A. Kensinger, Jessica D. Payne.
Published: 06-18-2014
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, little is currently known about how these two factors interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants' eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening to control for time-of-day effects. Together, the results showed a direct relation between resting cortisol at encoding and subsequent memory, but only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, the combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.
Assessment of Age-related Changes in Cognitive Functions Using EmoCogMeter, a Novel Tablet-computer Based Approach
Authors: Philipp Fuge, Simone Grimm, Anne Weigand, Yan Fan, Matti Gärtner, Melanie Feeser, Malek Bajbouj.
Institutions: Freie Universität Berlin, Charité Berlin, Psychiatric University Hospital Zurich.
The main goal of this study was to assess the usability of a tablet-computer-based application (EmoCogMeter) in investigating the effects of age on cognitive functions across the lifespan in a sample of 378 healthy subjects (age range 18-89 years). Consistent with previous findings we found an age-related cognitive decline across a wide range of neuropsychological domains (memory, attention, executive functions), thereby proving the usability of our tablet-based application. Regardless of prior computer experience, subjects of all age groups were able to perform the tasks without instruction or feedback from an experimenter. Increased motivation and compliance proved to be beneficial for task performance, thereby potentially increasing the validity of the results. Our promising findings underline the great clinical and practical potential of a tablet-based application for detection and monitoring of cognitive dysfunction.
Behavior, Issue 84, Neuropsychological Testing, cognitive decline, age, tablet-computer, memory, attention, executive functions
A Video Demonstration of Preserved Piloting by Scent Tracking but Impaired Dead Reckoning After Fimbria-Fornix Lesions in the Rat
Authors: Ian Q. Whishaw, Boguslaw P. Gorny.
Institutions: Canadian Centre for Behavioural Neuroscience, University of Lethbridge.
Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O'Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landau et al., 1984; Etienne, 1987; Gallistel, 1990; Maurer and Séguinot, 1995). Piloting requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, possibly muscle and joint receptors, and efference copy of the commands that generate movement. An animal may also use the flows of visual, auditory, and olfactory stimuli caused by its movements. Using a piloting strategy, an animal can use geometrical calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance." Although there is considerable evidence that the hippocampus is involved in piloting (O'Keefe and Nadel, 1978; O'Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O'Mara et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however.
Ordinarily, it is difficult to be certain that an animal is using a piloting versus a dead reckoning strategy, because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus to the use of these strategies. The rats were trained in a new task in which they followed linear or polygon scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were the dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested while blindfolded or under infrared light, a spectral wavelength in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food. To examine the relative contribution of the hippocampus, fimbria-fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
Neuroscience, Issue 26, Dead reckoning, fimbria-fornix, hippocampus, odor tracking, path integration, spatial learning, spatial navigation, piloting, rat, Canadian Centre for Behavioural Neuroscience
Barnes Maze Testing Strategies with Small and Large Rodent Models
Authors: Cheryl S. Rosenfeld, Sherry A. Ferguson.
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). The type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time-consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups.
Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or by drug/toxicant exposure.
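The tracking-derived endpoints described above (latency, path length, average velocity) are simple functions of the time-stamped coordinates that tracking software exports. As a minimal sketch, assuming a hypothetical `barnes_endpoints` helper and an illustrative arrival radius (not values from any specific tracking package):

```python
import math

def barnes_endpoints(track, escape_xy, radius=5.0):
    """Compute basic Barnes-maze endpoints from a tracked path.

    track: list of (t, x, y) samples exported by tracking software.
    escape_xy: (x, y) of the escape hole; radius: arrival threshold
    in the same units as the coordinates (both illustrative).
    """
    distance = 0.0
    latency = None
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        distance += math.hypot(x1 - x0, y1 - y0)  # path length
        # latency = time of first arrival within `radius` of the escape hole
        if latency is None and math.hypot(x1 - escape_xy[0], y1 - escape_xy[1]) <= radius:
            latency = t1
    duration = track[-1][0] - track[0][0]
    velocity = distance / duration if duration > 0 else 0.0
    return {"latency": latency, "distance": distance, "velocity": velocity}
```

For example, a straight 20-unit run toward the escape hole yields a path length of 20 and a latency equal to the timestamp of the first sample inside the arrival radius.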
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Driving Simulation in the Clinic: Testing Visual Exploratory Behavior in Daily Life Activities in Patients with Visual Field Defects
Authors: Johanna Hamel, Antje Kraft, Sven Ohl, Sophie De Beukelaer, Heinrich J. Audebert, Stephan A. Brandt.
Institutions: Universitätsmedizin Charité, Humboldt Universität zu Berlin.
Patients suffering from homonymous hemianopia after infarction of the posterior cerebral artery (PCA) report different degrees of constraint in daily life, despite similar visual deficits. We assume this could be due to variable development of compensatory strategies such as altered visual scanning behavior. Scanning compensatory therapy (SCT) is studied as part of visual training after infarction, alongside vision restoration therapy. SCT consists of learning to make larger eye movements into the blind field, enlarging the visual field of search, which has been proven to be the most useful strategy1, not only in natural search tasks but also in mastering daily life activities2. Nevertheless, in clinical routine it is difficult to identify individual levels and training effects of compensatory behavior, since this requires measurement of eye movements in a head-unrestrained condition. Studies have demonstrated that unrestrained head movements alter visual exploratory behavior compared to a head-restrained laboratory condition3. Martin et al.4 and Hayhoe et al.5 showed that behavior demonstrated in a laboratory setting cannot be assigned easily to a natural condition. Hence, our goal was to develop a study set-up that quickly uncovers different compensatory oculomotor strategies in a realistic testing situation: patients are tested in the clinical environment in a driving simulator. SILAB software (Wuerzburg Institute for Traffic Sciences GmbH (WIVW)) was used to program driving scenarios of varying complexity and to record the driver's performance. The software was combined with a head-mounted infrared video pupil tracker recording head- and eye-movements (EyeSeeCam, University of Munich Hospital, Clinical Neurosciences). The positioning of the patient in the driving simulator and the positioning, adjustment, and calibration of the camera are demonstrated.
Typical performances of a patient with, and a patient without, a compensatory strategy, and of a healthy control, are illustrated in this pilot study. Different oculomotor behaviors (frequency and amplitude of eye- and head-movements) are evaluated very quickly during the drive itself by dynamic overlay pictures indicating where the subject's gaze is located on the screen, and by analyzing the data. Compensatory gaze behavior in a patient leads to a driving performance comparable to that of a healthy control, while the performance of a patient without compensatory behavior is significantly worse. The data on eye- and head-movement behavior as well as driving performance are discussed with respect to different oculomotor strategies, and in a broader context with respect to possible training effects throughout the testing session and implications for rehabilitation potential.
Medicine, Issue 67, Neuroscience, Physiology, Anatomy, Ophthalmology, compensatory oculomotor behavior, driving simulation, eye movements, homonymous hemianopia, stroke, visual field defects, visual field enlargement
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g., "Tell me what you remember about the person you saw at the bank yesterday."); however, such reports can often be unreliable or sensitive to response bias1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure the accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e., fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g., button press).
Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
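The conversion of raw X,Y gaze samples into fixations and saccades described above is commonly implemented with a point-to-point velocity threshold. A minimal sketch, with an illustrative threshold and sampling rate (real systems also enforce minimum fixation durations and merge nearby fixations):

```python
import math

def classify_samples(samples, rate_hz=500.0, threshold=30.0):
    """Label each gaze sample 'fixation' or 'saccade' by point-to-point
    velocity. samples: (x, y) gaze positions in degrees of visual angle;
    rate_hz: sampling rate; threshold: velocity cutoff in deg/s.
    Both numeric defaults are illustrative, not from any specific tracker."""
    labels = ["fixation"]  # the first sample has no preceding velocity
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) * rate_hz  # deg/s
        labels.append("saccade" if velocity > threshold else "fixation")
    return labels
```

Runs of consecutive "fixation" labels can then be grouped into discrete fixations and time-locked to display onsets or button presses, as described above.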
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding which segmentation approach to take. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded stained samples, and focused ion beam (FIB-SEM) and serial block face (SBF-SEM) scanning electron microscopy of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) to demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) to contrast the effects of the proposed technique with those of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of the inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity, compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia estimates on study outcomes. For stance, either of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
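The oscillation technique commonly treats the suspended segment as a compound pendulum: from the oscillation period T, mass m, and pivot-to-center-of-mass distance d, the moment of inertia about the pivot is I = mgdT²/(4π²), and the parallel-axis theorem then gives the value about the center of mass. A sketch under those standard assumptions (the numeric inputs below are illustrative, not values from the study):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def moment_of_inertia(mass, d, period):
    """Estimate moments of inertia of a segment swung as a compound pendulum.

    mass: segment mass (kg); d: pivot-to-center-of-mass distance (m);
    period: small-amplitude oscillation period (s).
    Returns (I about the pivot, I about the center of mass).
    """
    i_pivot = mass * G * d * period**2 / (4 * math.pi**2)
    i_com = i_pivot - mass * d**2  # parallel-axis theorem
    return i_pivot, i_com
```

The center-of-mass value is what an inverse dynamics model of the lower extremity would consume in place of intact-limb estimates.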
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
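The source reconstruction referred to above (minimum-norm estimation) reduces, at its core, to a regularized linear inverse of the head model's leadfield matrix. A minimal sketch, assuming a known leadfield and an illustrative regularization constant (real pipelines add noise-covariance weighting and depth compensation):

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=1e-2):
    """Minimum-norm source estimate: J = L^T (L L^T + lam*I)^(-1) y.

    leadfield: (n_channels, n_sources) forward model derived from the
    head model; eeg: (n_channels,) sensor data at one time point;
    lam: regularization constant (illustrative).
    """
    L = np.asarray(leadfield, dtype=float)
    gram = L @ L.T + lam * np.eye(L.shape[0])  # regularized channel Gram matrix
    return L.T @ np.linalg.solve(gram, np.asarray(eeg, dtype=float))
```

Repeating this per time sample yields source time courses, which is why the quality of the head model (adult template vs. individual or age-specific MRI) directly shapes the leadfield and hence the estimate.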
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Use of an Eight-arm Radial Water Maze to Assess Working and Reference Memory Following Neonatal Brain Injury
Authors: Stephanie C. Penley, Cynthia M. Gaudet, Steven W. Threlkeld.
Institutions: Rhode Island College, Rhode Island College.
Working and reference memory are commonly assessed using the land-based radial arm maze. However, this paradigm requires pretraining, food deprivation, and may introduce scent cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms, and remedies the limitations observed in land-based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and thus the escape platform has been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.
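The error-scoring rules in this maze lend themselves to a simple per-session tally. As a rough sketch (not part of the published protocol; arm identifiers and function names are illustrative), one could classify each arm entry like this:

```python
def score_entries(entries, reference_arms):
    """Classify one session's ordered sequence of arm entries.

    entries: ordered list of arm IDs the rat entered.
    reference_arms: arms that never contain an escape platform.
    A platform is removed once its arm has been used for escape, so
    re-entering any previously visited arm is a working-memory error;
    the *first* entry into a reference arm is a reference-memory error.
    """
    working_errors = 0
    reference_errors = 0
    visited = set()  # arms already entered this session
    for arm in entries:
        if arm in visited:
            working_errors += 1       # re-entry: platform gone or known dead end
        elif arm in reference_arms:
            reference_errors += 1     # first entry into a never-baited arm
        visited.add(arm)
    return working_errors, reference_errors
```

For example, the entry sequence 1, 5, 1, 2, 2 with arm 5 as a reference arm yields two working-memory errors (the re-entries into arms 1 and 2) and one reference-memory error (the first entry into arm 5).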
Behavior, Issue 82, working memory, reference memory, hypoxia-ischemia, radial arm maze, water maze
Extracting Visual Evoked Potentials from EEG Data Recorded During fMRI-guided Transcranial Magnetic Stimulation
Authors: Boaz Sadeh, Galit Yovel.
Institutions: Tel-Aviv University, Tel-Aviv University.
Transcranial Magnetic Stimulation (TMS) is an effective method for establishing a causal link between a cortical area and cognitive/neurophysiological effects. Specifically, by creating a transient interference with the normal activity of a target region and measuring changes in an electrophysiological signal, we can establish a causal link between the stimulated brain area or network and the electrophysiological signal that we record. If target brain areas are functionally defined with a prior fMRI scan, TMS can be used to link the fMRI activations with the recorded evoked potentials. However, conducting such experiments presents significant technical challenges, given the high-amplitude artifacts introduced into the EEG signal by the magnetic pulse and the difficulty of accurately targeting areas that were functionally defined by fMRI. Here we describe a methodology for combining these three common tools: TMS, EEG, and fMRI. We explain how to guide the stimulator's coil to the desired target area using anatomical or functional MRI data, how to record EEG during concurrent TMS, how to design an ERP study suitable for combined EEG-TMS, and how to extract reliable ERPs from the recorded data. We provide representative results from a previously published study, in which fMRI-guided TMS was used concurrently with EEG to show that the face-selective N1 and the body-selective N1 components of the ERP are associated with distinct neural networks in extrastriate cortex. This method allows us to combine the high spatial resolution of fMRI with the high temporal resolution of TMS and EEG, and thereby obtain a more comprehensive understanding of the neural basis of various cognitive processes.
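One common way to handle the pulse artifact mentioned above is to excise a short window around each pulse and interpolate across it before further ERP processing. As a minimal sketch (the window widths are assumptions to be tuned to the actual artifact duration, not values from this protocol):

```python
import numpy as np

def interpolate_tms_artifact(channel, pulse_idx, pre=2, post=10):
    """Replace the samples around a TMS pulse with a linear interpolation.

    channel: 1-D array of one EEG channel's samples.
    pulse_idx: sample index at which the TMS pulse occurred.
    pre/post: samples discarded before/after the pulse.
    """
    cleaned = channel.astype(float).copy()
    start, stop = pulse_idx - pre, pulse_idx + post
    # Straight line from the last clean sample before the window to the
    # first clean sample after it; the two anchor samples are kept as-is.
    line = np.linspace(cleaned[start - 1], cleaned[stop], stop - start + 2)
    cleaned[start:stop] = line[1:-1]
    return cleaned
```

This only removes the large transient; residual slow artifacts are typically handled later by baseline correction and artifact rejection.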
Neuroscience, Issue 87, Transcranial Magnetic Stimulation, Neuroimaging, Neuronavigation, Visual Perception, Evoked Potentials, Electroencephalography, Event-related potential, fMRI, Combined Neuroimaging Methods, Face perception, Body Perception
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Authors: Noah S. Philip, S. Louisa Carpenter, Lawrence H. Sweet.
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD). Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions. Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of nontarget items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise they would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine if participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns anywhere from three items before the target position to three items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
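The trial structure described above (an 8-item stream at a fixed SOA, followed by probes of the target's temporal neighbors plus a control item) can be sketched as a small trial builder. This is an illustration of the design logic, not the authors' actual stimulus code; field names are assumptions:

```python
import random

def build_rsvp_trial(words, soa_ms=120, target_pos=4):
    """Lay out one 8-noun RSVP stream and choose post-sequence memory probes.

    Probes are the nontargets immediately before and after the target,
    plus one random control item drawn from elsewhere in the stream.
    """
    assert len(words) == 8 and 1 <= target_pos <= 6
    stream = [{"word": w, "onset_ms": i * soa_ms} for i, w in enumerate(words)]
    probes = [target_pos - 1, target_pos + 1]              # target's neighbors
    probes.append(random.choice(
        [i for i in range(8) if abs(i - target_pos) > 1])) # control item
    return stream, probes
```

Comparing memory for the two neighbor probes against the control probe is what reveals suppression of nontargets around the target.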
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Human Fear Conditioning Conducted in Full Immersion 3-Dimensional Virtual Reality
Authors: Nicole C. Huff, David J. Zielinski, Matthew E. Fecteau, Rachael Brady, Kevin S. LaBar.
Institutions: Duke University, Duke University.
Fear conditioning is a widely used paradigm in non-human animal research to investigate the neural mechanisms underlying fear and anxiety. A major challenge in conducting conditioning studies in humans is the ability to strongly manipulate or simulate the environmental contexts that are associated with conditioned emotional behaviors. In this regard, virtual reality (VR) technology is a promising tool. Yet, adapting this technology to meet experimental constraints requires special accommodations. Here we address the methodological issues involved when conducting fear conditioning in a fully immersive 6-sided VR environment and present fear conditioning data. In the real world, traumatic events occur in complex environments that are made up of many cues, engaging all of our sensory modalities. For example, cues that form the environmental configuration include not only visual elements, but also aural, olfactory, and even tactile ones. In rodent studies of fear conditioning, animals are fully immersed in a context that is rich with novel visual, tactile and olfactory cues. However, standard laboratory tests of fear conditioning in humans are typically conducted in a nondescript room in front of a flat or 2D computer screen and do not replicate the complexity of real world experiences. On the other hand, a major limitation of clinical studies aimed at reducing (extinguishing) fear and preventing relapse in anxiety disorders is that treatment occurs after participants have acquired a fear in an uncontrolled and largely unknown context. Thus the experimenters are left without information about the duration of exposure, the true nature of the stimulus, and associated background cues in the environment1. In the absence of this information it can be difficult to truly extinguish a fear that is both cue- and context-dependent.
Virtual reality environments address these issues by providing the complexity of the real world, while at the same time allowing experimenters to constrain fear conditioning and extinction parameters to yield empirical data that can suggest better treatment options and/or test mechanistic hypotheses. In order to test the hypothesis that fear conditioning may be richly encoded and context-specific when conducted in a fully immersive environment, we developed distinct virtual reality 3-D contexts in which participants experienced fear conditioning to virtual snakes or spiders. Auditory cues co-occurred with the conditioned stimulus (CS) in order to further evoke orienting responses and a feeling of "presence" in subjects2. Skin conductance response served as the dependent measure of fear acquisition, memory retention and extinction.
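Skin conductance responses such as those used here are often reduced to a single amplitude per trial, and conditioning is then quantified as the CS+ minus CS- difference. A minimal sketch of that common scoring scheme (the baseline window and function names are assumptions, not details of this study):

```python
def scr_amplitude(trace, baseline_n=10):
    """Peak skin-conductance deflection relative to a pre-stimulus baseline.

    trace: one trial's conductance samples; the first baseline_n samples
    are treated as the pre-stimulus baseline. Negative deflections score 0.
    """
    baseline = sum(trace[:baseline_n]) / baseline_n
    return max(max(trace[baseline_n:]) - baseline, 0.0)

def differential_conditioning(cs_plus_traces, cs_minus_traces):
    """Mean CS+ amplitude minus mean CS- amplitude.

    A reliably positive value indicates that fear was acquired to the CS+.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([scr_amplitude(t) for t in cs_plus_traces])
            - mean([scr_amplitude(t) for t in cs_minus_traces]))
```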
JoVE Neuroscience, Issue 42, fear conditioning, virtual reality, human memory, skin conductance response, context learning
Quantifying Cognitive Decrements Caused by Cranial Radiotherapy
Authors: Lori-Ann Christie, Munjal M. Acharya, Charles L. Limoli.
Institutions: University of California, Irvine.
Aside from survival itself, cognitive impairment stemming from the clinical management of cancer is a major factor dictating therapeutic outcome. For many patients afflicted with CNS and non-CNS malignancies, radiotherapy and chemotherapy offer the best options for disease control. These treatments however come at a cost, and nearly all cancer survivors (~11 million in the US alone as of 2006) incur some risk for developing cognitive dysfunction, with the most severe cases found in patients subjected to cranial radiotherapy (~200,000/yr) for the control of primary and metastatic brain tumors1. Particularly problematic are pediatric cases, whose long-term survival is often plagued by marked cognitive decrements that impose significant socioeconomic burdens2. To date, there are still no satisfactory solutions to this significant clinical problem. We have addressed this serious health concern using transplanted stem cells to combat radiation-induced cognitive decline in athymic rats subjected to cranial irradiation3. Details of the stereotaxic irradiation and the in vitro culturing and transplantation of human neural stem cells (hNSCs) can be found in our companion paper (Acharya et al., JoVE reference). Following irradiation and transplantation surgery, rats are then assessed for changes in cognition, grafted cell survival and expression of differentiation-specific markers 1 and 4 months after irradiation. To critically evaluate the success or failure of any potential intervention designed to ameliorate radiation-induced cognitive sequelae, a rigorous series of quantitative cognitive tasks must be performed. To accomplish this, we subject our animals to a suite of cognitive testing paradigms including novel place recognition, water maze, elevated plus maze and fear conditioning, in order to quantify hippocampal and non-hippocampal learning and memory.
We have demonstrated the utility of these tests for quantifying specific types of cognitive decrements in irradiated animals, and used them to show that animals engrafted with hNSCs exhibit significant improvements in cognitive function3. The cognitive benefits derived from engrafted human stem cells suggest that similar strategies may one day provide much needed clinical recourse to cancer survivors suffering from impaired cognition. Accordingly, we have provided written and visual documentation of the critical steps used in our cognitive testing paradigms to facilitate the translation of our promising results into the clinic.
Medicine, Issue 56, neuroscience, radiotherapy, cognitive dysfunction, hippocampus, novel place recognition, elevated plus maze, fear conditioning, water maze
Brain Imaging Investigation of the Impairing Effect of Emotion on Cognition
Authors: Gloria Wong, Sanda Dolcos, Ekaterina Denkova, Rajendra Morey, Lihong Wang, Gregory McCarthy, Florin Dolcos.
Institutions: University of Alberta, University of Alberta, University of Illinois, Duke University, Duke University, VA Medical Center, Yale University, University of Illinois, University of Illinois.
Emotions can impact cognition by exerting both enhancing (e.g., better memory for emotional events) and impairing (e.g., increased emotional distractibility) effects (reviewed in 1). Complementing our recent protocol 2 describing a method that allows investigation of the neural correlates of the memory-enhancing effect of emotion (see also 1, 3-5), here we present a protocol that allows investigation of the neural correlates of the detrimental impact of emotion on cognition. The main feature of this method is that it allows identification of reciprocal modulations between activity in a ventral neural system, involved in 'hot' emotion processing (HotEmo system), and a dorsal system, involved in higher-level 'cold' cognitive/executive processing (ColdEx system), which are linked to cognitive performance and to individual variations in behavior (reviewed in 1). Since its initial introduction 6, this design has proven particularly versatile and influential in the elucidation of various aspects concerning the neural correlates of the detrimental impact of emotional distraction on cognition, with a focus on working memory (WM), and of coping with such distraction 7,11, in both healthy 8-11 and clinical participants 12-14.
Neuroscience, Issue 60, Emotion-Cognition Interaction, Cognitive/Emotional Interference, Task-Irrelevant Distraction, Neuroimaging, fMRI, MRI
High Density Event-related Potential Data Acquisition in Cognitive Neuroscience
Authors: Scott D. Slotnick.
Institutions: Boston College.
Functional magnetic resonance imaging (fMRI) is currently the standard method of evaluating brain function in the field of Cognitive Neuroscience, in part because fMRI data acquisition and analysis techniques are readily available. Because fMRI has excellent spatial resolution but poor temporal resolution, this method can only be used to identify the spatial location of brain activity associated with a given cognitive process (and reveals virtually nothing about the time course of brain activity). By contrast, event-related potential (ERP) recording, a method that is used much less frequently than fMRI, has excellent temporal resolution and thus can track rapid temporal modulations in neural activity. Unfortunately, ERPs are underutilized in Cognitive Neuroscience because data acquisition techniques are not readily available and low density ERP recording has poor spatial resolution. In an effort to foster the increased use of ERPs in Cognitive Neuroscience, the present article details key techniques involved in high density ERP data acquisition. Critically, high density ERPs offer the promise of excellent temporal resolution and good spatial resolution (or excellent spatial resolution if coupled with fMRI), which is necessary to capture the spatiotemporal dynamics of human brain function.
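Once high-density data are acquired, the core of ERP analysis is the same at any channel count: epoch the continuous recording around each event, baseline-correct, and average. A minimal sketch of that step (array shapes and window lengths are illustrative assumptions):

```python
import numpy as np

def average_erp(eeg, event_samples, pre=50, post=200):
    """Average baseline-corrected epochs around event markers into an ERP.

    eeg: (channels, samples) continuous recording.
    event_samples: stimulus-onset sample indices.
    Each epoch spans [onset - pre, onset + post) and is baseline-corrected
    per channel by subtracting the mean of its pre-stimulus interval.
    Returns a (channels, pre + post) ERP.
    """
    epochs = []
    for s in event_samples:
        ep = eeg[:, s - pre:s + post].astype(float)
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)
```

In practice, artifact-contaminated epochs (blinks, movement) are rejected before this average is taken.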
Neuroscience, Issue 38, ERP, electrodes, methods, setup
Brain Imaging Investigation of the Memory-Enhancing Effect of Emotion
Authors: Andrea Shafer, Alexandru Iordan, Roberto Cabeza, Florin Dolcos.
Institutions: University of Alberta, University of Illinois, Urbana-Champaign, Duke University, University of Illinois, Urbana-Champaign.
Emotional events tend to be better remembered than non-emotional events1,2. One goal of cognitive and affective neuroscientists is to understand the neural mechanisms underlying this enhancing effect of emotion on memory. A method that has proven particularly influential in the investigation of the memory-enhancing effect of emotion is the so-called subsequent memory paradigm (SMP). This method was originally used to investigate the neural correlates of non-emotional memories3, and more recently we and others also applied it successfully to studies of emotional memory (reviewed in4, 5-7). Here, we describe a protocol that allows investigation of the neural correlates of the memory-enhancing effect of emotion using the SMP in conjunction with event-related functional magnetic resonance imaging (fMRI). An important feature of the SMP is that it allows separation of brain activity specifically associated with memory from more general activity associated with perception. Moreover, in the context of investigating the impact of emotional stimuli, the SMP allows identification of brain regions whose activity is susceptible to emotional modulation of both general/perceptual and memory-specific processing. This protocol can be used in healthy subjects8-15, as well as in clinical populations with alterations in the neural correlates of emotion perception and biases in remembering emotional events, such as patients suffering from depression and post-traumatic stress disorder (PTSD)16, 17.
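The defining step of the SMP is sorting encoding trials by whether the item was later remembered, separately for emotional and neutral stimuli; the fMRI contrast between the resulting bins is the "Dm effect" noted in the keywords. A schematic sketch of that sorting step (data structures are illustrative, not the authors' code):

```python
def subsequent_memory_bins(encoding_trials, recognition):
    """Sort encoding trials by emotion category and subsequent memory.

    encoding_trials: list of (item, emotion_label) pairs shown at study.
    recognition: dict mapping item -> True if later recognized.
    Returns bins keyed by (emotion_label, 'remembered'|'forgotten'); the
    remembered-minus-forgotten activity contrast within each emotion
    category is then computed on the corresponding fMRI trials.
    """
    bins = {}
    for item, emotion in encoding_trials:
        key = (emotion, "remembered" if recognition[item] else "forgotten")
        bins.setdefault(key, []).append(item)
    return bins
```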
Neuroscience, Issue 51, Affect, Recognition, Recollection, Dm Effect, Neuroimaging
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.
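Abstract-to-video matching of this kind is, in spirit, a text-similarity search. As a generic illustration only (this is not JoVE's actual algorithm), a toy TF-IDF matcher over word counts might look like:

```python
import math
import re
from collections import Counter

def tfidf_vectors(docs):
    """Toy TF-IDF: one sparse vector (dict) per document.

    Words appearing in every document get weight zero; rarer words
    weigh more, so matches are driven by topic-specific vocabulary.
    """
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))  # document frequency
    n = len(docs)
    return [{w: c * math.log(n / df[w]) for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values())) or 1.0
    return dot / (norm(a) * norm(b))
```

Ranking the library's videos by cosine similarity to an abstract and keeping the top 10 to 30 would reproduce the behavior described above, including the failure mode: when no video shares topic-specific vocabulary with an abstract, the best available matches are only loosely related.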