JoVE Visualize
Pubmed Article
The contribution of head movement to the externalization and internalization of sounds.
Published: 01-01-2013
When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: sounds whose spectrum and reverberation match those of free-field signals arriving at the ear canal are externalized more frequently. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. Because the effects of head movement have not been systematically disentangled from reverberation and spectral cues, we measured the degree to which movements contribute to externalization.
Related JoVE Video
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Published: 06-14-2014
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
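As a rough illustration of the MEP measurement described above, the following minimal Python sketch scores a single lip-EMG epoch as its peak-to-peak amplitude inside a post-pulse latency window. The sampling rate, window limits, and the synthetic EMG trace are all assumptions for illustration, not values from the protocol.

```python
# Minimal sketch: peak-to-peak MEP amplitude from one EMG epoch.
# All numbers (sampling rate, window, amplitudes) are assumptions.
import numpy as np

fs = 5000                               # EMG sampling rate in Hz (assumed)
t = np.arange(-0.05, 0.1, 1 / fs)       # time in s relative to the TMS pulse
rng = np.random.default_rng(0)
emg = 20e-6 * rng.standard_normal(t.size)            # background EMG noise
mep = (t > 0.010) & (t < 0.030)                      # an MEP-like deflection
emg[mep] += 300e-6 * np.sin(2 * np.pi * 50 * t[mep])

window = (t >= 0.005) & (t <= 0.040)    # assumed lip-MEP latency window
mep_pp = emg[window].max() - emg[window].min()
print(f"MEP peak-to-peak amplitude: {mep_pp * 1e6:.0f} uV")
```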
21 Related JoVE Articles
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are stimulus properties that influence eye movements and need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
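To make these measures concrete, here is a minimal Python sketch that derives mean fixation duration, fixations per word, and regression count from a fixation sequence. The (word_index, duration_ms) record format is a hypothetical simplification of real eye-tracker output.

```python
# Sketch: common reading measures from a fixation sequence.
# Record format (word_index, duration_ms) is a hypothetical simplification.
from collections import defaultdict

fixations = [(0, 210), (1, 240), (2, 198), (1, 175), (3, 260), (4, 220)]

mean_fix_dur = sum(d for _, d in fixations) / len(fixations)

# A regression is a saccade returning to an earlier part of the text.
regressions = sum(
    1 for (w0, _), (w1, _) in zip(fixations, fixations[1:]) if w1 < w0
)

fix_per_word = defaultdict(int)
for word, _ in fixations:
    fix_per_word[word] += 1

print(f"mean fixation duration: {mean_fix_dur:.0f} ms")
print(f"regressions: {regressions}")
print(f"fixations per word: {dict(fix_per_word)}")
```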
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Authors: James J. Jun, André Longtin, Leonard Maler.
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long-term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
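A minimal sketch of the EOD timing idea: detect pulse times as threshold crossings in a voltage trace and derive the discharge rate. A single synthetic channel stands in for the multi-electrode, spatially averaged recording described above; the sampling rate, discharge rate, and threshold are assumptions.

```python
# Sketch: EOD pulse times via threshold crossing on a synthetic trace.
import numpy as np

fs = 40_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
eod_rate = 400.0                 # assumed discharge rate in Hz
trace = np.sin(2 * np.pi * eod_rate * t) + 0.05 * rng.standard_normal(t.size)

above = trace > 0.5
rising = np.flatnonzero(~above[:-1] & above[1:]) + 1   # rising edges
eod_times = t[rising]

rate = 1.0 / np.diff(eod_times).mean()
print(f"{eod_times.size} pulses detected, mean EOD rate {rate:.1f} Hz")
```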
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed at assisting insects that are beneficial to agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables such as latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
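As an illustration of the binary and continuous response measures mentioned above, this sketch summarizes a made-up PER conditioning dataset: the proportion of subjects extending the proboscis per trial, plus mean response latency.

```python
# Sketch: summarizing PER conditioning data. Rows = subjects,
# columns = trials; 1 = proboscis extension, 0 = no response (invented).
import numpy as np

responses = np.array([
    [0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
])
print("P(PER) per trial:", responses.mean(axis=0).round(2))

# A continuous measure for one subject: extension latency in seconds,
# NaN where no extension occurred.
latency = np.array([np.nan, np.nan, 2.1, 1.6, 1.2, 1.1])
print("mean latency (s):", round(float(np.nanmean(latency)), 2))
```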
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Detection of Fluorescent Nanoparticle Interactions with Primary Immune Cell Subpopulations by Flow Cytometry
Authors: Olimpia Gamucci, Alice Bertero, Maria Ada Malvindi, Stefania Sabella, Pier Paolo Pompa, Barbara Mazzolai, Giuseppe Bardi.
Institutions: Istituto Italiano di Tecnologia, University of Pisa.
Engineered nanoparticles are endowed with very promising properties for therapeutic and diagnostic purposes. This work describes a fast and reliable method of analysis by flow cytometry to study nanoparticle interaction with immune cells. Primary immune cells can be easily purified from human or mouse tissues by antibody-mediated magnetic isolation. In the first instance, the different cell populations running in a flow cytometer can be distinguished by the forward-scattered light (FSC), which is proportional to cell size, and the side-scattered light (SSC), related to cell internal complexity. Furthermore, fluorescently labeled antibodies against specific cell surface receptors permit the identification of several subpopulations within the same sample. Often, all these features vary when cells are boosted by external stimuli that change their physiological and morphological state. Here, 50 nm FITC-SiO2 nanoparticles are used as a model to identify the internalization of nanostructured materials in human blood immune cells. The increase in cell fluorescence and side-scattered light after incubation with nanoparticles allowed us to define the time and concentration dependence of the nanoparticle-cell interaction. Moreover, this protocol can be extended to investigate Rhodamine-SiO2 nanoparticle interaction with primary microglia, the resident immune cells of the central nervous system, isolated from mutant mice that specifically express Green Fluorescent Protein (GFP) in the monocyte/macrophage lineage. Finally, the flow cytometry data on nanoparticle internalization were confirmed by confocal microscopy.
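A sketch of the gating logic: define a fluorescence gate against an untreated control and report the fraction of FITC-positive events. The log-normal intensities are synthetic stand-ins for exported per-event flow-cytometry values, and the 99th-percentile gate is one common but assumed choice.

```python
# Sketch: percent FITC-positive cells, gated on an untreated control.
import numpy as np

rng = np.random.default_rng(1)
ctrl = rng.lognormal(mean=2.0, sigma=0.4, size=5000)     # untreated cells
treated = rng.lognormal(mean=3.2, sigma=0.6, size=5000)  # + FITC-SiO2

gate = np.percentile(ctrl, 99)          # assumed 99th-percentile gate
pct_positive = 100.0 * (treated > gate).mean()
print(f"FITC-positive: {pct_positive:.1f}% (gate at {gate:.1f} a.u.)")
```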
Immunology, Issue 85, Flow cytometry, blood leukocytes, microglia, Nanoparticles, internalization, Fluorescence, cell purification
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside, University of São Paulo - USP, ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect’s flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models that are very robust to over-fitting to be learned efficiently; and that a general classification framework makes it easy to incorporate an arbitrary number of features. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
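The following sketch illustrates the Bayesian idea in miniature: combine a class prior derived from an extrinsic feature (e.g., time of capture) with a wingbeat-frequency likelihood. The species names, frequency models, and priors are illustrative assumptions, not the authors' data.

```python
# Sketch: Bayesian insect classification from wingbeat frequency plus an
# extrinsic prior. All numbers are invented for illustration.
import math

# Class-conditional wingbeat-frequency models: (mean, std) in Hz.
species = {
    "Aedes aegypti (F)": (465.0, 40.0),
    "Culex tarsalis (F)": (355.0, 35.0),
}

def gaussian_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def classify(freq_hz, priors):
    # Posterior is proportional to prior times likelihood; normalize.
    post = {s: priors[s] * gaussian_pdf(freq_hz, *p) for s, p in species.items()}
    total = sum(post.values())
    return {s: v / total for s, v in post.items()}

# A dusk capture might favor one species a priori (assumed prior).
priors_at_dusk = {"Aedes aegypti (F)": 0.3, "Culex tarsalis (F)": 0.7}
print(classify(400.0, priors_at_dusk))
```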
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
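For orientation, the sketch below shows the linear-algebra core of one common inverse method, minimum-norm estimation (named in the keyword list). The random lead field stands in for one derived from an individual or age-appropriate head model, and the regularization value is an assumption.

```python
# Sketch: the minimum-norm estimate s_hat = L^T (L L^T + lam^2 I)^(-1) y.
# L (lead field) would come from a head model; here it is random.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))   # assumed lead field
y = rng.standard_normal(n_sensors)                # one EEG sample

lam = 0.1  # regularization; in practice set from the noise level
G = L @ L.T + lam**2 * np.eye(n_sensors)
s_hat = L.T @ np.linalg.solve(G, y)
print(s_hat.shape)  # estimated current at each cortical source
```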
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Authors: Mikhail Kislin, Ekaterina Mugantseva, Dmitry Molotkov, Natalia Kulesskaya, Stanislav Khirug, Ilya Kirilkin, Evgeny Pryazhnikov, Julia Kolikova, Dmytro Toptunov, Mikhail Yuryev, Rashid Giniatullin, Vootele Voikar, Claudio Rivera, Heikki Rauvala, Leonard Khiroug.
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopical data obtained from a living animal’s brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Authors: Daniel E. Bradford, Katherine P. Magruder, Rachel A. Korhumel, John J. Curtin.
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of distinct negative emotional states (fear, anxiety) for use in research on psychopathology, substance use/abuse, and affective science broadly. As such, it has been used extensively by clinical scientists interested in the etiology of psychopathology and by affective scientists interested in individual differences in emotion.
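The startle-potentiation arithmetic described above reduces to a difference of mean startle magnitudes between threat and no-threat cues, as in this sketch with invented EMG values.

```python
# Sketch: startle potentiation = mean(threat) - mean(no-threat).
# EMG startle magnitudes (arbitrary units) are invented illustrations.
import numpy as np

no_threat = np.array([48, 52, 50, 47, 51])
certain_threat = np.array([78, 85, 80, 82, 79])    # 100% shock cues
uncertain_threat = np.array([62, 66, 60, 65, 63])  # 20% shock cues

fear = certain_threat.mean() - no_threat.mean()
anxiety = uncertain_threat.mean() - no_threat.mean()
print(f"fear (certain threat):      {fear:+.1f}")
print(f"anxiety (uncertain threat): {anxiety:+.1f}")
```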
Behavior, Issue 91, Startle, electromyography, shock, addiction, uncertainty, fear, anxiety, humans, psychophysiology, translational
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
A Lightweight, Headphones-based System for Manipulating Auditory Feedback in Songbirds
Authors: Lukas A. Hoffmann, Conor W. Kelly, David A. Nicholson, Samuel J. Sober.
Institutions: Emory University.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity [1]. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements [2,3] and auditory feedback is used to modify speech production [4-7]. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output. Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity [8,9]. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior [9-12]. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors [13]. The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction [14]. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
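A worked example of the pitch-shift arithmetic: a perturbation expressed in cents scales the fundamental frequency by 2^(cents/1200). The baseline F0 below is an assumed value for illustration, not data from the paper.

```python
# Sketch: the arithmetic behind a pitch-shift perturbation.
def shift_pitch(f0_hz, cents):
    """Return f0 scaled by a shift given in cents (100 cents = 1 semitone)."""
    return f0_hz * 2.0 ** (cents / 1200.0)

baseline_f0 = 2500.0  # Hz, an assumed songbird syllable pitch
for cents in (-100, +100):
    print(f"{cents:+d} cents: {shift_pitch(baseline_f0, cents):.1f} Hz")
# Adaptive compensation would appear as the bird's produced F0 drifting
# opposite to the imposed shift over days of singing.
```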
Neuroscience, Issue 69, Anatomy, Physiology, Zoology, Behavior, Songbird, psychophysics, auditory feedback, biology, sensorimotor learning
A Video Demonstration of Preserved Piloting by Scent Tracking but Impaired Dead Reckoning After Fimbria-Fornix Lesions in the Rat
Authors: Ian Q. Whishaw, Boguslaw P. Gorny.
Institutions: Canadian Centre for Behavioural Neuroscience, University of Lethbridge.
Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O’Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landeau et al., 1984; Etienne, 1987; Gallistel, 1990; Maurer and Séguinot, 1995). Piloting requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, and possibly muscle and joint receptors, and efference copy of commands that generate movement. An animal may also use the flows of visual, auditory, and olfactory stimuli caused by its movements. Using a piloting strategy an animal can use geometrical calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance." Although there is considerable evidence that the hippocampus is involved in piloting (O’Keefe and Nadel, 1978; O’Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O’Mare et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however. Ordinarily, it is difficult to be certain that an animal is using a piloting versus a dead reckoning strategy because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus to the use of these strategies. The rats were trained in a new task in which they followed linear or polygon scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested when blindfolded or under infrared light, a spectral wavelength in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food. To examine the relative contribution of the hippocampus, fimbria–fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
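Computationally, dead reckoning as described above is running integration of self-movement vectors into a homing vector. A minimal sketch with made-up outbound steps:

```python
# Sketch: dead reckoning as vector integration of self-movement cues.
# Outbound steps are (heading_deg, distance) pairs; values are invented.
import math

steps = [(0, 50), (90, 30), (45, 20)]

x = sum(d * math.cos(math.radians(h)) for h, d in steps)
y = sum(d * math.sin(math.radians(h)) for h, d in steps)

# The homing vector points from the current position back to the refuge.
home_heading = math.degrees(math.atan2(-y, -x)) % 360
home_distance = math.hypot(x, y)
print(f"home bearing {home_heading:.1f} deg, distance {home_distance:.1f}")
```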
Neuroscience, Issue 26, Dead reckoning, fimbria-fornix, hippocampus, odor tracking, path integration, spatial learning, spatial navigation, piloting, rat, Canadian Centre for Behavioural Neuroscience
Using an EEG-Based Brain-Computer Interface for Virtual Cursor Movement with BCI2000
Authors: J. Adam Wilson, Gerwin Schalk, Léo M. Walton, Justin C. Williams.
Institutions: University of Wisconsin-Madison, New York State Dept. of Health.
A brain-computer interface (BCI) functions by translating a neural signal, such as the electroencephalogram (EEG), into a signal that can be used to control a computer or other device. The amplitudes of the EEG signals in selected frequency bins are measured and translated into a device command, in this case the horizontal and vertical velocity of a computer cursor. First, the EEG electrodes are applied to the user's scalp using a cap to record brain activity. Next, a calibration procedure is used to find the EEG electrodes and features that the user will learn to voluntarily modulate to use the BCI. In humans, the power in the mu (8-12 Hz) and beta (18-28 Hz) frequency bands decreases during a real or imagined movement. These changes can be detected in the EEG in real time and used to control a BCI ([1],[2]). Therefore, during a screening test, the user is asked to make several different imagined movements with their hands and feet to determine the unique EEG features that change with the imagined movements. The results from this calibration will show the best channels to use, which are configured so that amplitude changes in the mu and beta frequency bands move the cursor either horizontally or vertically. In this experiment, the general-purpose BCI system BCI2000 is used to control signal acquisition, signal processing, and feedback to the user [3].
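A minimal sketch of the translation step described above: estimate mu-band (8-12 Hz) amplitude from an EEG segment and map it linearly to a cursor velocity. The signal, gain, and baseline are assumptions; BCI2000 implements this pipeline in real time with calibrated parameters.

```python
# Sketch: mu-band amplitude -> cursor velocity (the translation step).
import numpy as np

fs = 256
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size        # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
mu_amp = amp[(freqs >= 8) & (freqs <= 12)].mean()  # mu-band amplitude

# Assumed linear translation: imagined movement lowers mu amplitude,
# so a negative gain maps the decrease to a positive cursor velocity.
gain, baseline = -2.0, 0.5
v_horizontal = gain * (mu_amp - baseline)
print(f"mu amplitude {mu_amp:.2f} -> horizontal velocity {v_horizontal:+.2f}")
```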
Neuroscience, Issue 29, BCI, EEG, brain-computer interface, BCI2000
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Authors: Michael Barnett-Cowan, Tobias Meilinger, Manuel Vidal, Harald Teufel, Heinrich H. Bülthoff.
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can do path integration based exclusively on visual [2,3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate [6,7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8,9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing was consistent with underestimation of the angle moved through in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
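To make the pointing task concrete, this sketch computes the correct "point back to origin" direction for a two-segment trajectory, using the 0.4 m and 1 m segments above, and the signed error of a hypothetical response.

```python
# Sketch: geometry of the two-segment pointing task. Values follow the
# trajectory parameters in the abstract; the response is invented.
import math

def homing_direction(seg1, seg2, turn_deg):
    # Start at origin heading +x; turn by turn_deg, then travel seg2.
    x = seg1 + seg2 * math.cos(math.radians(turn_deg))
    y = seg2 * math.sin(math.radians(turn_deg))
    # Direction back to the origin, expressed relative to final heading.
    back = math.degrees(math.atan2(-y, -x))
    return (back - turn_deg) % 360

correct = homing_direction(0.4, 1.0, 90)
response = 150.0  # an assumed pointing response, in degrees
print(f"correct {correct:.1f} deg, signed error {response - correct:+.1f} deg")
```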
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Behavioral Determination of Stimulus Pair Discrimination of Auditory Acoustic and Electrical Stimuli Using a Classical Conditioning and Heart-rate Approach
Authors: Simeon J. Morgan, Antonio G. Paolini.
Institutions: La Trobe University.
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants [1-3] and auditory midbrain implants [4,5]. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices [6,7]. Several techniques, such as reward-based operant conditioning [6-8], conditioned avoidance [9-11], or classical fear conditioning [12], have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of the measure across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the cochlear nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode into place for chronic use, is likewise shown.
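A sketch of the heart-rate readout: compare beats per minute immediately before versus after stimulus onset, computed from R-peak times. The ECG event times are invented; a conditioned fear response would appear as a heart-rate change relative to the pre-stimulus baseline.

```python
# Sketch: heart-rate change around stimulus onset from R-peak times (s).
import numpy as np

r_peaks = np.array([0.00, 0.80, 1.61, 2.41, 3.30, 4.25, 5.22])
stim_onset = 2.41  # assumed stimulus time, aligned to a beat for simplicity

def bpm(times):
    return 60.0 / np.diff(times).mean()

pre = r_peaks[r_peaks <= stim_onset]
post = r_peaks[r_peaks >= stim_onset]
print(f"pre {bpm(pre):.1f} bpm, post {bpm(post):.1f} bpm, "
      f"change {bpm(post) - bpm(pre):+.1f} bpm")
```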
Neuroscience, Issue 64, Physiology, auditory, hearing, brainstem, stimulation, rat, abi
Three Dimensional Vestibular Ocular Reflex Testing Using a Six Degrees of Freedom Motion Platform
Authors: Joyce Dits, Mark M.J. Houben, Johannes van der Steen.
Institutions: Erasmus MC, TNO Human Factors.
The vestibular organ is a sensor that measures angular and linear accelerations with six degrees of freedom (6DF). Complete or partial defects in the vestibular organ result in mild to severe equilibrium problems, such as vertigo, dizziness, oscillopsia, gait unsteadiness, nausea and/or vomiting. A good and frequently used measure to quantify gaze stabilization is the gain, which is defined as the magnitude of compensatory eye movements with respect to imposed head movements. To test vestibular function more fully, one has to realize that the 3D VOR ideally generates compensatory ocular rotations not only with a magnitude (gain) equal and opposite to the head rotation but also about an axis that is co-linear with the head rotation axis (alignment). Abnormal vestibular function thus results in changes in gain and changes in alignment of the 3D VOR response. Here we describe a method to measure the 3D VOR using whole-body rotation on a 6DF motion platform. Although the method also allows testing of translation VOR responses [1], we limit ourselves to a discussion of the method to measure the 3D angular VOR. In addition, we restrict ourselves here to a description of data collected in healthy subjects in response to angular sinusoidal and impulse stimulation. Subjects sit upright and receive whole-body small-amplitude sinusoidal and constant-acceleration impulses. Sinusoidal stimuli (f = 1 Hz, A = 4°) were delivered about the vertical axis and about axes in the horizontal plane varying between roll and pitch at increments of 22.5° in azimuth. Impulses were delivered in yaw, roll and pitch and in the vertical canal planes. Eye movements were measured using the scleral search coil technique [2]. Search coil signals were sampled at a frequency of 1 kHz. The input-output ratio (gain) and misalignment (co-linearity) of the 3D VOR were calculated from the eye coil signals [3]. Gain and co-linearity of the 3D VOR depended on the orientation of the stimulus axis. Systematic deviations were found in particular during horizontal axis stimulation. In the light, the eye rotation axis was properly aligned with the stimulus axis at orientations of 0° and 90° azimuth, but deviated increasingly towards 45° azimuth. The systematic deviations in misalignment for intermediate axes can be explained by a low gain for torsion (X-axis or roll-axis rotation) and a high gain for vertical eye movements (Y-axis or pitch-axis rotation; see Figure 2). Because intermediate-axis stimulation leads to a compensatory response based on vector summation of the individual eye rotation components, the net response axis will deviate because the gains for the X- and Y-axes are different. In darkness the gain of all eye rotation components had lower values. The result was that the misalignment in darkness and for impulses had different peaks and troughs than in the light: its minimum value was reached for pitch-axis stimulation and its maximum for roll-axis stimulation.
Case Presentation
Nine subjects (six males and three females) participated in the experiment. All subjects gave their informed consent. The experimental procedure was approved by the Medical Ethics Committee of Erasmus University Medical Center and adhered to the Declaration of Helsinki for research involving human subjects. Six subjects served as controls. Three subjects had a unilateral vestibular impairment due to a vestibular schwannoma. The age of the control subjects ranged from 22 to 55 years. None of the controls had visual or vestibular complaints due to neurological, cardiovascular or ophthalmic disorders. The age of the patients with schwannoma varied between 44 and 64 years (two males and one female). All schwannoma subjects were under medical surveillance and/or had received treatment by a multidisciplinary team consisting of an otorhinolaryngologist and a neurosurgeon of the Erasmus University Medical Center. The tested patients all had a right-sided vestibular schwannoma and underwent a wait-and-watch policy (Table 1; subjects N1-N3) after being diagnosed with vestibular schwannoma. Their tumors had been stable on magnetic resonance imaging for 8-10 years.
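As a worked example of the gain and misalignment computation described above, this sketch treats eye and head angular velocities as 3D vectors: gain as their magnitude ratio, misalignment as the angle between the inverted eye rotation axis and the head rotation axis. The vectors are invented; real values come from calibrated search-coil signals.

```python
# Sketch: 3D VOR gain and misalignment from angular-velocity vectors.
import numpy as np

head = np.array([0.0, 0.0, 40.0])    # deg/s, yaw stimulus (assumed)
eye = np.array([2.0, -3.0, -36.0])   # compensatory eye velocity (assumed)

gain = np.linalg.norm(eye) / np.linalg.norm(head)
cosang = np.dot(-eye, head) / (np.linalg.norm(eye) * np.linalg.norm(head))
misalignment = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
print(f"gain {gain:.2f}, misalignment {misalignment:.1f} deg")
```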
Neurobiology, Issue 75, Neuroscience, Medicine, Anatomy, Physiology, Biomedical Engineering, Ophthalmology, vestibulo ocular reflex, eye movements, torsion, balance disorders, rotation translation, equilibrium, eye rotation, motion, body rotation, vestibular organ, clinical techniques
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
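One standard way to quantify CP along a morph continuum, sketched below with invented identification data, is to fit a logistic function to the proportion of "human" responses and read off the category boundary. This is a generic analysis sketch, not the protocol's specific procedure.

```python
# Sketch: locating a category boundary on a morph continuum by fitting
# a logistic identification function. Proportions are invented.
import numpy as np
from scipy.optimize import curve_fit

morph_level = np.linspace(0, 1, 9)   # avatar (0) -> human (1)
p_human = np.array([.02, .05, .08, .20, .55, .85, .93, .97, .99])

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, morph_level, p_human, p0=[0.5, 10.0])
print(f"category boundary at morph level {x0:.2f} (slope {k:.1f})")
```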
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
High Density Event-related Potential Data Acquisition in Cognitive Neuroscience
Authors: Scott D. Slotnick.
Institutions: Boston College.
Functional magnetic resonance imaging (fMRI) is currently the standard method of evaluating brain function in the field of Cognitive Neuroscience, in part because fMRI data acquisition and analysis techniques are readily available. Because fMRI has excellent spatial resolution but poor temporal resolution, this method can only be used to identify the spatial location of brain activity associated with a given cognitive process (and reveals virtually nothing about the time course of brain activity). By contrast, event-related potential (ERP) recording, a method that is used much less frequently than fMRI, has excellent temporal resolution and thus can track rapid temporal modulations in neural activity. Unfortunately, ERPs are underutilized in Cognitive Neuroscience because data acquisition techniques are not readily available and low-density ERP recording has poor spatial resolution. In an effort to foster the increased use of ERPs in Cognitive Neuroscience, the present article details key techniques involved in high-density ERP data acquisition. Critically, high-density ERPs offer the promise of excellent temporal resolution and good spatial resolution (or excellent spatial resolution if coupled with fMRI), which is necessary to capture the spatiotemporal dynamics of human brain function.
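For orientation, the sketch below shows the core ERP computation: baseline-correct stimulus-locked epochs and average across trials. The synthetic component latency and amplitudes are assumptions.

```python
# Sketch: averaging baseline-corrected, stimulus-locked epochs into an ERP.
import numpy as np

fs, n_trials = 500, 60
t = np.arange(-0.1, 0.5, 1 / fs)                      # s relative to onset
rng = np.random.default_rng(3)
component = 2e-6 * np.exp(-((t - 0.17) / 0.03) ** 2)  # assumed ERP deflection
epochs = component + 5e-6 * rng.standard_normal((n_trials, t.size))

baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
erp = (epochs - baseline).mean(axis=0)
print(f"peak {erp.max() * 1e6:.2f} uV at {t[erp.argmax()] * 1e3:.0f} ms")
```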
Neuroscience, Issue 38, ERP, electrodes, methods, setup
A Low Cost Setup for Behavioral Audiometry in Rodents
Authors: Konstantin Tziridis, Sönke Ahlf, Holger Schulze.
Institutions: University of Erlangen-Nuremberg.
In auditory animal research it is crucial to have precise information about the basic hearing parameters of the animal subjects involved in the experiments. Such parameters may be physiological response characteristics of the auditory pathway, e.g. via brainstem audiometry (BERA). But these methods allow only indirect and uncertain extrapolations about the auditory percept that corresponds to these physiological parameters. To assess the perceptual level of hearing, behavioral methods have to be used. A potential problem with the use of behavioral methods for the description of perception in animal models is the fact that most of these methods involve some kind of learning paradigm before the subjects can be behaviorally tested, e.g. animals may have to learn to press a lever in response to a sound. As these learning paradigms change perception itself [1,2], they will consequently influence any result about perception obtained with these methods and therefore have to be interpreted with caution. Exceptions are paradigms that make use of reflex responses, because here no learning paradigms have to be carried out prior to perceptual testing. One such reflex response is the acoustic startle response (ASR), which can be elicited highly reproducibly with unexpected loud sounds in naïve animals. This ASR in turn can be influenced by preceding sounds depending on the perceptibility of the preceding stimulus: sounds well above hearing threshold will completely inhibit the amplitude of the ASR; sounds close to threshold will only slightly inhibit the ASR. This phenomenon is called pre-pulse inhibition (PPI) [3,4], and the amount of PPI on the ASR gradually depends on the perceptibility of the pre-pulse. PPI of the ASR is therefore well suited to determine behavioral audiograms in naïve, non-trained animals, to determine hearing impairments, or even to detect possible subjective tinnitus percepts in these animals. In this paper we demonstrate the use of this method in a rodent model (cf. also [5]), the Mongolian gerbil (Meriones unguiculatus), which is a well-known model species for startle response research within the normal human hearing range (e.g. [6]).
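The PPI computation reduces to a percent reduction of startle amplitude, as in this sketch. Pre-pulse levels and amplitudes are invented; in a behavioral audiogram, the pre-pulse level at which PPI vanishes estimates the hearing threshold.

```python
# Sketch: pre-pulse inhibition (PPI) as percent reduction of the ASR.
import numpy as np

asr_alone = np.array([1.20, 1.05, 1.32, 1.18])    # V, startle-only trials

asr_prepulse = {                                  # keyed by pre-pulse level
    20: np.array([1.15, 1.22, 1.10, 1.19]),       # dB SPL, near threshold
    50: np.array([0.55, 0.48, 0.60, 0.52]),       # dB SPL, well above it
}

for level, amps in asr_prepulse.items():
    ppi = (1.0 - amps.mean() / asr_alone.mean()) * 100.0
    print(f"pre-pulse {level} dB SPL: PPI = {ppi:.0f}%")
```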
Neuroscience, Issue 68, Physiology, Anatomy, Medicine, otolaryngology, behavior, auditory startle response, pre-pulse inhibition, audiogram, tinnitus, hearing loss
Functional Imaging with Reinforcement, Eyetracking, and Physiological Monitoring
Authors: Vincent Ferrera, Jack Grinband, Tobias Teichert, Franco Pestilli, Stephen Dashnaw, Joy Hirsch.
Institutions: Columbia University.
We use functional brain imaging (fMRI) to study neural circuits that underlie decision-making. To understand how outcomes affect decision processes, simple perceptual tasks are combined with appetitive and aversive reinforcement. However, the use of reinforcers such as juice and airpuffs can create challenges for fMRI. Reinforcer delivery can cause head movement, which creates artifacts in the fMRI signal. Reinforcement can also lead to changes in heart rate and respiration that are mediated by autonomic pathways. Changes in heart rate and respiration can directly affect the fMRI (BOLD) signal in the brain and can be confounded with signal changes that are due to neural activity. In this presentation, we demonstrate methods for administering reinforcers in a controlled manner, for stabilizing the head, and for measuring pulse and respiration.
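One standard remedy for the physiological confound described above is to include the recorded pulse and respiration traces as nuisance regressors in a GLM of the voxel time series, as this sketch with synthetic signals illustrates. It is a generic illustration, not the authors' specific pipeline.

```python
# Sketch: modeling pulse and respiration as GLM nuisance regressors so
# the task effect is not confounded. All signals are synthetic.
import numpy as np

n = 200
rng = np.random.default_rng(4)
task = (np.arange(n) % 20 < 10).astype(float)   # boxcar task regressor
pulse = np.sin(np.arange(n) * 0.9)              # assumed cardiac trace
resp = np.sin(np.arange(n) * 0.25)              # assumed respiratory trace
bold = 1.5 * task + 0.8 * pulse + 0.5 * resp + rng.standard_normal(n)

X = np.column_stack([task, pulse, resp, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"task beta {beta[0]:.2f} (physiology modeled, not confounded)")
```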
Medicine, Issue 21, Neuroscience, Psychiatry, fMRI, Decision Making, Reward, Punishment, Pulse, Respiration, Eye Tracking, Psychology
The Structure of Skilled Forelimb Reaching in the Rat: A Movement Rating Scale
Authors: Ian Q. Whishaw, Paul Whishaw, Bogdan Gorny.
Institutions: University of Lethbridge.
Skilled reaching for food is an evolutionarily ancient act and is displayed by many animal species, including those in the sister clades of rodents and primates. The video describes a test situation that allows filming of repeated acts of reaching for food by a rat that has been mildly food deprived. A rat is trained to reach through a slot in a holding box for a food pellet that it grasps and then places in its mouth for eating. Reaching is accomplished in the main by proximally driven movements of the limb, but distal limb movements are used for pronating the paw, grasping the food, and releasing the food into the mouth. Each reach is divided into at least 10 movements of the forelimb, and the reaching act is facilitated by postural adjustments. Each of the movements is described and examples of the movements are given from a number of viewing perspectives. By rating each movement element on a 3-point scale, the reach can be quantified. A number of studies have demonstrated that the movement elements are altered by motor system damage, including damage to the motor cortex, basal ganglia, brainstem, and spinal cord. The movements are also altered in neurological conditions that can be modeled in the rat, including Parkinson's disease and Huntington's disease. Thus, the rating scale is useful for quantifying motor impairments and the effectiveness of neural restoration and rehabilitation. Because the reaching act of the rat is very similar to that displayed by humans and nonhuman primates, the scale can be used for comparative purposes. Experiments on animals were performed in accordance with the guidelines and regulations set forth by the University of Lethbridge Animal Care Committee in accordance with the regulations of the Canadian Council on Animal Care.
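A minimal sketch of how a single reach might be scored on such a scale. The element names and the 0-2 coding (0 = normal, 2 = abnormal) are placeholders standing in for the elements defined in the article.

```python
# Sketch: totaling per-element ratings for one reach. Element names and
# the 0-2 coding are assumptions for illustration.
elements = {
    "orient": 0, "limb lift": 0, "digits close": 1, "aim": 0,
    "advance": 0, "digits open": 1, "pronation": 2, "grasp": 1,
    "supination I": 2, "supination II": 1,
}
total = sum(elements.values())
print(f"reach score: {total} / {2 * len(elements)} "
      f"(higher = more impaired under this coding)")
```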
Neuroscience, Issue 18, rat skilled reaching, rat reaching scale, rat, rat movement element rating scale, reaching elements
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.