Pubmed Article
Rhesus monkeys see who they hear: spontaneous cross-modal memory for familiar conspecifics.
Published: 04-24-2011
Rhesus monkeys gather much of their knowledge of the social world through visual input and may preferentially represent this knowledge in the visual modality. Recognition of familiar faces is clearly advantageous, and the flexibility and utility of primate social memory would be greatly enhanced if visual memories could be accessed cross-modally either by visual or auditory stimulation. Such cross-modal access to visual memory would facilitate flexible retrieval of the knowledge necessary for adaptive social behavior. We tested whether rhesus monkeys have cross-modal access to visual memory for familiar conspecifics using a delayed matching-to-sample procedure. Monkeys learned visual matching of video clips of familiar individuals to photographs of those individuals, and generalized performance to novel videos. In cross-modal probe trials, coo-calls were played during the memory interval. The calls were either from the monkey just seen in the sample video clip or from a different familiar monkey. Even though the monkeys were trained exclusively in visual matching, the calls influenced choice by causing an increase in the proportion of errors to the picture of the monkey whose voice was heard on incongruent trials. This result demonstrates spontaneous cross-modal recognition. It also shows that viewing videos of familiar monkeys activates naturally formed memories of real monkeys, validating the use of video stimuli in studies of social cognition in monkeys.
Authors: Raphael Bernier, Benjamin Aaronson, Anna Kresse.
Published: 04-09-2014
Electroencephalography (EEG) is an effective, efficient, and noninvasive method of assessing and recording brain activity. Given its excellent temporal resolution, EEG can be used to examine the neural response related to specific behaviors, states, or external stimuli. An example of this utility is the assessment of the mirror neuron system (MNS) in humans through examination of the EEG mu rhythm. The EEG mu rhythm, oscillatory activity in the 8-12 Hz frequency range recorded from centrally located electrodes, is suppressed when an individual executes, or simply observes, goal-directed actions. As such, it has been proposed to reflect activity of the MNS. It has been theorized that dysfunction in the MNS plays a contributing role in the social deficits of autism spectrum disorder (ASD). The MNS can thus be noninvasively examined in clinical populations by using EEG mu rhythm attenuation as an index of its activity. The described protocol provides an avenue to examine social cognitive functions theoretically linked to the MNS in individuals with typical and atypical development, such as ASD.
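Mu suppression of the kind described above is typically quantified as the log ratio of mu-band (8-12 Hz) power during action observation or execution to power during a baseline condition. The abstract includes no analysis code, so the following is only a minimal Python sketch; the 250 Hz sampling rate, the Welch parameters, and the synthetic signals are illustrative assumptions, not part of the published protocol.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean power spectral density in the [lo, hi] Hz band (Welch's method)."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def mu_suppression(baseline, condition, fs=250):
    """Log ratio of condition to baseline mu-band power.
    Negative values indicate mu suppression (putative MNS activity)."""
    return np.log(band_power(condition, fs) / band_power(baseline, fs))

# Synthetic demo: a 10 Hz oscillation attenuated during action observation
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
observe = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
index = mu_suppression(baseline, observe, fs=fs)  # negative: suppression
```

In practice the band power would be computed per electrode and per trial, with artifact rejection beforehand; the log ratio is used because suppression ratios are skewed.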
22 Related JoVE Articles!
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g., "Tell me what you remember about the person you saw at the bank yesterday."); however, such reports can be unreliable or sensitive to response bias 1, and may be unobtainable in some participant populations. Furthermore, explicit reports reveal only when information has reached consciousness, and cannot indicate when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line 2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position, allowing for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically intact and neurologically impaired adults. Eye movement monitoring procedures begin with placement of the eye tracker on the participant and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure the accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e., fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g., button press).
Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
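The conversion of raw X,Y samples into fixations and saccades is commonly done with a velocity-threshold (I-VT) algorithm. As a rough illustration of that step (not the authors' own analysis pipeline), the sketch below labels each sample by its instantaneous velocity; the 500 Hz sampling rate and 100°/s threshold are assumed values for demonstration only.

```python
import math

def classify_ivt(samples, fs=500, vel_threshold=100.0):
    """Velocity-threshold (I-VT) labeling of gaze samples.
    samples: (x, y) positions in degrees of visual angle, sampled at fs Hz.
    Returns one 'fixation'/'saccade' label per sample."""
    if len(samples) < 2:
        return ['fixation'] * len(samples)
    labels = []
    for prev, cur in zip(samples, samples[1:]):
        velocity = math.hypot(cur[0] - prev[0], cur[1] - prev[1]) * fs  # deg/s
        labels.append('saccade' if velocity > vel_threshold else 'fixation')
    labels.insert(0, labels[0])  # first sample inherits its neighbor's label
    return labels

# Static gaze, one abrupt 5-degree shift, then static again
gaze = [(0.0, 0.0)] * 5 + [(5.0, 0.0)] * 5
labels = classify_ivt(gaze)  # the shift is flagged as a saccade
```

Real systems additionally merge adjacent fixation samples into fixation events and apply minimum-duration criteria before time-locking to display onsets.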
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of nontarget items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB on nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise, participants would respond 'no'. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could appear anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicate the previous finding.
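As a hedged illustration of the trial structure described above (one target category, a sequence of 8 nouns at a fixed SOA), the following Python sketch assembles a single 'yes' trial. The word lists, function name, and SOA default are hypothetical and not taken from the published procedure.

```python
import random

def build_rsvp_trial(target_category, category_words, filler_words,
                     soa_ms=120, n_items=8):
    """Assemble one 'yes' RSVP trial: n_items nouns at a fixed SOA, with one
    word from the target category embedded mid-stream. Returns the list of
    (word, onset_ms) pairs and the target's stream position."""
    stream = random.sample(filler_words, n_items - 1)
    target = random.choice(category_words)
    pos = random.randrange(1, n_items - 1)  # keep both probe neighbors in-stream
    stream.insert(pos, target)
    onsets = [i * soa_ms for i in range(n_items)]
    return list(zip(stream, onsets)), pos

fillers = ["table", "chair", "cloud", "river", "stone", "glass", "paper"]
trial, pos = build_rsvp_trial("animal", ["horse"], fillers)
```

The memory probes would then be drawn from positions pos - 1 and pos + 1 (and a random other position), which is why the target is never placed at either end of the stream.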
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speech in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to be challenging for children with ASD as well.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Extraction and Analysis of Cortisol from Human and Monkey Hair
Authors: Jerrold Meyer, Melinda Novak, Amanda Hamel, Kendra Rosenberg.
Institutions: University of Massachusetts, Amherst.
The stress hormone cortisol (CORT) is slowly incorporated into the growing hair shaft of humans, nonhuman primates, and other mammals. We developed and validated a method for CORT extraction and analysis from rhesus monkey hair and subsequently adapted this method for use with human scalp hair. In contrast to CORT "point samples" obtained from plasma or saliva, hair CORT provides an integrated measure of hypothalamic-pituitary-adrenocortical (HPA) system activity, and thus physiological stress, during the period of hormone incorporation. Because human scalp hair grows at an average rate of 1 cm/month, CORT levels obtained from hair segments several cm in length can potentially serve as a biomarker of stress experienced over a number of months. In our method, each hair sample is first washed twice in isopropanol to remove any CORT from the outside of the hair shaft that has been deposited from sweat or sebum. After drying, the sample is ground to a fine powder to break up the hair's protein matrix and increase the surface area for extraction. CORT from the interior of the hair shaft is extracted into methanol, the methanol is evaporated, and the extract is reconstituted in assay buffer. Extracted CORT, along with standards and quality controls, is then analyzed by means of a sensitive and specific commercially available enzyme immunoassay (EIA) kit. Readout from the EIA is converted to pg CORT per mg powdered hair weight. This method has been used in our laboratory to analyze hair CORT in humans, several species of macaque monkeys, marmosets, dogs, and polar bears. Many studies both from our lab and from other research groups have demonstrated the broad applicability of hair CORT for assessing chronic stress exposure in natural as well as laboratory settings.
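The final conversion step (EIA readout to pg CORT per mg powdered hair) is simple arithmetic: total extracted cortisol is the assay concentration multiplied by the reconstitution volume (and any dilution factor), divided by the hair mass. A sketch, with entirely hypothetical numbers:

```python
def cort_per_mg(eia_pg_per_ml, reconstitution_ml, hair_mg, dilution_factor=1.0):
    """Convert an EIA readout (pg/ml) to pg cortisol per mg powdered hair."""
    total_pg = eia_pg_per_ml * reconstitution_ml * dilution_factor
    return total_pg / hair_mg

# Hypothetical sample: 40 pg/ml readout, reconstituted in 0.25 ml of assay
# buffer, extracted from 25 mg of powdered hair
value = cort_per_mg(40.0, 0.25, 25.0)  # 0.4 pg/mg
```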
Basic Protocol, Issue 83, cortisol, hypothalamic-pituitary-adrenocortical axis, hair, stress, humans, monkeys
Functional Imaging of Auditory Cortex in Adult Cats using High-field fMRI
Authors: Trecia A. Brown, Joseph S. Gati, Sarah M. Hughes, Pam L. Nixon, Ravi S. Menon, Stephen G. Lomber.
Institutions: University of Western Ontario.
Current knowledge of sensory processing in the mammalian auditory system is mainly derived from electrophysiological studies in a variety of animal models, including monkeys, ferrets, bats, rodents, and cats. In order to draw suitable parallels between human and animal models of auditory function, it is important to establish a bridge between human functional imaging studies and animal electrophysiological studies. Functional magnetic resonance imaging (fMRI) is an established, minimally invasive method of measuring broad patterns of hemodynamic activity across different regions of the cerebral cortex. This technique is widely used to probe sensory function in the human brain, is a useful tool in linking studies of auditory processing in both humans and animals and has been successfully used to investigate auditory function in monkeys and rodents. The following protocol describes an experimental procedure for investigating auditory function in anesthetized adult cats by measuring stimulus-evoked hemodynamic changes in auditory cortex using fMRI. This method facilitates comparison of the hemodynamic responses across different models of auditory function thus leading to a better understanding of species-independent features of the mammalian auditory cortex.
Neuroscience, Issue 84, Central Nervous System, Ear, Animal Experimentation, Models, Animal, Functional Neuroimaging, Brain Mapping, Nervous System, Sense Organs, auditory cortex, BOLD signal change, hemodynamic response, hearing, acoustic stimuli
Feeding of Ticks on Animals for Transmission and Xenodiagnosis in Lyme Disease Research
Authors: Monica E. Embers, Britton J. Grasperge, Mary B. Jacobs, Mario T. Philipp.
Institutions: Tulane University Health Sciences Center.
Transmission of the etiologic agent of Lyme disease, Borrelia burgdorferi, occurs by the attachment and blood feeding of Ixodes species ticks on mammalian hosts. In nature, this zoonotic bacterial pathogen may use a variety of reservoir hosts, but the white-footed mouse (Peromyscus leucopus) is the primary reservoir for larval and nymphal ticks in North America. Humans are incidental hosts most frequently infected with B. burgdorferi by the bite of ticks in the nymphal stage. B. burgdorferi adapts to its hosts throughout the enzootic cycle, so the ability to explore the functions of these spirochetes and their effects on mammalian hosts requires the use of tick feeding. In addition, the technique of xenodiagnosis (using the natural vector for detection and recovery of an infectious agent) has been useful in studies of cryptic infection. In order to obtain nymphal ticks that harbor B. burgdorferi, ticks are fed live spirochetes in culture through capillary tubes. Two animal models, mice and nonhuman primates, are most commonly used for Lyme disease studies involving tick feeding. We demonstrate the methods by which these ticks can be fed on animals and recovered from them for either infection or xenodiagnosis.
Infection, Issue 78, Medicine, Immunology, Infectious Diseases, Biomedical Engineering, Primates, Muridae, Ticks, Borrelia, Borrelia Infections, Ixodes, ticks, Lyme disease, xenodiagnosis, Borrelia burgdorferi, mice, nonhuman primates, animal model
Nonhuman Primate Lung Decellularization and Recellularization Using a Specialized Large-organ Bioreactor
Authors: Ryan W. Bonvillain, Michelle E. Scarritt, Nicholas C. Pashos, Jacques P. Mayeux, Christopher L. Meshberger, Aline M. Betancourt, Deborah E. Sullivan, Bruce A. Bunnell.
Institutions: Tulane University School of Medicine, Tulane National Primate Research Center.
There are an insufficient number of lungs available to meet current and future organ transplantation needs. Bioartificial tissue regeneration is an attractive alternative to classic organ transplantation. This technology utilizes an organ's natural biological extracellular matrix (ECM) as a scaffold onto which autologous or stem/progenitor cells may be seeded and cultured in such a way that facilitates regeneration of the original tissue. The natural ECM is isolated by a process called decellularization. Decellularization is accomplished by treating tissues with a series of detergents, salts, and enzymes to achieve effective removal of cellular material while leaving the ECM intact. Studies conducted utilizing decellularization and subsequent recellularization of rodent lungs demonstrated marginal success in generating pulmonary-like tissue which is capable of gas exchange in vivo. While offering essential proof-of-concept, rodent models are not directly translatable to human use. Nonhuman primates (NHP) offer a more suitable model in which to investigate the use of bioartificial organ production for eventual clinical use. The protocols for achieving complete decellularization of lungs acquired from the NHP rhesus macaque are presented. The resulting acellular lungs can be seeded with a variety of cells including mesenchymal stem cells and endothelial cells. The manuscript also describes the development of a bioreactor system in which cell-seeded macaque lungs can be cultured under conditions of mechanical stretch and strain provided by negative pressure ventilation as well as pulsatile perfusion through the vasculature; these forces are known to direct differentiation along pulmonary and endothelial lineages, respectively. Representative results of decellularization and cell seeding are provided.
Bioengineering, Issue 82, rhesus macaque, decellularization, recellularization, detergent, matrix, scaffold, large-organ bioreactor, mesenchymal stem cells
Gradient Echo Quantum Memory in Warm Atomic Vapor
Authors: Olivier Pinel, Mahdi Hosseini, Ben M. Sparkes, Jesse L. Everett, Daniel Higginbottom, Geoff T. Campbell, Ping Koy Lam, Ben C. Buchler.
Institutions: The Australian National University.
Gradient echo memory (GEM) is a protocol for storing optical quantum states of light in atomic ensembles. The primary motivation for such a technology is that quantum key distribution (QKD), which uses Heisenberg uncertainty to guarantee security of cryptographic keys, is limited in transmission distance. The development of a quantum repeater is a possible path to extend QKD range, but a repeater will need a quantum memory. In our experiments we use a gas of rubidium 87 vapor that is contained in a warm gas cell. This makes the scheme particularly simple. It is also a highly versatile scheme that enables in-memory refinement of the stored state, such as frequency shifting and bandwidth manipulation. The basis of the GEM protocol is to absorb the light into an ensemble of atoms that has been prepared in a magnetic field gradient. The reversal of this gradient leads to rephasing of the atomic polarization and thus recall of the stored optical state. We will outline how we prepare the atoms and this gradient and also describe some of the pitfalls that need to be avoided, in particular four-wave mixing, which can give rise to optical gain.
Physics, Issue 81, quantum memory, photon echo, rubidium vapor, gas cell, optical memory, gradient echo memory (GEM)
Assessment of Social Interaction Behaviors
Authors: Oksana Kaidanovich-Beilin, Tatiana Lipina, Igor Vukobradovic, John Roder, James R. Woodgett.
Institutions: Mount Sinai Hospital, University of Toronto.
Social interactions are a fundamental and adaptive component of the biology of numerous species. Social recognition is critical for the structure and stability of the networks and relationships that define societies. For animals, such as mice, recognition of conspecifics may be important for maintaining social hierarchy and for mate choice 1. A variety of neuropsychiatric disorders are characterized by disruptions in social behavior and social recognition, including depression, autism spectrum disorders, bipolar disorders, obsessive-compulsive disorders, and schizophrenia. Studies of humans as well as animal models (e.g., Drosophila melanogaster, Caenorhabditis elegans, Mus musculus, Rattus norvegicus) have identified genes involved in the regulation of social behavior 2. To assess sociability in animal models, several behavioral tests have been developed (reviewed in 3). Integrative research using animal models and appropriate tests for social behavior may lead to the development of improved treatments for social psychopathologies. The three-chamber paradigm test known as Crawley's sociability and preference for social novelty protocol has been successfully employed to study social affiliation and social memory in several inbred and mutant mouse lines (e.g. 4-7). The main principle of this test is based on the free choice by a subject mouse to spend time in any of the box's three compartments during two experimental sessions, including indirect contact with one or two mice with which it is unfamiliar. To quantitate the social tendencies of the experimental mouse, the main tasks are to measure a) the time spent with a novel conspecific and b) preference for a novel vs. a familiar conspecific. Thus, the experimental design of this test allows evaluation of two critical but distinguishable aspects of social behavior: social affiliation/motivation, and social memory/novelty.
"Sociability" in this case is defined as the propensity to spend time with another mouse, as compared to time spent alone in an identical but empty chamber 7. "Preference for social novelty" is defined as the propensity to spend time with a previously unencountered mouse rather than with a familiar mouse 7. This test provides robust results, which must then be carefully analyzed, interpreted, and supported/confirmed by alternative sociability tests. In addition to specific applications, Crawley's sociability test can be included as an important component of a general behavioral screen of mutant mice.
Neuroscience, Issue 48, Mice, behavioral test, phenotyping, social interaction
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Measuring Attentional Biases for Threat in Children and Adults
Authors: Vanessa LoBue.
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
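The paradigm's dependent measure, latency to touch the target, supports a simple difference score: mean latency for neutral targets minus mean latency for threatening targets, with positive values indicating faster threat detection. A sketch with hypothetical latencies (not data from the study):

```python
from statistics import mean

def detection_advantage(threat_latencies_ms, neutral_latencies_ms):
    """Mean detection-latency difference (neutral minus threat).
    Positive values indicate faster detection of threatening targets."""
    return mean(neutral_latencies_ms) - mean(threat_latencies_ms)

# Hypothetical touch latencies (ms) from one participant
threat = [850, 910, 880, 840]
neutral = [1020, 980, 1050, 990]
advantage = detection_advantage(threat, neutral)  # positive: threat found faster
```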
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
The Crossmodal Congruency Task as a Means to Obtain an Objective Behavioral Measure in the Rubber Hand Illusion Paradigm
Authors: Regine Zopf, Greg Savage, Mark A. Williams.
Institutions: Macquarie University, Macquarie University, Macquarie University.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time then this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm. The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. same finger) on some trials and incongruent (i.e. different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand as well as the synchrony of viewed and felt touch which are both crucial factors for the RHI. The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows in particular for the investigation of multisensory processes which are critical for modulations of body representations as in the RHI.
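The CCE itself is a difference score: mean performance (for example, reaction time) on incongruent trials minus mean performance on congruent trials. As an illustrative sketch with hypothetical reaction times (not data from the study):

```python
from statistics import mean

def crossmodal_congruency_effect(congruent_rts_ms, incongruent_rts_ms):
    """CCE: mean incongruent minus mean congruent reaction time.
    A larger CCE indexes stronger visual-tactile interaction."""
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)

congruent = [420, 440, 430, 450]    # tactile target, distractor on same finger
incongruent = [510, 530, 505, 515]  # target and distractor on different fingers
cce = crossmodal_congruency_effect(congruent, incongruent)
```

Within the RHI paradigm, the key comparison is then between CCEs computed under synchronous versus asynchronous viewed and felt touch.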
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Psychology, Behavior and Behavior Mechanisms, Psychological Phenomena and Processes, Behavioral Sciences, rubber hand illusion, crossmodal congruency task, crossmodal congruency effect, multisensory processing, body ownership, peripersonal space, clinical techniques
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). However, studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of several reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Using the protocol presented in the video as an example, the accompanying article discusses methodological issues in the protocol and the use, in "uncanny" research, of stimuli drawn from morph continua to represent the DHL. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
A Video Demonstration of Preserved Piloting by Scent Tracking but Impaired Dead Reckoning After Fimbria-Fornix Lesions in the Rat
Authors: Ian Q. Whishaw, Boguslaw P. Gorny.
Institutions: Canadian Centre for Behavioural Neuroscience, University of Lethbridge.
Piloting and dead reckoning navigation strategies use very different cue constellations and computational processes (Darwin, 1873; Barlow, 1964; O’Keefe and Nadel, 1978; Mittelstaedt and Mittelstaedt, 1980; Landeau et al., 1984; Etienne, 1987; Gallistel, 1990; Maurer and Séguinot, 1995). Piloting requires the use of the relationships between relatively stable external (visual, olfactory, auditory) cues, whereas dead reckoning requires the integration of cues generated by self-movement. Animals obtain self-movement information from vestibular receptors, possibly from muscle and joint receptors, and from efference copy of the commands that generate movement. An animal may also use the flows of visual, auditory, and olfactory stimuli caused by its movements. Using a piloting strategy, an animal can use geometrical calculations to determine directions and distances to places in its environment, whereas using a dead reckoning strategy it can integrate cues generated by its previous movements to return to a just-left location. Dead reckoning is colloquially called "sense of direction" and "sense of distance." Although there is considerable evidence that the hippocampus is involved in piloting (O’Keefe and Nadel, 1978; O’Keefe and Speakman, 1987), there is also evidence from behavioral (Whishaw et al., 1997; Whishaw and Maaswinkel, 1998; Maaswinkel and Whishaw, 1999), modeling (Samsonovich and McNaughton, 1997), and electrophysiological (O’Mara et al., 1994; Sharp et al., 1995; Taube and Burton, 1995; Blair and Sharp, 1996; McNaughton et al., 1996; Wiener, 1996; Golob and Taube, 1997) studies that the hippocampal formation is involved in dead reckoning. The relative contribution of the hippocampus to the two forms of navigation is still uncertain, however.
Ordinarily, it is difficult to be certain that an animal is using a piloting versus a dead reckoning strategy because animals are very flexible in their use of strategies and cues (Etienne et al., 1996; Dudchenko et al., 1997; Martin et al., 1997; Maaswinkel and Whishaw, 1999). The objective of the present video demonstrations was to solve the problem of cue specification in order to examine the relative contribution of the hippocampus to the use of these strategies. The rats were trained in a new task in which they followed linear or polygon scented trails to obtain a large food pellet hidden on an open field. Because rats have a proclivity to carry the food back to the refuge, accuracy and the cues used to return to the home base were dependent variables (Whishaw and Tomie, 1997). To force an animal to use a dead reckoning strategy to reach its refuge with the food, the rats were tested when blindfolded or under infrared light, a spectral wavelength in which they cannot see, and in some experiments the scent trail was additionally removed once an animal reached the food. To examine the relative contribution of the hippocampus, fimbria–fornix (FF) lesions, which disrupt information flow in the hippocampal formation (Bland, 1986), impair memory (Gaffan and Gaffan, 1991), and produce spatial deficits (Whishaw and Jarrard, 1995), were used.
Neuroscience, Issue 26, Dead reckoning, fimbria-fornix, hippocampus, odor tracking, path integration, spatial learning, spatial navigation, piloting, rat, Canadian Centre for Behavioural Neuroscience
Recording Single Neurons' Action Potentials from Freely Moving Pigeons Across Three Stages of Learning
Authors: Sarah Starosta, Maik C. Stüttgen, Onur Güntürkün.
Institutions: Ruhr-University Bochum.
While the subject of learning has attracted immense interest from both behavioral and neural scientists, only relatively few investigators have observed single-neuron activity while animals are acquiring an operantly conditioned response, or when that response is extinguished. Even in these cases, observation periods usually encompass only a single stage of learning, i.e. acquisition or extinction, but not both (exceptions include protocols employing reversal learning; see Bingman et al.1 for an example). However, acquisition and extinction entail different learning mechanisms and are therefore expected to be accompanied by different types and/or loci of neural plasticity. Accordingly, we developed a behavioral paradigm which institutes three stages of learning in a single behavioral session and which is well suited for the simultaneous recording of single neurons' action potentials. Animals are trained on a single-interval forced choice task which requires mapping each of two possible choice responses to the presentation of different novel visual stimuli (acquisition). After a predefined performance criterion has been reached, one of the two choice responses is no longer reinforced (extinction). Following a certain decrement in performance level, correct responses are reinforced again (reacquisition). By using a new set of stimuli in every session, animals can undergo the acquisition-extinction-reacquisition process repeatedly. Because all three stages of learning occur in a single behavioral session, the paradigm is ideal for the simultaneous observation of the spiking output of multiple single neurons. We use pigeons as a model system, but the task can easily be adapted to any other species capable of conditioned discrimination learning.
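The three-stage session logic described above (acquisition until a performance criterion is met, extinction until performance drops, then reacquisition) can be sketched as a simple state transition. The criterion values below are assumptions for illustration, not the thresholds actually used in the protocol.

```python
# Illustrative sketch of the acquisition -> extinction -> reacquisition
# stage logic. Criterion values are hypothetical.
ACQ_CRITERION = 0.80   # assumed fraction correct that ends acquisition
EXT_CRITERION = 0.40   # assumed performance decrement that ends extinction

def next_stage(stage, fraction_correct):
    """Advance the session stage from a moving-window performance score."""
    if stage == "acquisition" and fraction_correct >= ACQ_CRITERION:
        return "extinction"      # one choice response is no longer reinforced
    if stage == "extinction" and fraction_correct <= EXT_CRITERION:
        return "reacquisition"   # correct responses are reinforced again
    return stage                 # otherwise, stay in the current stage

print(next_stage("acquisition", 0.85))  # -> extinction
print(next_stage("extinction", 0.30))   # -> reacquisition
```

Because a new stimulus set is used each session, this full cycle can be run repeatedly while neural activity is recorded throughout.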
Neuroscience, Issue 88, pigeon, single unit recording, learning, memory, extinction, spike sorting, operant conditioning, reward, electrophysiology, animal cognition, model species
A Highly Reproducible and Straightforward Method to Perform In Vivo Ocular Enucleation in the Mouse after Eye Opening
Authors: Jeroen Aerts, Julie Nys, Lutgarde Arckens.
Institutions: KU Leuven - University of Leuven.
Enucleation, or the surgical removal of an eye, can generally be considered a model for nerve deafferentation. It provides a valuable tool to study the different aspects of visual, cross-modal and developmental plasticity along the mammalian visual system1-4. Here, we demonstrate an elegant and straightforward technique for the removal of one or both eyes in the mouse, validated in mice from 20 days of age up to adulthood. Briefly, a disinfected curved forceps is used to clamp the optic nerve behind the eye. Subsequently, circular movements are performed to constrict the optic nerve and remove the eyeball. The advantages of this technique are high reproducibility, minimal to no bleeding, rapid post-operative recovery and a very low learning threshold for the experimenter. Hence, a large number of animals can be manipulated and processed with minimal effort. The nature of the technique may induce slight damage to the retina during the procedure. This side effect makes the method less suitable than that of Mahajan et al. (2011)5 if the goal is to collect and analyze retinal tissue. Also, our method is limited to post-eye-opening ages (mouse: P10 - 13 onwards), since the eyeball needs to be displaced from the socket without removing the eyelids. The in vivo enucleation technique described in this manuscript has recently been successfully applied with minor modifications in rats and appears useful for studying the afferent visual pathway of rodents in general.
Anatomy, Issue 92, Deprivation, visual system, eye, optic nerve, rodent, mouse, neuroplasticity, neuroscience
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in that the associations are learned implicitly, while the reader reads text as he or she normally would, without explicit computer-directed training. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables such as latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when using the procedure for the first time.
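As an illustration of the response measures mentioned above, the sketch below derives the binary response, latency, and duration from hypothetical trial timestamps. The field names and logging scheme are assumptions; adapt them to however extension onsets and offsets are actually recorded.

```python
# Hypothetical sketch: deriving the three PER response measures from
# one conditioning trial's timestamps (seconds from trial start).
def per_measures(odor_onset, ext_onset, ext_offset):
    """Return (responded, latency_s, duration_s) for one trial.

    ext_onset/ext_offset are None when the proboscis was never extended.
    """
    if ext_onset is None:
        return (0, None, None)            # binary 'no' response
    latency = ext_onset - odor_onset      # time from odor onset to extension
    duration = ext_offset - ext_onset     # how long extension was held
    return (1, latency, duration)

# A bee extending its proboscis 1.2 s after odor onset, holding it 2.5 s:
print(per_measures(0.0, 1.2, 3.7))
# A trial with no extension:
print(per_measures(0.0, None, None))
```

Binary scores suffice for quick classroom assays; the continuous measures give more statistical power for conditioning studies.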
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Computer-Generated Animal Model Stimuli
Authors: Kevin L. Woo.
Institutions: Macquarie University.
Communication between animals is diverse and complex. Animals may communicate using auditory, seismic, chemosensory, electrical, or visual signals. In particular, understanding the constraints on visual signal design for communication has been of great interest. Traditional methods for investigating animal interactions have used basic observational techniques, staged encounters, or physical manipulation of morphology. Less intrusive methods have tried to simulate conspecifics using crude playback tools, such as mirrors, still images, or models. As technology has become more advanced, video playback has emerged as another tool with which to examine visual communication (Rosenthal, 2000). To move one step further, the application of computer animation now allows researchers to specifically isolate the critical components necessary to elicit social responses from conspecifics, and to manipulate these features to control interactions. Here, I provide detail on how to create an animation using the Jacky dragon as a model, but this process may be adapted for other species. In building the animation, I elected to use Lightwave 3D to alter object morphology, add texture, install bones, and provide comparable weight shading that prevents exaggerated movement. The animation is then matched to selected motor patterns to replicate critical movement features. Finally, the sequence must be rendered into an individual clip for presentation. Although other adaptable techniques exist, this particular method has been demonstrated to be effective in eliciting both conspicuous and social responses in staged interactions.
Neuroscience, Issue 6, behavior, lizard, simulation, animation
Behavioral Assessment of Manual Dexterity in Non-Human Primates
Authors: Eric Schmidlin, Mélanie Kaeser, Anne- Dominique Gindrat, Julie Savidan, Pauline Chatagny, Simon Badoud, Adjia Hamadjida, Marie-Laure Beaud, Thierry Wannier, Abderraouf Belhaj-Saif, Eric M. Rouiller.
Institutions: University of Fribourg.
The corticospinal (CS) tract is the anatomical support of the exquisite motor ability to skillfully manipulate small objects, a prerogative mainly of primates1. In the case of a lesion affecting the CS projection system at its origin (lesion of motor cortical areas) or along its trajectory (cervical cord lesion), there is a dramatic loss of manual dexterity (hand paralysis), as seen in some tetraplegic or hemiplegic patients. Although there is some spontaneous functional recovery after such a lesion, it remains very limited in the adult. Various therapeutic strategies are presently proposed (e.g. cell therapy, neutralization of inhibitory axonal growth molecules, application of growth factors, etc.), which are mostly developed in rodents. However, before clinical application, it is often recommended to test the feasibility, efficacy, and safety of the treatment in non-human primates. This is especially true when the goal is to restore manual dexterity after a lesion of the central nervous system, as the organization of the motor system of rodents is different from that of primates1,2. Macaque monkeys are presented here as a suitable behavioral model for quantifying manual dexterity in primates: the tasks reflect the deficits resulting from a lesion of the motor cortex or cervical cord, for instance, measure the extent of spontaneous functional recovery and, when a treatment is applied, evaluate how much it enhances the functional recovery. The behavioral assessment of manual dexterity is based on four distinct, complementary, reach-and-grasp manual tasks (use of precision grip to grasp pellets), requiring an initial training of adult macaque monkeys. The preparation of the animals is demonstrated, as well as the positioning with respect to the behavioral set-up. The performance of a typical monkey is illustrated for each task.
The collection and analysis of relevant parameters reflecting precise hand manipulation, as well as the control of force, are explained and demonstrated with representative results. These data are then placed in a broader context, showing how the behavioral data can be exploited to investigate the impact of a spinal cord lesion or of a lesion of the motor cortex, and to what extent a treatment may enhance the spontaneous functional recovery, by comparing different groups of monkeys (for instance, treated versus sham-treated). Advantages and limitations of the behavioral tests are discussed. The present behavioral approach is in line with previous reports emphasizing the pertinence of the non-human primate model in the context of nervous system diseases2,3.
Neuroscience, Issue 57, monkey, hand, spinal cord lesion, cerebral cortex lesion, functional recovery
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are predicted not within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the reconstruction of content-specific neural activity patterns in early sensory cortices.
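A toy sketch of the cross-modal MVPA logic: classifying which of two sound-implying clips was seen from simulated "auditory-cortex" voxel patterns. It uses a nearest-centroid classifier with leave-one-out cross-validation on synthetic data; the cited studies used real fMRI patterns and more sophisticated classifiers, so this is only a minimal illustration of the decoding idea.

```python
# Synthetic demo of cross-modal pattern classification.
# Two content-specific "templates" stand in for the activity patterns
# evoked in auditory cortex by two muted videos (e.g. shattering vase
# vs. howling dog); trials are noisy copies of a template.
import random

random.seed(0)
N_VOXELS = 40
N_PER_CLASS = 10
templates = [[random.gauss(0, 1) for _ in range(N_VOXELS)] for _ in range(2)]

def noisy(template, scale=0.4):
    """A simulated single-trial pattern: template plus Gaussian noise."""
    return [v + random.gauss(0, scale) for v in template]

patterns = [(noisy(templates[c]), c) for c in (0, 1) for _ in range(N_PER_CLASS)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

# Leave-one-out cross-validation with a nearest-centroid classifier.
correct = 0
for i, (test_pat, test_lab) in enumerate(patterns):
    cents = []
    for c in (0, 1):
        train = [p for j, (p, lab) in enumerate(patterns) if j != i and lab == c]
        cents.append(centroid(train))
    pred = min((0, 1), key=lambda c: dist(test_pat, cents[c]))
    correct += pred == test_lab

accuracy = correct / len(patterns)
print(accuracy)  # above the 0.5 chance level when patterns are content-specific
```

Accuracy reliably above chance is the evidence sought in the protocol: it indicates that the region carries content-specific information about stimuli presented in another modality.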
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X