JoVE Visualize
Pubmed Article
Duality in binocular rivalry: distinct sensitivity of percept sequence and percept duration to imbalance between monocular stimuli.
PLoS ONE
PUBLISHED: 05-17-2009
Visual perception is usually stable and accurate. However, when the two eyes are simultaneously presented with conflicting stimuli, perception falls into a sequence of spontaneous alternations, switching between one stimulus and the other every few seconds. Known as binocular rivalry, this visual illusion decouples subjective experience from physical stimulation and provides a unique opportunity to study the neural correlates of consciousness. The temporal properties of this alternating perception have been intensively investigated for decades, yet the relationship between two fundamental properties - the sequence of percepts and the duration of each percept - remains largely unexplored.
Related JoVE Video
Authors: David Carmel, Michael Arcaro, Sabine Kastner, Uri Hasson.
Published: 11-10-2010
ABSTRACT
Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations where one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes and prism goggles.
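As a rough illustration of the red-blue goggle approach mentioned above, the sketch below (Python with NumPy and Matplotlib assumed; the orthogonal gratings and all parameter values are illustrative choices, not the authors' stimuli) builds a single anaglyph image in which each eye, viewed through the colored filters, receives a different grating.

```python
# Minimal sketch: an anaglyph (red-blue) rivalry stimulus made from two
# orthogonal gratings. Viewed through red-blue goggles, each eye receives
# only one grating. Grating parameters here are illustrative.
import numpy as np
import matplotlib.pyplot as plt

size = 256                     # image size in pixels
cycles = 8                     # grating cycles across the image
x = np.linspace(0, 2 * np.pi * cycles, size)
xx, yy = np.meshgrid(x, x)

vertical   = 0.5 + 0.5 * np.sin(xx)    # grating for one eye (0..1)
horizontal = 0.5 + 0.5 * np.sin(yy)    # orthogonal grating for the other eye

anaglyph = np.zeros((size, size, 3))
anaglyph[..., 0] = vertical            # red channel  -> eye behind red filter
anaglyph[..., 2] = horizontal          # blue channel -> eye behind blue filter

plt.imshow(anaglyph)
plt.axis('off')
plt.title('Red-blue rivalry stimulus (illustrative)')
plt.show()
```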
21 Related JoVE Articles!
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Authors: Jillian Nguyen, Thomas V. Papathomas, Jay H. Ravaliya, Elizabeth B. Torres.
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways: Object From Motion (OFM) extraction and time-constrained stereo protocols. In the OFM test, an array of dots composes an object; the dots within the object move rightward while the dots outside it move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the time-constrained stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical use and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be useful for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, they may reveal visual deficits that cannot be identified with current standard visual measurements. Moreover, they sensitively identify the basis of the currently unexplained persistent visual complaints of patients after recovery of visual acuity. In longitudinal follow-up, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
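The following sketch illustrates the kind of frame-update rule the OFM description implies: dots inside a hidden object region drift one way and dots outside it drift the other, so the object is invisible in any static frame. The circular object, dot count, and step size are assumptions for illustration, not the published stimulus parameters.

```python
# Sketch of an object-from-motion (OFM) update rule: dots inside a hidden
# object region move rightward, dots outside it move leftward. In any single
# frame the dot field is uniform, so the object is only visible from motion.
# The circular "object" and step size are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_dots, width, height = 500, 640, 480
dots = rng.uniform([0, 0], [width, height], size=(n_dots, 2))

obj_center, obj_radius = np.array([320.0, 240.0]), 80.0
step = 2.0                                  # pixels per frame

def update(dots):
    """Advance all dots by one frame and wrap around the display edges."""
    inside = np.linalg.norm(dots - obj_center, axis=1) < obj_radius
    dots[inside, 0] += step                 # object dots drift right
    dots[~inside, 0] -= step                # background dots drift left
    dots[:, 0] %= width                     # wrap horizontally
    return dots

for frame in range(120):                    # 120 frames of animation
    dots = update(dots)
```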
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
A Novel Approach for Documenting Phosphenes Induced by Transcranial Magnetic Stimulation
Authors: Seth Elkin-Frankston, Peter J. Fried, Alvaro Pascual-Leone, R. J. Rushmore III, Antoni Valero-Cabré.
Institutions: Boston University School of Medicine, Beth Israel Deaconess Med Center, Centre National de la Recherche Scientifique (CNRS).
Stimulation of the human visual cortex produces a transient perception of light, known as a phosphene. Phosphenes are induced by invasive electrical stimulation of the occipital cortex, but also by non-invasive Transcranial Magnetic Stimulation (TMS)1 of the same cortical regions. The intensity at which a phosphene is induced (phosphene threshold) is a well-established measure of visual cortical excitability and is used to study cortico-cortical interactions, functional organization2, susceptibility to pathology3,4 and visual processing5-7. Phosphenes are typically defined by three characteristics: they are observed in the visual hemifield contralateral to stimulation; they are induced whether the subject's eyes are open or closed; and their spatial location changes with the direction of gaze2. Various methods have been used to document phosphenes, but a standardized methodology is lacking. We demonstrate a reliable procedure for obtaining phosphene threshold values and introduce a novel system for the documentation and analysis of phosphenes. We developed the Laser Tracking and Painting system (LTaP), a low-cost, easily built and operated system that records the location and size of perceived phosphenes in real time. The LTaP system provides a stable and customizable environment for the quantification and analysis of phosphenes.
Neuroscience, Issue 38, Transcranial Magnetic Stimulation (TMS), Phosphenes, Occipital, Human visual cortex, Threshold
Testing Visual Sensitivity to the Speed and Direction of Motion in Lizards
Authors: Kevin L. Woo.
Institutions: Macquarie University.
Testing visual sensitivity in any species provides basic information regarding behaviour, evolution, and ecology. However, testing specific features of the visual system provides more empirical evidence for functional applications. Investigation of the sensory system provides information about sensory capacity and learning and memory ability, and establishes a known behavioural baseline against which to gauge deviations (Burghardt, 1977). However, unlike in mammalian or avian systems, testing for learning and memory in a reptile species is difficult, and using an operant paradigm as a psychophysical measure of sensory ability is equally difficult. Historically, reptilian species have responded poorly to conditioning trials because of issues related to motivation, physiology, metabolism, and basic biological characteristics. Here, I demonstrate an operant paradigm in a novel model lizard species, the Jacky dragon (Amphibolurus muricatus), and describe how to test peripheral sensitivity to salient speed and motion characteristics. This method uses an innovative approach to assessing learning and sensory capacity in lizards. I employ random-dot kinematograms (RDKs) to measure sensitivity to speed, and manipulate the level of signal strength by changing the proportion of dots moving in a coherent direction. RDKs do not represent a biologically meaningful stimulus, yet they engage the visual system and are a classic psychophysical tool used to measure sensitivity in humans and other animals. Here, RDKs are displayed to lizards using three video playback systems. Lizards are trained to select the direction (left or right) in which they perceive the dots to be moving. Selection of the appropriate direction is reinforced by biologically important prey stimuli, simulated by computer-animated invertebrates.
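A minimal sketch of an RDK update of the sort described above: on each frame, a proportion of dots equal to the coherence level moves in the signal direction while the remainder move in random directions. Dot counts, speed, and display size are illustrative assumptions.

```python
# Sketch of a random-dot kinematogram (RDK) update: a fraction of dots equal
# to the coherence level moves in the signal direction (left or right), the
# remaining dots move in random directions. Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_dots, width, height = 200, 640, 480
speed = 3.0                                    # pixels per frame

def rdk_step(dots, coherence=0.5, direction=0.0):
    """direction is in radians (0 = rightward); coherence in [0, 1]."""
    n_signal = int(round(coherence * len(dots)))
    angles = rng.uniform(0, 2 * np.pi, size=len(dots))   # noise directions
    angles[:n_signal] = direction                        # signal dots
    dots[:, 0] = (dots[:, 0] + speed * np.cos(angles)) % width
    dots[:, 1] = (dots[:, 1] + speed * np.sin(angles)) % height
    return dots

dots = rng.uniform([0, 0], [width, height], size=(n_dots, 2))
for frame in range(60):
    dots = rdk_step(dots, coherence=0.5, direction=0.0)  # 50% rightward signal
```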
Neuroscience, Issue 2, Visual sensitivity, motion perception, operant conditioning, speed, coherence, Jacky dragon (Amphibolurus muricatus)
The Measurement and Treatment of Suppression in Amblyopia
Authors: Joanna M. Black, Robert F. Hess, Jeremy R. Cooperstock, Long To, Benjamin Thompson.
Institutions: University of Auckland, McGill University.
Amblyopia, a developmental disorder of the visual cortex, is one of the leading causes of visual dysfunction in the working-age population. Current estimates put the prevalence of amblyopia at approximately 1-3%1-3, the majority of cases being monocular2. Amblyopia is most frequently caused by ocular misalignment (strabismus), blur induced by unequal refractive error (anisometropia), and in some cases by form deprivation. Although amblyopia is initially caused by abnormal visual input in infancy, once established, the visual deficit often remains after normal visual input has been restored using surgery and/or refractive correction. This is because amblyopia is the result of abnormal visual cortex development rather than a problem with the amblyopic eye itself4,5. Amblyopia is characterized by both monocular and binocular deficits6,7, which include impaired visual acuity and poor or absent stereopsis, respectively. The visual dysfunction in amblyopia is often associated with a strong suppression of the inputs from the amblyopic eye under binocular viewing conditions8. Recent work has indicated that suppression may play a central role in both the monocular and binocular deficits associated with amblyopia9,10. Current clinical tests for suppression tend to verify the presence or absence of suppression rather than giving a quantitative measurement of its degree. Here we describe a technique for measuring amblyopic suppression with a compact, portable device11,12. The device consists of a laptop computer connected to a pair of virtual reality goggles. The novelty of the technique lies in the way visual stimuli are presented to measure suppression. Stimuli are shown to the amblyopic eye at high contrast while the contrast of the stimuli shown to the non-amblyopic eye is varied. Patients perform a simple signal/noise task that allows for a precise measurement of the strength of excitatory binocular interactions. The contrast offset at which neither eye has a performance advantage is the "balance point" and provides a direct measure of suppression. This technique has been validated psychophysically both in control13,14 and patient6,9,11 populations. In addition to measuring suppression, this technique also forms the basis of a novel treatment to decrease suppression over time and improve binocular and often monocular function in adult patients with amblyopia12,15,16. This new treatment approach can be deployed either on the goggle system described above or on a specially modified iPod touch device15.
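To make the balance-point logic concrete, the sketch below estimates the fellow-eye contrast at which performance is equal whichever eye receives the signal, using linear interpolation on invented example data. It is only an illustration of the idea, not the authors' fitting procedure.

```python
# Sketch of estimating the interocular "balance point": find the fellow-eye
# contrast at which performance is the same whichever eye receives the signal.
# The example data and linear interpolation are illustrative only.
import numpy as np

fellow_contrast = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # proportion of max
p_correct_signal_amblyopic = np.array([0.95, 0.90, 0.78, 0.65, 0.55, 0.48])
p_correct_signal_fellow    = np.array([0.52, 0.60, 0.72, 0.80, 0.88, 0.93])

# Performance advantage of the amblyopic eye at each contrast level
diff = p_correct_signal_amblyopic - p_correct_signal_fellow

# Balance point: fellow-eye contrast where the advantage crosses zero
idx = np.where(np.diff(np.sign(diff)))[0][0]          # first sign change
x0, x1 = fellow_contrast[idx], fellow_contrast[idx + 1]
y0, y1 = diff[idx], diff[idx + 1]
balance_point = x0 - y0 * (x1 - x0) / (y1 - y0)       # linear interpolation
print(f"Estimated balance point: fellow-eye contrast ~ {balance_point:.2f}")
```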
Medicine, Issue 70, Ophthalmology, Neuroscience, Anatomy, Physiology, Amblyopia, suppression, visual cortex, binocular vision, plasticity, strabismus, anisometropia
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Authors: Jason Bell, Edwin Dickinson, David R. Badcock, Frederick A. A. Kingdom.
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition are compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next, we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally, we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone eight-mirror stereoscope. In doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
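The distinction between the two projections can be summarized in a few lines of code: rotate a set of 3D points about the vertical axis (a rotation-in-depth) and project them either by dropping the depth coordinate (orthographic) or by scaling with viewing distance (perspective). The square of points and the viewing distance are illustrative assumptions.

```python
# Sketch: rotate 3D points about the vertical axis (rotation-in-depth) and
# compare orthographic vs. perspective projection onto the image plane.
import numpy as np

def rotate_in_depth(points, angle_deg):
    """Rotate Nx3 points about the vertical (y) axis."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[ np.cos(a), 0, np.sin(a)],
                    [ 0,         1, 0        ],
                    [-np.sin(a), 0, np.cos(a)]])
    return points @ rot.T

def orthographic(points):
    """Drop the depth coordinate: (x, y, z) -> (x, y)."""
    return points[:, :2]

def perspective(points, viewing_distance=5.0):
    """Scale x and y by distance from the viewer: nearer points project larger."""
    z = viewing_distance + points[:, 2]
    return points[:, :2] * (viewing_distance / z)[:, None]

square = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0]], float)
rotated = rotate_in_depth(square, 30)
print("orthographic:\n", orthographic(rotated))
print("perspective:\n", perspective(rotated))
```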
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g. "Tell me what you remember about the person you saw at the bank yesterday."), however such reports can often be unreliable or sensitive to response bias 1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line 2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant, and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e. fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g. button press). Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
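As a sketch of how fixations and saccades can be separated from the recorded X,Y samples, the code below applies a simple velocity-threshold rule to a synthetic gaze trace. The sampling rate and threshold are ballpark assumptions, not the parameters of any particular eye tracker or of this protocol.

```python
# Sketch: simple velocity-threshold classification of gaze samples into
# saccades (velocity above threshold) and fixations (below). The sampling
# rate, threshold, and synthetic data are illustrative only.
import numpy as np

sample_rate = 500.0                       # Hz
velocity_threshold = 100.0                # deg/s, a common ballpark value

# Synthetic gaze trace in degrees of visual angle: fixation, jump, fixation
t = np.arange(0, 1.0, 1.0 / sample_rate)
x = np.where(t < 0.5, 0.0, 8.0) + 0.05 * np.random.randn(t.size)
y = np.zeros_like(x) + 0.05 * np.random.randn(t.size)

# Point-to-point angular velocity
velocity = np.hypot(np.diff(x), np.diff(y)) * sample_rate

is_saccade = velocity > velocity_threshold
print(f"{is_saccade.mean():.1%} of samples classified as saccadic")
```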
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
A Highly Reproducible and Straightforward Method to Perform In Vivo Ocular Enucleation in the Mouse after Eye Opening
Authors: Jeroen Aerts, Julie Nys, Lutgarde Arckens.
Institutions: KU Leuven - University of Leuven.
Enucleation, the surgical removal of an eye, can generally be considered a model of nerve deafferentation. It provides a valuable tool to study the different aspects of visual, cross-modal and developmental plasticity along the mammalian visual system1-4. Here, we demonstrate an elegant and straightforward technique for the removal of one or both eyes in the mouse, which is validated in mice from 20 days old up to adulthood. Briefly, a disinfected curved forceps is used to clamp the optic nerve behind the eye. Subsequently, circular movements are performed to constrict the optic nerve and remove the eyeball. The advantages of this technique are high reproducibility, minimal to no bleeding, rapid post-operative recovery and a very low learning threshold for the experimenter. Hence, a large number of animals can be manipulated and processed with minimal effort. The technique may cause slight damage to the retina during the procedure. This side effect makes the method less suitable than that of Mahajan et al. (2011)5 if the goal is to collect and analyze retinal tissue. Also, our method is limited to post-eye-opening ages (mouse: P10 - 13 onwards) since the eyeball needs to be displaced from the socket without removing the eyelids. The in vivo enucleation technique described in this manuscript has recently been successfully applied with minor modifications in rats and appears useful for studying the afferent visual pathway of rodents in general.
Anatomy, Issue 92, Deprivation, visual system, eye, optic nerve, rodent, mouse, neuroplasticity, neuroscience
State-Dependency Effects on TMS: A Look at Motive Phosphene Behavior
Authors: Umer Najib, Jared C. Horvath, Juha Silvanto, Alvaro Pascual-Leone.
Institutions: Beth Israel Deaconess Medical Center, Aalto University School of Science and Technology.
Transcranial magnetic stimulation (TMS) is a non-invasive neurostimulatory and neuromodulatory technique that can transiently or lastingly modulate cortical excitability (either increasing or decreasing it) via the application of localized magnetic field pulses.1,2 Within the field of TMS, the term state dependency refers to the initial, baseline condition of the particular neural region targeted for stimulation. As can be inferred, the effects of TMS can (and do) vary according to this primary susceptibility and responsiveness of the targeted cortical area.3,4,5 In this experiment, we will examine this concept of state dependency through the elicitation and subjective experience of motive phosphenes. Phosphenes are visually perceived flashes of small lights triggered by electromagnetic pulses to the visual cortex. These small lights can assume varied characteristics depending upon which type of visual cortex is being stimulated. In this particular study, we will be targeting motive phosphenes as elicited through the stimulation of V1/V2 and the V5/MT+ complex visual regions.6
Neuroscience, Issue 46, Transcranial Magnetic Stimulation, state dependency, motive phosphenes, visual priming, V1/V2, V5/MT+
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
The Crossmodal Congruency Task as a Means to Obtain an Objective Behavioral Measure in the Rubber Hand Illusion Paradigm
Authors: Regine Zopf, Greg Savage, Mark A. Williams.
Institutions: Macquarie University.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time then this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm. The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. same finger) on some trials and incongruent (i.e. different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand as well as the synchrony of viewed and felt touch which are both crucial factors for the RHI. The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows in particular for the investigation of multisensory processes which are critical for modulations of body representations as in the RHI.
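The CCE itself is a simple difference score; the sketch below computes it as mean incongruent minus mean congruent reaction time on randomly generated trials, purely to show the bookkeeping.

```python
# Sketch: computing the crossmodal congruency effect (CCE) as the mean RT
# difference between incongruent and congruent trials. Trial data here are
# randomly generated purely to illustrate the computation.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100
congruent = rng.choice([True, False], size=n_trials)          # trial type
rt = np.where(congruent,
              rng.normal(450, 60, n_trials),                  # congruent RTs (ms)
              rng.normal(510, 60, n_trials))                  # incongruent RTs (ms)

cce = rt[~congruent].mean() - rt[congruent].mean()
print(f"Crossmodal congruency effect: {cce:.0f} ms")
```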
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Psychology, Behavior and Behavior Mechanisms, Psychological Phenomena and Processes, Behavioral Sciences, rubber hand illusion, crossmodal congruency task, crossmodal congruency effect, multisensory processing, body ownership, peripersonal space, clinical techniques
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise they would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could be anywhere from 3 items before the target noun position to 3 items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
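The analysis implied by this design is a tally of memory accuracy by serial position relative to the target; the sketch below shows that bookkeeping on invented trial records (the relative positions and responses are placeholders, not data from the study).

```python
# Sketch: summarizing nontarget memory accuracy by serial position relative to
# the target (-3 .. +3 items, excluding the target itself). The trial records
# are invented solely to show the bookkeeping.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
positions = [-3, -2, -1, 1, 2, 3]
# Each record: (position of probed word relative to target, answered correctly?)
trials = [(int(rng.choice(positions)), bool(rng.random() < 0.7)) for _ in range(300)]

accuracy = defaultdict(list)
for rel_pos, correct in trials:
    accuracy[rel_pos].append(correct)

for rel_pos in sorted(accuracy):
    scores = accuracy[rel_pos]
    print(f"position {rel_pos:+d}: {np.mean(scores):.2f} (n={len(scores)})")
```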
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
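One standard way to quantify categorical perception along a morph continuum such as the DHL is to fit a logistic function to the proportion of "human" categorizations at each morph level and take its midpoint as the category boundary. The sketch below does this with SciPy on illustrative data; it is a generic CP analysis, not necessarily the protocol's exact procedure.

```python
# Sketch: estimating a category boundary along a morph continuum (the DHL) by
# fitting a logistic function to the proportion of "human" categorizations at
# each morph level. Data points and the fitting choice are illustrative.
import numpy as np
from scipy.optimize import curve_fit

morph_level = np.linspace(0, 1, 11)          # 0 = fully artificial, 1 = fully human
p_human = np.array([0.02, 0.03, 0.05, 0.10, 0.25, 0.55, 0.80, 0.92, 0.97, 0.99, 1.0])

def logistic(x, x0, k):
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(logistic, morph_level, p_human, p0=[0.5, 10.0])
print(f"Estimated category boundary at morph level {x0:.2f} (slope {k:.1f})")
```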
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables like the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when using the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
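The keyword list mentions minimum-norm estimation; its core computation can be written in a few NumPy lines, shown below with a random placeholder lead field and an assumed regularization value. This is only the underlying linear algebra, not a pipeline for real MRI-based head models.

```python
# Sketch of the core minimum-norm estimate used in EEG source analysis:
# given a lead-field matrix L (sensors x sources) and sensor data d, the
# regularized minimum-norm source estimate is J = L^T (L L^T + lambda^2 I)^-1 d.
# The random lead field and regularization value are placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))    # lead field (forward model)
d = rng.standard_normal(n_sensors)                 # one time sample of EEG data
lam = 0.1                                          # regularization parameter

gram = L @ L.T + lam**2 * np.eye(n_sensors)
J = L.T @ np.linalg.solve(gram, d)                 # minimum-norm source estimate
print("Source estimate shape:", J.shape)
```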
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Stimulating the Lip Motor Cortex with Transcranial Magnetic Stimulation
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Institutions: University of Oxford.
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can be used also to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
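MEP size is conventionally quantified as the peak-to-peak EMG amplitude in a window after the TMS pulse; the sketch below shows that computation on a synthetic lip-EMG trace, with the window and sampling rate as illustrative assumptions.

```python
# Sketch: quantifying a motor evoked potential (MEP) as the peak-to-peak EMG
# amplitude in a window after the TMS pulse. The sampling rate, window, and
# synthetic EMG trace are illustrative.
import numpy as np

sample_rate = 5000.0                       # Hz
t = np.arange(0, 0.1, 1.0 / sample_rate)   # 100 ms of EMG, pulse at t = 0
emg = 5e-6 * np.random.randn(t.size)       # baseline noise (volts)

# Add a synthetic MEP ~15 ms after the pulse
mep_shape = 200e-6 * np.sin(2 * np.pi * 100 * (t - 0.015))
emg += np.where((t > 0.015) & (t < 0.025), mep_shape, 0.0)

window = (t >= 0.010) & (t <= 0.040)       # 10-40 ms post-pulse search window
peak_to_peak = emg[window].max() - emg[window].min()
print(f"MEP amplitude: {peak_to_peak * 1e6:.0f} microvolts")
```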
Behavior, Issue 88, electromyography, motor cortex, motor evoked potential, motor excitability, speech, repetitive TMS, rTMS, virtual lesion, transcranial magnetic stimulation
The Optokinetic Response as a Quantitative Measure of Visual Acuity in Zebrafish
Authors: Donald Joshua Cameron, Faydim Rassamdana, Peony Tam, Kathleen Dang, Carolina Yanez, Saman Ghaemmaghami, Mahsa Iranpour Dehkordi.
Institutions: Western University of Health Sciences.
Zebrafish are a proven model for vision research; however, many of the earlier methods generally focused on larval fish or demonstrated a simple response. More recently, adult visual behavior in zebrafish has become of interest, but methods to measure specific responses are only now emerging. To address this gap, we set out to develop a methodology to repeatedly and accurately utilize the optokinetic response (OKR) to measure visual acuity in adult zebrafish. Here we show that the adult zebrafish's visual acuity can be measured, including both binocular and monocular acuities. Because the fish is not harmed during the procedure, visual acuity can be measured and compared over short or long periods of time. The visual acuity measurements described here can also be done quickly, allowing for high throughput and for additional visual procedures if desired. This type of analysis is conducive to drug intervention studies or investigations of disease progression.
Neuroscience, Issue 80, Zebrafish, Eye Movements, Visual Acuity, optokinetic, behavior, adult
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
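The decoding step of MVPA can be sketched as a cross-validated linear classifier applied to multi-voxel patterns from an auditory region of interest; the example below uses scikit-learn on random placeholder patterns (scikit-learn and all values are assumptions for illustration, not the authors' analysis stack).

```python
# Sketch of the MVPA logic: train a linear classifier on multi-voxel patterns
# from an early auditory region of interest to predict which sound-implying
# video category was being viewed. Voxel patterns here are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials_per_class, n_voxels = 40, 120
labels = np.repeat([0, 1], n_trials_per_class)     # 0 = shattering vase, 1 = howling dog

# Placeholder patterns with a small class difference added
patterns = rng.standard_normal((2 * n_trials_per_class, n_voxels))
patterns[labels == 1] += 0.3

scores = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```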
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
A Low Cost Setup for Behavioral Audiometry in Rodents
Authors: Konstantin Tziridis, Sönke Ahlf, Holger Schulze.
Institutions: University of Erlangen-Nuremberg.
In auditory animal research it is crucial to have precise information about basic hearing parameters of the animal subjects that are involved in the experiments. Such parameters may be physiological response characteristics of the auditory pathway, e.g. via brainstem audiometry (BERA). But these methods allow only indirect and uncertain extrapolations about the auditory percept that corresponds to these physiological parameters. To assess the perceptual level of hearing, behavioral methods have to be used. A potential problem with the use of behavioral methods for the description of perception in animal models is the fact that most of these methods involve some kind of learning paradigm before the subjects can be behaviorally tested, e.g. animals may have to learn to press a lever in response to a sound. As these learning paradigms change perception itself1,2, they will consequently influence any result about perception obtained with these methods, and such results therefore have to be interpreted with caution. Exceptions are paradigms that make use of reflex responses, because here no learning paradigms have to be carried out prior to perceptual testing. One such reflex response is the acoustic startle response (ASR), which can be elicited highly reproducibly with unexpected loud sounds in naïve animals. This ASR in turn can be influenced by preceding sounds, depending on the perceptibility of the preceding stimulus: sounds well above hearing threshold will completely inhibit the amplitude of the ASR; sounds close to threshold will only slightly inhibit the ASR. This phenomenon is called pre-pulse inhibition (PPI)3,4, and the amount of PPI of the ASR depends in a graded manner on the perceptibility of the pre-pulse. PPI of the ASR is therefore well suited to determine behavioral audiograms in naïve, non-trained animals, to determine hearing impairments or even to detect possible subjective tinnitus percepts in these animals. In this paper we demonstrate the use of this method in a rodent model (cf. also ref. 5), the Mongolian gerbil (Meriones unguiculatus), which is a well-known model species for startle response research within the normal human hearing range (e.g. 6).
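The PPI measure itself reduces to a percentage: the sketch below computes PPI of the ASR for a few pre-pulse levels relative to startle-alone trials, on invented amplitudes that only illustrate the calculation.

```python
# Sketch: computing pre-pulse inhibition (PPI) of the acoustic startle response
# as the percentage reduction in startle amplitude relative to startle-alone
# trials, separately for each pre-pulse sound level. Amplitudes are invented
# to illustrate the calculation only.
import numpy as np

startle_alone = np.array([1.02, 0.95, 1.10, 0.98, 1.05])       # arbitrary units
prepulse_trials = {                                             # level (dB SPL) -> amplitudes
    20: np.array([1.00, 0.97, 1.04]),    # near/below threshold: little inhibition
    40: np.array([0.80, 0.76, 0.83]),
    60: np.array([0.55, 0.50, 0.58]),    # well above threshold: strong inhibition
}

baseline = startle_alone.mean()
for level, amps in sorted(prepulse_trials.items()):
    ppi = (1.0 - amps.mean() / baseline) * 100.0
    print(f"pre-pulse {level} dB SPL: PPI = {ppi:.0f}%")
```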
Neuroscience, Issue 68, Physiology, Anatomy, Medicine, otolaryngology, behavior, auditory startle response, pre-pulse inhibition, audiogram, tinnitus, hearing loss
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.
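For readers curious what such abstract-to-video matching can look like in general terms, the sketch below ranks a few method descriptions against an abstract by TF-IDF cosine similarity. This is explicitly not JoVE's algorithm, only a generic text-similarity illustration using scikit-learn.

```python
# Generic illustration of matching an abstract to method descriptions by text
# similarity (TF-IDF vectors + cosine similarity). NOT JoVE's actual algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = "binocular rivalry alternating perception when the two eyes view conflicting stimuli"
video_descriptions = [
    "methods to create binocular rivalry stimuli with mirror stereoscopes and red-blue goggles",
    "protocol for measuring hearing thresholds in rodents with startle responses",
    "eye movement monitoring as an index of memory",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([abstract] + video_descriptions)
similarities = cosine_similarity(matrix[0], matrix[1:]).ravel()

for score, text in sorted(zip(similarities, video_descriptions), reverse=True):
    print(f"{score:.2f}  {text}")
```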