JoVE Visualize
PubMed Article
Eye movements reveal effects of visual content on eye guidance and lexical access during reading.
Normal reading requires eye guidance and activation of lexical representations so that words in text can be identified accurately. However, little is known about how the visual content of text supports eye guidance and lexical activation, and thereby enables normal reading to take place.
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Published: 01-10-2014
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
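Measures like fixation duration, number of fixations, and regressions are typically derived from a parsed sequence of fixations mapped onto word regions. The sketch below (Python) illustrates how first-fixation duration, first-pass gaze duration, total reading time, and regressions into a target word might be computed; the fixation list, pixel word boundaries, and function names are invented for illustration and are not part of the published protocol.

# Each fixation: (onset_ms, duration_ms, x_pixel). Words are defined by
# pixel ranges on a single line of text. Values below are illustrative only.
fixations = [(0, 210, 40), (230, 180, 95), (430, 250, 150),
             (700, 190, 90),            # regression back to word 2
             (910, 220, 160), (1150, 240, 215)]
word_bounds = {1: (0, 60), 2: (61, 120), 3: (121, 180), 4: (181, 240)}

def word_of(x):
    """Return the word number whose pixel range contains x, else None."""
    for w, (lo, hi) in word_bounds.items():
        if lo <= x <= hi:
            return w
    return None

def reading_measures(target):
    """First-fixation duration, first-pass gaze duration, total time, and
    fixation count for one target word, plus regressions back into it."""
    durs = [d for (_, d, x) in fixations if word_of(x) == target]
    first_fix = durs[0] if durs else 0
    total_time = sum(durs)
    # Gaze duration: fixations on the target before the eyes first leave it.
    gaze, left = 0, False
    for (_, d, x) in fixations:
        w = word_of(x)
        if w == target and not left:
            gaze += d
        elif gaze and w != target:
            left = True
    # Regression into the target: the reader had already fixated a later word.
    regressions, max_word = 0, 0
    for (_, _, x) in fixations:
        w = word_of(x)
        if w is None:
            continue
        if w == target and max_word > target:
            regressions += 1
        max_word = max(max_word, w)
    return dict(first_fixation=first_fix, gaze_duration=gaze,
                total_time=total_time, n_fixations=len(durs),
                regressions_in=regressions)

print(reading_measures(target=2))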
22 Related JoVE Articles!
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would, and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
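The modified Stroop task mentioned in the entry above is commonly scored as a congruency effect (mean incongruent minus mean congruent response time), compared before and after the reading intervention. A minimal sketch of that scoring step, with invented response times and trial labels rather than the authors' data or code:

from statistics import mean

# Each trial: (session, condition, reaction_time_ms); values are invented.
trials = [
    ("pre",  "congruent",   612), ("pre",  "incongruent", 625),
    ("pre",  "congruent",   598), ("pre",  "incongruent", 631),
    ("post", "congruent",   590), ("post", "incongruent", 668),
    ("post", "congruent",   602), ("post", "incongruent", 655),
]

def congruency_effect(session):
    """Mean incongruent RT minus mean congruent RT for one session."""
    rt = lambda cond: mean(t[2] for t in trials
                           if t[0] == session and t[1] == cond)
    return rt("incongruent") - rt("congruent")

# A larger effect after reading in color would suggest that letter-color
# associations were learned and now interfere with naming mismatched colors.
print("pre :", congruency_effect("pre"))
print("post:", congruency_effect("post"))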
Heterotopic Heart Transplantation in Mice
Authors: Fengchun Liu, Sang Mo Kang.
Institutions: University of California, San Francisco - UCSF.
Mouse heterotopic heart transplantation has been used widely since it was introduced by Drs. Corry and Russell in 1973. It is particularly valuable for studying rejection and the immune response now that newer transgenic and gene-knockout mice are available and a large number of immunologic reagents have been developed. The heart transplant model is less stringent than skin transplant models, although technically more challenging. We have developed a modified technique and have completed over 1000 successful cases of heterotopic heart transplantation in mice. When making the anastomosis of the ascending aorta and abdominal aorta, two stay sutures are placed joining the proximal and distal apexes of the recipient's abdominal aorta to the donor's ascending aorta; the anastomosis is then completed on both sides of the aorta with continuous 11-0 sutures. The stay sutures make the anastomosis easier, and 11-0 is an ideal suture size for avoiding bleeding and thrombosis. When making the anastomosis of the pulmonary artery and inferior vena cava, two stay sutures are placed at the proximal and distal apexes of the recipient's inferior vena cava and the donor's pulmonary artery. The left wall of the inferior vena cava and the donor's pulmonary artery are closed with a continuous suture run from inside the inferior vena cava after one knot with the proximal apex stay suture; the right wall of the inferior vena cava and the donor's pulmonary artery are then closed with a continuous 10-0 suture run from outside the inferior vena cava. This method is easier to perform because the anastomosis is made from just one side of the inferior vena cava, and 10-0 is the right suture size to avoid bleeding and thrombosis. In this article, we provide details of the technique to supplement the video.
Developmental Biology, Issue 6, Microsurgical Techniques, Heart Transplant, Allograft Rejection Model
Intravitreous Injection for Establishing Ocular Diseases Model
Authors: Kin Chiu, Raymond Chuen-Chung Chang, Kwok-Fai So.
Institutions: The University of Hong Kong - HKU.
Intravitreous injection is a widely used technique in visual science research. It can be used to establish animal models of ocular disease or to apply local treatment directly. This video introduces how to use simple and inexpensive tools to perform the intravitreous injection procedure. A 1 ml syringe is used instead of a Hamilton syringe. Practical tips are given for how to make appropriate injection needles from glass pipettes with perfect tips, and how to easily connect the syringe needle and the glass pipette tightly together. To conduct a good intravitreous injection, three aspects must be observed: 1) the injection site should not disrupt retinal structure; 2) bleeding should be avoided to reduce the risk of infection; 3) the lens should be untouched to avoid traumatic cataract. In brief, the most important point is to minimize disruption of normal ocular structure. To avoid disrupting the retina, the superior nasal region of the rat eye was chosen. Also, the puncture point of the needle was at the pars plana, about 1.5 mm from the limbal region of the rat eye. A small amount of vitreous is gently pushed out through the puncture hole to reduce the intraocular pressure before injection. With a 45° injection angle, traumatic cataract is less likely in the rat eye, avoiding related complications and the influence of lenticular factors. In this operation, there is no cutting of the conjunctiva or ocular muscles and no bleeding. With a quick procedure and minimal injury, a successful intravitreous injection can be done in minutes. The injection set outlined in this particular protocol is specific for intravitreous injection. However, the methods and materials presented here can also be used for other injection procedures for drug delivery to the brain, spinal cord or other organs in small mammals.
Neuroscience, Issue 8, eye, injection, rat
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated to brain activity using the blood-oxygen-level-dependent signal. We measure behavior so that we can sort out correct trials, in which the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if subjects do not perform the task correctly and these trials are included in the same analysis as the correct trials, we would introduce trials that do not reflect correct performance; in many cases these error trials can themselves be used to correlate brain activity with errors. We describe two complementary tasks that are used in our lab to examine the brain during suppression of an automatic response: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the superimposed emotional 'word' across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between two simultaneously competing processes: word reading and facial expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; inhibiting these strong SR associations is difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must then be made to the mirror-opposite position. Here again we measure behavior by recording participants' eye movements, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors on correct and error trials, which are indicative of different cognitive processes, and to pinpoint the different neural networks involved.
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
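Sorting trials by behavioral accuracy, as described in the entry above, in practice means splitting event onsets into separate regressors (correct, error, and omitted responses) before fitting the fMRI model. A minimal sketch under assumed data structures; the trial log, field names, and file format are illustrative, not the authors' pipeline (the three-column onset format shown is one that several fMRI packages accept):

# Hypothetical behavioral log: one entry per trial.
trials = [
    {"onset_s": 10.0, "response": "left",  "correct_answer": "left"},
    {"onset_s": 22.5, "response": "right", "correct_answer": "left"},
    {"onset_s": 35.0, "response": "left",  "correct_answer": "left"},
    {"onset_s": 47.5, "response": None,    "correct_answer": "right"},
]

correct_onsets = [t["onset_s"] for t in trials
                  if t["response"] == t["correct_answer"]]
error_onsets   = [t["onset_s"] for t in trials
                  if t["response"] is not None
                  and t["response"] != t["correct_answer"]]
omission_onsets = [t["onset_s"] for t in trials if t["response"] is None]

# Write simple three-column onset files (onset, duration, weight) so that
# correct and error trials can be modeled as separate event types.
for name, onsets in [("correct", correct_onsets), ("error", error_onsets),
                     ("omission", omission_onsets)]:
    with open(f"{name}_events.txt", "w") as f:
        for onset in onsets:
            f.write(f"{onset:.1f}\t1.0\t1\n")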
Three Dimensional Vestibular Ocular Reflex Testing Using a Six Degrees of Freedom Motion Platform
Authors: Joyce Dits, Mark M.J. Houben, Johannes van der Steen.
Institutions: Erasmus MC, TNO Human Factors.
The vestibular organ is a sensor that measures angular and linear accelerations with six degrees of freedom (6DF). Complete or partial defects in the vestibular organ result in mild to severe equilibrium problems, such as vertigo, dizziness, oscillopsia, gait unsteadiness, nausea and/or vomiting. A good and frequently used measure to quantify gaze stabilization is the gain, which is defined as the magnitude of compensatory eye movements with respect to imposed head movements. To test vestibular function more fully, one has to realize that the 3D VOR ideally generates compensatory ocular rotations not only with a magnitude (gain) equal and opposite to the head rotation but also about an axis that is co-linear with the head rotation axis (alignment). Abnormal vestibular function thus results in changes in gain and changes in alignment of the 3D VOR response. Here we describe a method to measure the 3D VOR using whole-body rotation on a 6DF motion platform. Although the method also allows testing of translational VOR responses1, we limit ourselves to a discussion of the method to measure the 3D angular VOR. In addition, we restrict ourselves here to a description of data collected in healthy subjects in response to angular sinusoidal and impulse stimulation. Subjects sit upright and receive whole-body small-amplitude sinusoidal and constant-acceleration impulses. Sinusoidal stimuli (f = 1 Hz, A = 4°) were delivered about the vertical axis and about axes in the horizontal plane varying between roll and pitch at increments of 22.5° in azimuth. Impulses were delivered in yaw, roll and pitch and in the vertical canal planes. Eye movements were measured using the scleral search coil technique2. Search coil signals were sampled at a frequency of 1 kHz. The input-output ratio (gain) and misalignment (co-linearity) of the 3D VOR were calculated from the eye coil signals3. Gain and co-linearity of the 3D VOR depended on the orientation of the stimulus axis. Systematic deviations were found in particular during horizontal-axis stimulation. In the light, the eye rotation axis was properly aligned with the stimulus axis at orientations of 0° and 90° azimuth, but deviated progressively more towards 45° azimuth. The systematic deviations in misalignment for intermediate axes can be explained by a low gain for torsion (X-axis or roll-axis rotation) and a high gain for vertical eye movements (Y-axis or pitch-axis rotation; see Figure 2). Because intermediate-axis stimulation leads to a compensatory response based on vector summation of the individual eye rotation components, the net response axis will deviate because the gains for the X- and Y-axes are different. In darkness the gain of all eye rotation components had lower values. The result was that the misalignment in darkness and for impulses had different peaks and troughs than in the light: its minimum value was reached for pitch-axis stimulation and its maximum for roll-axis stimulation. Case Presentation: Nine subjects participated in the experiment. All subjects gave their informed consent. The experimental procedure was approved by the Medical Ethics Committee of Erasmus University Medical Center and adhered to the Declaration of Helsinki for research involving human subjects. Six subjects served as controls. Three subjects had a unilateral vestibular impairment due to a vestibular schwannoma. The age of the control subjects (six males and three females) ranged from 22 to 55 years.
None of the controls had visual or vestibular complaints due to neurological, cardiovascular or ophthalmic disorders. The age of the patients with schwannoma varied between 44 and 64 years (two males and one female). All schwannoma subjects were under medical surveillance and/or had received treatment by a multidisciplinary team consisting of an otorhinolaryngologist and a neurosurgeon of the Erasmus University Medical Center. Tested patients all had a right-side vestibular schwannoma and underwent a wait-and-watch policy (Table 1; subjects N1-N3) after being diagnosed with vestibular schwannoma. Their tumors had been stable for over 8-10 years on magnetic resonance imaging.
Neurobiology, Issue 75, Neuroscience, Medicine, Anatomy, Physiology, Biomedical Engineering, Ophthalmology, vestibulo ocular reflex, eye movements, torsion, balance disorders, rotation translation, equilibrium, eye rotation, motion, body rotation, vestibular organ, clinical techniques
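The entry above defines gain as the magnitude of the compensatory eye rotation relative to the imposed head rotation, and misalignment as the deviation of the eye rotation axis from co-linearity with the head rotation axis. A minimal numerical sketch of those two quantities computed from angular-velocity vectors; the vectors are invented and the calculation is a simplified reading of the definitions, not the authors' coil-signal processing:

import numpy as np

# Hypothetical mean angular-velocity vectors (deg/s) in head coordinates
# during one stimulus cycle: an ideal VOR would make omega_eye = -omega_head.
omega_head = np.array([0.0, 10.0, 0.0])      # pure pitch-axis rotation
omega_eye  = np.array([0.8, -8.5, 0.5])      # compensatory eye rotation

gain = np.linalg.norm(omega_eye) / np.linalg.norm(omega_head)

# Misalignment: angle between the head axis and the inverted eye axis.
u = -omega_eye / np.linalg.norm(omega_eye)
v = omega_head / np.linalg.norm(omega_head)
misalignment_deg = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

print(f"gain = {gain:.2f}, misalignment = {misalignment_deg:.1f} deg")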
How to Create and Use Binocular Rivalry
Authors: David Carmel, Michael Arcaro, Sabine Kastner, Uri Hasson.
Institutions: New York University, Princeton University.
Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations where one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes and prism goggles.
Neuroscience, Issue 45, Binocular rivalry, continuous flash suppression, vision, visual awareness, perceptual competition, unconscious processing, neuroimaging
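One of the inexpensive methods mentioned above, red-blue (anaglyph) goggles, presents rivalrous images by drawing one eye's pattern in the red channel and the other eye's pattern in the green/blue channels of a single picture. A minimal sketch that builds a classic rivalry pair, orthogonal gratings, as an anaglyph image; the grating parameters, filename, and use of NumPy/Pillow are assumptions for illustration only:

import numpy as np
from PIL import Image

size, cycles = 512, 8                      # image size (pixels), grating cycles
x = np.linspace(0, 2 * np.pi * cycles, size)
vertical   = 0.5 + 0.5 * np.sin(x)[np.newaxis, :].repeat(size, axis=0)
horizontal = 0.5 + 0.5 * np.sin(x)[:, np.newaxis].repeat(size, axis=1)

# One grating goes in the red channel, the orthogonal grating in the
# green/blue channels, so each filter delivers a different image to its eye.
rgb = np.zeros((size, size, 3), dtype=np.uint8)
rgb[..., 0] = (vertical * 255).astype(np.uint8)
rgb[..., 1] = (horizontal * 255).astype(np.uint8)
rgb[..., 2] = (horizontal * 255).astype(np.uint8)

Image.fromarray(rgb, mode="RGB").save("rivalry_anaglyph.png")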
Driving Simulation in the Clinic: Testing Visual Exploratory Behavior in Daily Life Activities in Patients with Visual Field Defects
Authors: Johanna Hamel, Antje Kraft, Sven Ohl, Sophie De Beukelaer, Heinrich J. Audebert, Stephan A. Brandt.
Institutions: Universitätsmedizin Charité, Humboldt Universität zu Berlin.
Patients suffering from homonymous hemianopia after infarction of the posterior cerebral artery (PCA) report different degrees of constraint in daily life, despite similar visual deficits. We assume this could be due to variable development of compensatory strategies such as altered visual scanning behavior. Scanning compensatory therapy (SCT) is studied as part of visual training after infarction, alongside vision restoration therapy. SCT consists of learning to make larger eye movements into the blind field, enlarging the visual field of search, which has been proven to be the most useful strategy1, not only in natural search tasks but also in mastering daily life activities2. Nevertheless, in clinical routine it is difficult to identify individual levels and training effects of compensatory behavior, since this requires measurement of eye movements in a head-unrestrained condition. Studies demonstrated that unrestrained head movements alter visual exploratory behavior compared to a head-restrained laboratory condition3. Martin et al.4 and Hayhoe et al.5 showed that behavior demonstrated in a laboratory setting cannot be assigned easily to a natural condition. Hence, our goal was to develop a study set-up which uncovers different compensatory oculomotor strategies quickly in a realistic testing situation: patients are tested in the clinical environment in a driving simulator. SILAB software (Wuerzburg Institute for Traffic Sciences GmbH (WIVW)) was used to program driving scenarios of varying complexity and to record the driver's performance. The software was combined with a head-mounted infrared video pupil tracker recording head and eye movements (EyeSeeCam, University of Munich Hospital, Clinical Neurosciences). The positioning of the patient in the driving simulator and the positioning, adjustment and calibration of the camera are demonstrated. Typical performances of a patient with and one without a compensatory strategy and of a healthy control are illustrated in this pilot study. Different oculomotor behaviors (frequency and amplitude of eye and head movements) are evaluated very quickly during the drive itself by dynamic overlay pictures indicating where the subject's gaze is located on the screen, and by analyzing the data. Compensatory gaze behavior in a patient leads to a driving performance comparable to that of a healthy control, while the performance of a patient without compensatory behavior is significantly worse. The data on eye- and head-movement behavior as well as driving performance are discussed with respect to different oculomotor strategies and, in a broader context, with respect to possible training effects throughout the testing session and implications for rehabilitation potential.
Medicine, Issue 67, Neuroscience, Physiology, Anatomy, Ophthalmology, compensatory oculomotor behavior, driving simulation, eye movements, homonymous hemianopia, stroke, visual field defects, visual field enlargement
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g. "Tell me what you remember about the person you saw at the bank yesterday."), however such reports can often be unreliable or sensitive to response bias 1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line 2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant, and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e. fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g. button press). Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
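One memory effect described in the entry above is that viewers who remember a studied image direct disproportionate viewing to the region that has been altered in a manipulated version. A minimal sketch of that region-of-interest comparison; the fixation lists, rectangular critical region, and condition labels are hypothetical placeholders rather than data from the protocol:

# Each fixation: (x, y, duration_ms). Critical region: the part of the image
# that was altered between study and test (coordinates are invented).
critical_region = (300, 150, 420, 260)          # x_min, y_min, x_max, y_max

fixations_by_condition = {
    "novel":       [(100, 80, 220), (350, 200, 180), (500, 300, 240)],
    "repeated":    [(120, 90, 210), (340, 190, 150), (480, 310, 260)],
    "manipulated": [(330, 180, 300), (360, 220, 280), (140, 100, 150)],
}

def proportion_in_region(fixations, region):
    """Proportion of viewing time spent inside the critical region."""
    x0, y0, x1, y1 = region
    in_region = sum(d for (x, y, d) in fixations
                    if x0 <= x <= x1 and y0 <= y <= y1)
    return in_region / sum(d for (_, _, d) in fixations)

for condition, fixations in fixations_by_condition.items():
    print(condition, round(proportion_in_region(fixations, critical_region), 2))

# A higher proportion for manipulated images than for novel or repeated ones
# is taken as an eye-movement index of memory for the altered detail.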
Psychophysiological Stress Assessment Using Biofeedback
Authors: Inna Khazan.
Institutions: Cambridge Health Alliance, Harvard Medical School.
In the last half century, research in biofeedback has shown the extent to which the human mind can influence the functioning of the autonomic nervous system, previously thought to be outside of conscious control. By letting people observe signals from their own bodies, biofeedback enables them to develop greater awareness of their physiological and psychological reactions, such as stress, and to learn to modify these reactions. Biofeedback practitioners can facilitate this process by assessing people's reactions to mildly stressful events and formulating a biofeedback-based treatment plan. During stress assessment the practitioner first records a baseline for physiological readings, and then presents the client with several mild stressors, such as a cognitive, physical and emotional stressor. A variety of stressors is presented in order to determine a person's stimulus-response specificity, or differences in each person's reaction to qualitatively different stimuli. This video will demonstrate the process of psychophysiological stress assessment using biofeedback and present general guidelines for treatment planning.
Neuroscience, Issue 29, Stress, biofeedback, psychophysiological, assessment
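Stimulus-response specificity, as described above, is typically read off by comparing each physiological channel's level during a stressor against the preceding baseline. A minimal sketch of that baseline-versus-stressor comparison; the channel names, units, and values are invented for illustration and do not come from the protocol:

from statistics import mean

# Hypothetical per-second readings for a baseline period and three stressors.
recordings = {
    "baseline":           {"heart_rate_bpm": [68, 70, 69], "scl_uS": [4.1, 4.2, 4.1],
                           "resp_rate_bpm": [13, 12, 13]},
    "cognitive_stressor": {"heart_rate_bpm": [78, 80, 79], "scl_uS": [4.4, 4.5, 4.6],
                           "resp_rate_bpm": [15, 16, 15]},
    "physical_stressor":  {"heart_rate_bpm": [85, 87, 88], "scl_uS": [4.3, 4.3, 4.4],
                           "resp_rate_bpm": [18, 19, 18]},
    "emotional_stressor": {"heart_rate_bpm": [72, 73, 72], "scl_uS": [5.2, 5.5, 5.6],
                           "resp_rate_bpm": [14, 14, 15]},
}

baseline = {ch: mean(vals) for ch, vals in recordings["baseline"].items()}
for condition, channels in recordings.items():
    if condition == "baseline":
        continue
    # Reactivity = change from baseline; the channel with the largest change
    # for a given stressor suggests that person's characteristic response.
    reactivity = {ch: round(mean(vals) - baseline[ch], 2)
                  for ch, vals in channels.items()}
    print(condition, reactivity)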
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to also be challenging for children with ASD.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
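In the preferential-looking measure described in the entry above, frame-by-frame gaze codes (matching screen, non-matching screen, away) are converted into the proportion of looking directed to the video that matches the linguistic audio, compared between baseline and test trials. A minimal sketch of that scoring step; the frame codes, frame rate, and trial labels are hypothetical stand-ins for hand-coded video:

FRAME_MS = 33  # assumed coding resolution (~30 frames per second)

# Hand-coded gaze per video frame: "match" = looking at the screen that
# matches the test audio, "nonmatch" = the other screen, "away" = neither.
trials = {
    "baseline": ["match", "nonmatch", "nonmatch", "away", "match",
                 "nonmatch", "match", "nonmatch"],
    "test":     ["nonmatch", "match", "match", "match", "away",
                 "match", "match", "nonmatch"],
}

def looking_measures(frames):
    """Proportion of on-screen looking directed to the matching video, and
    latency (ms) of the first look to it."""
    on_screen = [f for f in frames if f != "away"]
    proportion = on_screen.count("match") / len(on_screen)
    latency = frames.index("match") * FRAME_MS
    return proportion, latency

for trial, frames in trials.items():
    p, lat = looking_measures(frames)
    print(f"{trial}: proportion to match = {p:.2f}, first look at {lat} ms")

# Comprehension is inferred when looking to the matching video is reliably
# greater during test trials than during the nondirecting baseline trials.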
Techniques for Processing Eyes Implanted With a Retinal Prosthesis for Localized Histopathological Analysis
Authors: David A. X. Nayagam, Ceara McGowan, Joel Villalobos, Richard A. Williams, Cesar Salinas-LaRosa, Penny McKelvie, Irene Lo, Meri Basa, Justin Tan, Chris E. Williams.
Institutions: Bionics Institute, St Vincent's Hospital Melbourne, University of Melbourne.
With the recent development of retinal prostheses, it is important to develop reliable techniques for assessing the safety of these devices in preclinical studies. However, the standard fixation, preparation, and automated histology procedures are not ideal. Here we describe new procedures for evaluating the health of the retina directly adjacent to an implant. Retinal prostheses feature electrode arrays in contact with eye tissue. Previous methods have not been able to spatially localize the ocular tissue adjacent to individual electrodes within the array. In addition, standard histological processing often results in gross artifactual detachment of the retinal layers when assessing implanted eyes. Consequently, it has been difficult to assess localized damage, if present, caused by implantation and stimulation of an implanted electrode array. Therefore, we developed a method for identifying and localizing the ocular tissue adjacent to implanted electrodes using a (color-coded) dye marking scheme, and we modified an eye fixation technique to minimize artifactual retinal detachment. This method also rendered the sclera translucent, enabling localization of individual electrodes and specific parts of an implant. Finally, we used a matched control to increase the power of the histopathological assessments. In summary, this method enables reliable and efficient discrimination and assessment of the retinal cytoarchitecture in an implanted eye.
Medicine, Issue 78, Anatomy, Physiology, Biomedical Engineering, Bioengineering, Surgery, Ophthalmology, Pathology, Tissue Engineering, Prosthesis Implantation, Implantable Neurostimulators, Implants, Experimental, Histology, bionics, Retina, Prosthesis, Bionic Eye, Retinal, Implant, Suprachoroidal, Fixation, Localization, Safety, Preclinical, dissection, embedding, staining, tissue, surgical techniques, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
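The minimum-norm estimation named in the keywords above reconstructs cortical source amplitudes s from scalp data x given a forward (leadfield) matrix L from the head model, typically as s_hat = Lᵀ(L Lᵀ + λI)⁻¹ x. A minimal NumPy sketch of that regularized inverse on random numbers; the matrix sizes and regularization choice are arbitrary assumptions, and a real pipeline (individual MRI head model, noise covariance, preprocessing) involves many additional steps:

import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 128, 5000            # high-density EEG, cortical grid

L = rng.standard_normal((n_channels, n_sources))   # leadfield (from the head model)
x = rng.standard_normal(n_channels)                 # one time sample of EEG data
lam = 0.1 * np.trace(L @ L.T) / n_channels          # simple regularization choice

# Minimum-norm estimate: s_hat = L.T @ inv(L @ L.T + lam * I) @ x,
# computed with a linear solve instead of an explicit inverse.
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), x)

print(s_hat.shape)          # one amplitude estimate per cortical source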
Combining Computer Game-Based Behavioural Experiments With High-Density EEG and Infrared Gaze Tracking
Authors: Keith J. Yoder, Matthew K. Belmonte.
Institutions: Cornell University, University of Chicago, Manesar, India.
Experimental paradigms are valuable insofar as the timing and other parameters of their stimuli are well specified and controlled, and insofar as they yield data relevant to the cognitive processing that occurs under ecologically valid conditions. These two goals often are at odds, since well controlled stimuli often are too repetitive to sustain subjects' motivation. Studies employing electroencephalography (EEG) are often especially sensitive to this dilemma between ecological validity and experimental control: attaining sufficient signal-to-noise in physiological averages demands large numbers of repeated trials within lengthy recording sessions, limiting the subject pool to individuals with the ability and patience to perform a set task over and over again. This constraint severely limits researchers' ability to investigate younger populations as well as clinical populations associated with heightened anxiety or attentional abnormalities. Even adult, non-clinical subjects may not be able to achieve their typical levels of performance or cognitive engagement: an unmotivated subject for whom an experimental task is little more than a chore is not the same, behaviourally, cognitively, or neurally, as a subject who is intrinsically motivated and engaged with the task. A growing body of literature demonstrates that embedding experiments within video games may provide a way between the horns of this dilemma between experimental control and ecological validity. The narrative of a game provides a more realistic context in which tasks occur, enhancing their ecological validity (Chaytor & Schmitter-Edgecombe, 2003). Moreover, this context provides motivation to complete tasks. In our game, subjects perform various missions to collect resources, fend off pirates, intercept communications or facilitate diplomatic relations. In so doing, they also perform an array of cognitive tasks, including a Posner attention-shifting paradigm (Posner, 1980), a go/no-go test of motor inhibition, a psychophysical motion coherence threshold task, the Embedded Figures Test (Witkin, 1950, 1954) and a theory-of-mind (Wimmer & Perner, 1983) task. The game software automatically registers game stimuli and subjects' actions and responses in a log file, and sends event codes to synchronise with physiological data recorders. Thus the game can be combined with physiological measures such as EEG or fMRI, and with moment-to-moment tracking of gaze. Gaze tracking can verify subjects' compliance with behavioural tasks (e.g. fixation) and overt attention to experimental stimuli, and also physiological arousal as reflected in pupil dilation (Bradley et al., 2008). At great enough sampling frequencies, gaze tracking may also help assess covert attention as reflected in microsaccades - eye movements that are too small to foveate a new object, but are as rapid in onset and have the same relationship between angular distance and peak velocity as do saccades that traverse greater distances. The distribution of directions of microsaccades correlates with the (otherwise) covert direction of attention (Hafed & Clark, 2002).
Neuroscience, Issue 46, High-density EEG, ERP, ICA, gaze tracking, computer game, ecological validity
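Microsaccades, as discussed above, are commonly detected from gaze samples with a velocity-threshold approach in the spirit of Engbert and Kliegl: compute eye velocity, set a threshold at a multiple of a robust estimate of the velocity noise, and keep supra-threshold runs of a minimum duration. A minimal sketch of that idea on a synthetic trace; the sampling rate, threshold multiplier, and signal are assumptions, and published implementations add refinements (2D elliptical thresholds, binocular agreement) omitted here:

import numpy as np

FS = 500.0                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)

# Synthetic horizontal gaze trace (deg): fixation noise plus one tiny step.
x = np.cumsum(rng.normal(0, 0.002, 1000))
x[400:420] += np.linspace(0, 0.4, 20)        # ~0.4 deg microsaccade-like ramp
x[420:] += 0.4                               # eye stays at the new position

vel = np.gradient(x) * FS                    # velocity in deg/s
sigma = np.median(np.abs(vel - np.median(vel))) * 1.4826   # robust SD (MAD)
threshold = 6 * sigma                        # multiple of the noise level

above = np.abs(vel) > threshold
events, start = [], None
for i, flag in enumerate(above):
    if flag and start is None:
        start = i
    elif not flag and start is not None:
        if (i - start) / FS >= 0.006:        # minimum duration of 6 ms
            events.append((start / FS, (i - start) / FS))
        start = None

print("candidate microsaccades (onset_s, duration_s):", events)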
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Authors: Jillian Nguyen, Thomas V. Papathomas, Jay H. Ravaliya, Elizabeth B. Torres.
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients
Authors: Noa Raz, Michal Hallak, Tamir Ben-Hur, Netta Levin.
Institutions: Hadassah Hebrew-University Medical Center.
In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination in the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along the visual pathways. These include the Object From Motion (OFM) extraction and Time-constrained Stereo protocols. In the OFM test, an array of dots composes an object: the dots within the object's image move rightward while the dots outside the image move leftward, or vice versa. The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along the visual pathways. These protocols may be efficient for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of the currently unexplained continued visual complaints of patients following recovery of visual acuity. In the longitudinal follow-up course, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS.
Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination
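In the Object From Motion display described above, the object is defined only by relative motion: dots falling inside the object's silhouette drift one way while background dots drift the other, so the shape vanishes when the dots stop. A minimal sketch of one animation step for such a display; the dot count, speed, and circular 'object' mask are illustrative assumptions, not the authors' stimulus code:

import numpy as np

rng = np.random.default_rng(2)
WIDTH, HEIGHT, N_DOTS, SPEED = 800, 600, 400, 2.0      # pixels, dots, px/frame

# Hidden object: a circle in the middle of the display (any silhouette works).
center, radius = np.array([400.0, 300.0]), 120.0
dots = rng.uniform([0, 0], [WIDTH, HEIGHT], size=(N_DOTS, 2))

def step(dots):
    """Move dots inside the object rightward and all other dots leftward,
    wrapping around the display edges; the object is visible only in motion."""
    inside = np.linalg.norm(dots - center, axis=1) < radius
    dx = np.where(inside, SPEED, -SPEED)
    new = dots.copy()
    new[:, 0] = (new[:, 0] + dx) % WIDTH
    return new

for _ in range(60):            # 60 frames; in an experiment each frame is
    dots = step(dots)          # drawn to the screen before the next step
print("dots inside object on last frame:",
      int((np.linalg.norm(dots - center, axis=1) < radius).sum()))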
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
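The ~10-30 nm precision quoted above comes from localizing each single-molecule spot, whose precision scales roughly as the spot width divided by the square root of the collected photons (ignoring pixelation and background). A minimal sketch that simulates one spot and localizes it with an intensity-weighted centroid; the pixel size, photon count, and spot width are invented, and real FPALM analysis uses more elaborate fitting:

import numpy as np

rng = np.random.default_rng(3)
PIXEL_NM, SPOT_SIGMA_NM, N_PHOTONS = 100.0, 130.0, 800

# Simulate one fluorophore: photons drawn from a 2D Gaussian around its
# true position, then binned into camera pixels.
true_xy = np.array([1530.0, 1210.0])                      # nm
photons = rng.normal(true_xy, SPOT_SIGMA_NM, size=(N_PHOTONS, 2))
img, _, _ = np.histogram2d(photons[:, 0], photons[:, 1],
                           bins=32, range=[[0, 3200], [0, 3200]])

# Intensity-weighted centroid, converted back to nanometers (pixel centers).
ix, iy = np.indices(img.shape)        # ix indexes x bins, iy indexes y bins
centers = (np.arange(32) + 0.5) * PIXEL_NM
est_x = (img * centers[ix]).sum() / img.sum()
est_y = (img * centers[iy]).sum() / img.sum()

print("localization error (nm):",
      np.round(np.hypot(est_x - true_xy[0], est_y - true_xy[1]), 1))
print("approx. precision (nm):", round(SPOT_SIGMA_NM / np.sqrt(N_PHOTONS), 1))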
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
A Novel RFP Reporter to Aid in the Visualization of the Eye Imaginal Disc in Drosophila
Authors: Aamna K. Kaul, Joseph M. Bateman.
Institutions: King's College London.
The Drosophila eye is a powerful model system for studying areas such as neurogenesis, signal transduction and neurodegeneration. Many of the discoveries made using this system have taken advantage of the spatiotemporal nature of photoreceptor differentiation in the developing eye imaginal disc. To use this system it is first necessary for the researcher to learn to identify and dissect the eye disc. We describe a novel RFP reporter to aid in the identification of the eye disc and the visualization of specific cell types in the developing eye. We detail a methodology for dissection of the eye imaginal disc from third instar larvae and describe how the eye-RFP reporter can aid in this dissection. This eye-RFP reporter is only expressed in the eye and can be visualized using fluorescence microscopy either in live tissue or after fixation without the need for signal amplification. We also show how this reporter can be used to identify specific cells types within the eye disc. This protocol and the use of the eye-RFP reporter will aid researchers using the Drosophila eye to address fundamentally important biological questions.
Cellular Biology, Issue 34, fluorescence microscopy, Drosophila, eye, RFP, dissection, imaginal disc
In Vitro Nuclear Assembly Using Fractionated Xenopus Egg Extracts
Authors: Marie Cross, Maureen Powers.
Institutions: Emory University.
Nuclear membrane assembly is an essential step in the cell division cycle; this process can be replicated in the test tube by combining Xenopus sperm chromatin, cytosol, and light membrane fractions. Complete nuclei are formed, including nuclear membranes with pore complexes, and these reconstituted nuclei are capable of normal nuclear processes.
Cellular Biology, Issue 19, Current Protocols Wiley, Xenopus Egg Extracts, Nuclear Assembly, Nuclear Membrane
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
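The cross-modal twist described above is that a pattern classifier is trained on one set of trials and tested on an independent set, so that above-chance accuracy implies a shared, content-specific activity pattern. A minimal scikit-learn sketch of that train-on-one-set, test-on-the-other logic using synthetic 'voxel' patterns; the data, labels, and classifier choice are illustrative assumptions rather than the published analysis:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_voxels = 200

def make_trials(n_per_class, shift):
    """Synthetic voxel patterns for two stimulus categories; 'shift' adds a
    weak, consistent difference between categories shared across datasets."""
    a = rng.normal(0, 1, (n_per_class, n_voxels)) + shift
    b = rng.normal(0, 1, (n_per_class, n_voxels)) - shift
    X = np.vstack([a, b])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

shared_shift = rng.normal(0, 0.25, n_voxels)      # category-specific pattern
X_train, y_train = make_trials(40, shared_shift)  # e.g., one stimulus set
X_test,  y_test  = make_trials(40, shared_shift)  # independent test trials

clf = SVC(kernel="linear").fit(X_train, y_train)
print("cross-dataset decoding accuracy:", clf.score(X_test, y_test))
# Above-chance accuracy indicates that the two datasets share a
# category-specific activity pattern, the logic behind cross-modal MVPA.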
Multifocal Electroretinograms
Authors: Donnell J. Creel.
Institutions: University of Utah.
A limitation of traditional full-field electroretinograms (ERG) for the diagnosis of retinopathy is lack of sensitivity. Generally, ERG results are normal unless more than approximately 20% of the retina is affected. In practical terms, a patient might be legally blind as a result of macular degeneration or other scotomas and still appear normal according to traditional full-field ERG. An important development in ERGs is the multifocal ERG (mfERG). Erich Sutter adapted the mathematical sequences called binary m-sequences, enabling an electroretinogram representing less than each square millimeter of retina responding to a visual stimulus to be isolated from a single electrical signal1. Results generated by mfERG appear similar to those generated by flash ERG; in contrast to flash ERG, however, which best generates data appropriate for whole-eye disorders, mfERG resolves local retinal responses. The basic mfERG result is based on the calculated mathematical average of an approximation of the positive deflection component of the traditional ERG response, known as the b-wave1. Multifocal ERG programs measure electrical activity from more than a hundred retinal areas per eye in a few minutes. The enhanced spatial resolution enables scotomas and retinal dysfunction to be mapped and quantified. In the protocol below, we describe the recording of mfERGs using a bipolar speculum contact lens. Components of mfERG systems vary between manufacturers. For the presentation of the visual stimulus, some suitable CRT monitors are available, but most systems have adopted the use of flat-panel liquid crystal displays (LCD). The visual stimuli depicted here were produced by an LCD microdisplay subtending 35 - 40 degrees horizontally and 30 - 35 degrees vertically of visual field, and calibrated to produce multifocal flash intensities of 2.7 cd s m-2. Amplification was 50K. Lower and upper bandpass limits were 10 and 300 Hz. The software packages used were VERIS versions 5 and 6.
Medicine, Issue 58, Multifocal electroretinogram, mfERG, electroretinogram, ERG
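The binary m-sequences mentioned above are maximal-length pseudorandom sequences generated by a linear-feedback shift register; in mfERG each stimulus patch follows a time-shifted copy of the same sequence so that its local response can be recovered by cross-correlation. A minimal sketch of generating such a sequence; the register length and tap positions are one standard textbook choice, not necessarily those used by VERIS:

def m_sequence(taps, n_bits):
    """Binary m-sequence from a Fibonacci LFSR. 'taps' lists the register
    stages (1-indexed) that are XORed to form the feedback bit."""
    state = [1] * n_bits                      # any nonzero seed works
    length = 2 ** n_bits - 1                  # period of a maximal sequence
    out = []
    for _ in range(length):
        out.append(state[-1])                 # output the last stage
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]       # shift, insert feedback at stage 1
    return out

# Tapping stages 7 and 6 is a standard maximal-length choice for a 7-stage
# register, giving a period of 2^7 - 1 = 127 frames.
seq = m_sequence(taps=(7, 6), n_bits=7)
print(len(seq), sum(seq))                     # 127 frames, 64 ones / 63 zeros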

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
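A common way to implement this kind of abstract-to-video matching is to represent each text as a TF-IDF vector and rank videos by cosine similarity to the abstract. Whether JoVE's production system works this way is not stated here, so the sketch below is only an illustration of the general idea, with invented video descriptions:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = ("Eye movements reveal effects of visual content on eye guidance "
            "and lexical access during reading.")
video_descriptions = {                      # invented stand-ins for a library
    "Eye tracking during text comprehension": "eye tracking reading fixation",
    "Heterotopic heart transplantation":      "mouse heart transplant surgery",
    "Binocular rivalry methods":              "binocular rivalry visual awareness",
}

texts = [abstract] + list(video_descriptions.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

ranked = sorted(zip(video_descriptions, scores), key=lambda p: -p[1])
for title, score in ranked:
    print(f"{score:.2f}  {title}")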

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.