Diagnostic features of emotional expressions are processed preferentially.
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants performed an emotion classification task, a gender discrimination task, or passive viewing. To differentiate fast, potentially reflexive eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region of the face. Furthermore, the eye region attracted even more attention when fearful or neutral faces were shown, whereas more attention was directed toward the mouth for happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from the stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orienting of attention toward them.
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
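The region-of-interest gaze analysis described above reduces to accumulating fixation time per facial region. The sketch below is a minimal illustration; the fixation records and region labels are hypothetical stand-ins, not the authors' data format.

```python
# Hypothetical sketch: total dwell time per facial region of interest (ROI).
# The (roi, duration_ms) tuples stand in for real fixation records.
from collections import defaultdict

def dwell_time_by_roi(fixations):
    """fixations: iterable of (roi_label, duration_ms) pairs."""
    totals = defaultdict(float)
    for roi, duration in fixations:
        totals[roi] += duration
    return dict(totals)

fixations = [("eyes", 310), ("mouth", 120), ("eyes", 250), ("nose", 90)]
print(dwell_time_by_roi(fixations))  # {'eyes': 560.0, 'mouth': 120.0, 'nose': 90.0}
```

Comparing such totals across expression conditions (fearful, happy, neutral) is what reveals the eye-versus-mouth preference reported above.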
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
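The latency measure at the heart of this search paradigm is simply a per-category average of touch latencies. The sketch below uses invented trial records to illustrate how a threat-detection bias would appear in such data.

```python
# Hypothetical sketch of the latency analysis: mean touch latency per
# target category. Trial records and category names are invented.
from statistics import mean

def mean_latency_by_category(trials):
    """trials: list of (target_category, latency_ms) pairs."""
    by_cat = {}
    for category, latency in trials:
        by_cat.setdefault(category, []).append(latency)
    return {category: mean(rts) for category, rts in by_cat.items()}

trials = [("snake", 850), ("flower", 1100), ("snake", 790), ("flower", 1050)]
speeds = mean_latency_by_category(trials)
print(speeds["snake"] < speeds["flower"])  # True: threat targets found faster
```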
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior in order to sort trials into those in which the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if error trials are included in the same analysis as correct trials, the analysis no longer reflects correct performance alone. Thus, in many cases the error trials can themselves be correlated with brain activity. We describe two complementary tasks that are used in our lab to examine the brain during suppression of automatic responses: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed on the affective faces or the facial 'expression' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; hence inhibiting these strong SRs is difficult and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes that typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must then be made to the mirror-opposite position.
Again, we measure behavior by recording participants' eye movements, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials that are indicative of different cognitive processes, and to pinpoint the different neural networks involved.
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
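The trial-sorting step described above — separating correct from error trials before averaging the signal — can be sketched in a few lines. This is an illustrative data layout, not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): average per-trial BOLD
# time courses separately for correct and error trials.
def split_by_accuracy(signals, correct):
    """signals: list of per-trial time courses; correct: list of bools."""
    hits = [s for s, ok in zip(signals, correct) if ok]
    errors = [s for s, ok in zip(signals, correct) if not ok]

    def average(group):
        return [sum(vals) / len(vals) for vals in zip(*group)] if group else []

    return average(hits), average(errors)

bold = [[1.0, 2.0], [3.0, 4.0], [10.0, 10.0]]  # 3 trials, 2 timepoints each
correct_avg, error_avg = split_by_accuracy(bold, [True, True, False])
print(correct_avg, error_avg)  # [2.0, 3.0] [10.0, 10.0]
```

Contrasting the two averaged time courses is what lets error trials be analyzed in their own right, as the text describes.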
Eye Tracking, Cortisol, and a Sleep vs. Wake Consolidation Delay: Combining Methods to Uncover an Interactive Effect of Sleep and Cortisol on Memory
Authors: Kelly A. Bennion, Katherine R. Mickley Steinmetz, Elizabeth A. Kensinger, Jessica D. Payne.
Institutions: Boston College, Wofford College, University of Notre Dame.
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, there is currently a paucity of literature as to how these two factors may interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants' eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects. Together, results showed a direct relation between resting cortisol at encoding and subsequent memory, but only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.
Behavior, Issue 88, attention, consolidation, cortisol, emotion, encoding, glucocorticoids, memory, sleep, stress
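The core analysis above — relating resting cortisol at encoding to subsequent memory, separately for the sleep and wake delays — amounts to a per-condition correlation. The sketch below implements a plain Pearson correlation; the data values are invented purely for illustration.

```python
# Illustrative sketch: Pearson correlation between resting cortisol and
# later recognition accuracy, computed separately per delay condition.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical values: a positive relation after sleep, little after wake.
sleep_r = pearson_r([2.1, 3.5, 4.8, 6.0], [0.60, 0.68, 0.74, 0.81])
wake_r = pearson_r([2.0, 3.6, 4.9, 6.1], [0.70, 0.66, 0.71, 0.68])
print(round(sleep_r, 2), round(wake_r, 2))
```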
Eye Tracking Young Children with Autism
Authors: Noah J. Sasson, Jed T. Elison.
Institutions: University of Texas at Dallas, University of North Carolina at Chapel Hill.
The rise of accessible commercial eye-tracking systems has fueled a rapid increase in their use in psychological and psychiatric research. By providing a direct, detailed and objective measure of gaze behavior, eye-tracking has become a valuable tool for examining abnormal perceptual strategies in clinical populations and has been used to identify disorder-specific characteristics1, promote early identification2, and inform treatment3. In particular, investigators of autism spectrum disorders (ASD) have benefited from integrating eye-tracking into their research paradigms4-7. Eye-tracking has largely been used in these studies to reveal mechanisms underlying impaired task performance8 and abnormal brain functioning9, particularly during the processing of social information1,10-11. While older children and adults with ASD comprise the preponderance of research in this area, eye-tracking may be especially useful for studying young children with the disorder as it offers a non-invasive tool for assessing and quantifying early-emerging developmental abnormalities2,12-13. Implementing eye-tracking with young children with ASD, however, is associated with a number of unique challenges, including issues with compliant behavior resulting from specific task demands and disorder-related psychosocial considerations. In this protocol, we detail methodological considerations for optimizing research design, data acquisition and psychometric analysis while eye-tracking young children with ASD. The provided recommendations are also designed to be more broadly applicable for eye-tracking children with other developmental disabilities. By offering guidelines for best practices in these areas based upon lessons derived from our own work, we hope to help other investigators make sound research design and analysis choices while avoiding common pitfalls that can compromise data acquisition while eye-tracking young children with ASD or other developmental difficulties.
Medicine, Issue 61, eye tracking, autism, neurodevelopmental disorders, toddlers, perception, attention, social cognition
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of nontarget items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB on nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise they would respond 'no'. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could be anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicate the previous finding.
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
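The stream timing in such an RSVP design is straightforward to compute. The sketch below, using the 8-item stream and the 120/240 msec SOAs mentioned above, derives item onset times and the temporal lag between a probed nontarget position and the target position.

```python
# Sketch of RSVP stream timing: item onsets at a fixed SOA, and the lag
# between a probed nontarget position and the target position.
def rsvp_onsets(n_items=8, soa_ms=120):
    """Onset time (ms) of each item in the stream."""
    return [i * soa_ms for i in range(n_items)]

def lag_ms(probe_pos, target_pos, soa_ms):
    """Signed temporal distance of a probed item from the target."""
    return (probe_pos - target_pos) * soa_ms

print(rsvp_onsets(8, 120))  # [0, 120, 240, 360, 480, 600, 720, 840]
print(lag_ms(5, 4, 240))    # 240: probed item appeared 240 ms after the target
print(lag_ms(1, 4, 240))    # -720: probed item appeared 720 ms before it
```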
Driving Simulation in the Clinic: Testing Visual Exploratory Behavior in Daily Life Activities in Patients with Visual Field Defects
Authors: Johanna Hamel, Antje Kraft, Sven Ohl, Sophie De Beukelaer, Heinrich J. Audebert, Stephan A. Brandt.
Institutions: Universitätsmedizin Charité, Universitätsmedizin Charité, Humboldt Universität zu Berlin.
Patients suffering from homonymous hemianopia after infarction of the posterior cerebral artery (PCA) report different degrees of constraint in daily life, despite similar visual deficits. We assume this could be due to variable development of compensatory strategies such as altered visual scanning behavior. Scanning compensatory therapy (SCT) is studied, alongside vision restoration therapy, as part of visual training after infarction. SCT consists of learning to make larger eye movements into the blind field, enlarging the visual field of search, which has been proven to be the most useful strategy1, not only in natural search tasks but also in mastering daily life activities2. Nevertheless, in clinical routine it is difficult to identify individual levels and training effects of compensatory behavior, since this requires measurement of eye movements in a head-unrestrained condition. Studies demonstrated that unrestrained head movements alter visual exploratory behavior compared to a head-restrained laboratory condition3. Martin et al.4 and Hayhoe et al.5 showed that behavior demonstrated in a laboratory setting cannot easily be transferred to a natural condition. Hence, our goal was to develop a study set-up that quickly uncovers different compensatory oculomotor strategies in a realistic testing situation: patients are tested in the clinical environment in a driving simulator. SILAB software (Wuerzburg Institute for Traffic Sciences GmbH (WIVW)) was used to program driving scenarios of varying complexity and to record the driver's performance. The software was combined with a head-mounted infrared video pupil tracker that records head and eye movements (EyeSeeCam, University of Munich Hospital, Clinical Neurosciences). The positioning of the patient in the driving simulator and the positioning, adjustment and calibration of the camera are demonstrated.
Typical performances of a patient with and without a compensatory strategy and of a healthy control are illustrated in this pilot study. Different oculomotor behaviors (frequency and amplitude of eye and head movements) are evaluated very quickly during the drive itself by dynamic overlay pictures indicating where the subject's gaze is located on the screen, and by analyzing the data. Compensatory gaze behavior in a patient leads to a driving performance comparable to that of a healthy control, while the performance of a patient without compensatory behavior is significantly worse. The data on eye and head movement behavior as well as driving performance are discussed with respect to different oculomotor strategies and, in a broader context, with respect to possible training effects throughout the testing session and implications for rehabilitation potential.
Medicine, Issue 67, Neuroscience, Physiology, Anatomy, Ophthalmology, compensatory oculomotor behavior, driving simulation, eye movements, homonymous hemianopia, stroke, visual field defects, visual field enlargement
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
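One ingredient of the feature set above is a measure of angular dispersion of tissue orientations around a candidate node. The paper's exact angular-spread feature is not reproduced here; the sketch below is a generic circular-variance measure for axial orientations, related in spirit only.

```python
# Generic circular-variance sketch for axial orientations (period pi);
# the paper's angular-spread feature is related in spirit but not identical.
import math

def angular_dispersion(thetas):
    """0 for perfectly aligned orientations, near 1 for radiating ones."""
    # Double the angles so theta and theta + pi count as the same orientation.
    c = sum(math.cos(2 * t) for t in thetas) / len(thetas)
    s = sum(math.sin(2 * t) for t in thetas) / len(thetas)
    return 1.0 - math.hypot(c, s)

aligned = angular_dispersion([0.10, 0.12, 0.09])                 # near 0
radiating = angular_dispersion([0.0, math.pi / 4,
                                math.pi / 2, 3 * math.pi / 4])   # near 1
print(aligned < radiating)  # True: spiculated (radiating) patterns disperse more
```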
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
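The simplest building block of the semi-automated approaches in the taxonomy above is intensity thresholding. A toy sketch on a 2D list of intensities, purely for illustration:

```python
# Toy illustration of intensity thresholding, one step of the simplest
# semi-automated segmentation approaches listed above.
def threshold(image, cutoff):
    """image: 2D list of intensities -> binary mask (1 = foreground)."""
    return [[1 if px >= cutoff else 0 for px in row] for row in image]

img = [[10, 200], [180, 30]]
print(threshold(img, 128))  # [[0, 1], [1, 0]]
```

In real volume data the same idea is applied slice by slice (or in 3D), and the resulting mask is what feeds surface rendering and quantification.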
Isolation, Characterization and Comparative Differentiation of Human Dental Pulp Stem Cells Derived from Permanent Teeth by Using Two Different Methods
Authors: Razieh Karamzadeh, Mohamadreza Baghaban Eslaminejad, Reza Aflatoonian.
Institutions: Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran.
Developing wisdom teeth are an easily accessible source of stem cells during adulthood and can be obtained during routine orthodontic treatment. Human dental pulp-derived stem cells (hDPSCs) possess high proliferative potential and multi-lineage differentiation capacity compared to ordinary sources of adult stem cells1-8; therefore, hDPSCs are good candidates for autologous transplantation in tissue engineering and regenerative medicine. Along with these benefits, their mesenchymal stem cell (MSC) features, such as an immunomodulatory effect, make hDPSCs even more valuable, including in the case of allograft transplantation6,9,10. The primary step in using this source of stem cells is therefore to select the best protocol for isolating hDPSCs from pulp tissue. To achieve this goal, it is crucial to investigate the effect of various isolation conditions on different cellular behaviors, such as common surface markers and differentiation capacity. Here we separate human pulp tissue from impacted third molar teeth and then use both existing protocols from the literature for isolating hDPSCs11-13: enzymatic dissociation of pulp tissue (DPSC-ED) or outgrowth from tissue explants (DPSC-OG). We facilitate the isolation methods by using a dental diamond disk. The cells are then characterized in terms of stromal-associated markers (CD73, CD90, CD105 and CD44), hematopoietic/endothelial markers (CD34, CD45 and CD11b), a perivascular marker (CD146), and STRO-1. Afterwards, the two protocols are compared in terms of differentiation potential into odontoblasts by both quantitative polymerase chain reaction (qPCR) and Alizarin Red staining. qPCR was used to assess the expression of mineralization-related genes (alkaline phosphatase, ALP; matrix extracellular phosphoglycoprotein, MEPE; and dentin sialophosphoprotein, DSPP).14
Stem Cell Biology, Issue 69, Medicine, Developmental Biology, Cellular Biology, Bioengineering, Dental pulp tissue, Human third molar, Human dental pulp stem cells, hDPSC, Odontoblasts, Outgrown stem cells, MSC, differentiation
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g. "Tell me what you remember about the person you saw at the bank yesterday."), however such reports can often be unreliable or sensitive to response bias 1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line 2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant, and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e. fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g. button press). 
Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
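Converting the raw X,Y gaze coordinates described above into fixations and saccades is commonly done with a velocity threshold (an I-VT scheme). The sampling rate, units and threshold below are illustrative, not the parameters of the authors' head-mounted system.

```python
# Illustrative velocity-threshold (I-VT) classification of gaze samples;
# sampling rate, units (px/ms) and threshold are invented, not the
# parameters of the authors' eye tracker.
def classify_samples(xs, ys, hz=500, vel_thresh=0.5):
    """Label each inter-sample interval as 'fix' or 'sacc'."""
    dt_ms = 1000.0 / hz
    labels = []
    for i in range(1, len(xs)):
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        labels.append("sacc" if dist / dt_ms > vel_thresh else "fix")
    return labels

gaze_x = [0.0, 0.1, 5.0, 10.0, 10.1]
gaze_y = [0.0, 0.0, 0.0, 0.0, 0.0]
print(classify_samples(gaze_x, gaze_y))  # ['fix', 'sacc', 'sacc', 'fix']
```

Runs of 'fix' labels are then merged into fixations and time-locked to display onsets, as the protocol describes.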
Experimental Measurement of Settling Velocity of Spherical Particles in Unconfined and Confined Surfactant-based Shear Thinning Viscoelastic Fluids
Authors: Sahil Malhotra, Mukul M. Sharma.
Institutions: The University of Texas at Austin.
An experimental study is performed to measure the terminal settling velocities of spherical particles in surfactant-based shear thinning viscoelastic (VES) fluids. The measurements are made for particles settling in unbounded fluids and in fluids between parallel walls. VES fluids over a wide range of rheological properties are prepared and rheologically characterized. The rheological characterization involves steady shear-viscosity and dynamic oscillatory-shear measurements to quantify the viscous and elastic properties, respectively. The settling velocities under unbounded conditions are measured in beakers having diameters at least 25x the diameter of the particles. For measuring settling velocities between parallel walls, two experimental cells with different wall spacings are constructed. Spherical particles of varying sizes are gently dropped into the fluids and allowed to settle. The process is recorded with a high-resolution video camera and the trajectory of the particle is extracted using image analysis software. Terminal settling velocities are calculated from these data. The impact of elasticity on settling velocity in unbounded fluids is quantified by comparing the experimental settling velocity to the settling velocity calculated from the inelastic drag predictions of Renaud et al.1 Results show that the elasticity of fluids can increase or decrease the settling velocity. The magnitude of the reduction/increase is a function of the rheological properties of the fluids and the properties of the particles. Confining walls are observed to cause a retardation effect on settling, and the retardation is measured in terms of wall factors.
Physics, Issue 83, chemical engineering, settling velocity, Reynolds number, shear thinning, wall retardation
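Terminal velocity is obtained from the recorded trajectory as the slope of depth versus time over the terminal (constant-velocity) region, and the wall factor as the ratio of confined to unbounded velocity. The sketch below shows the arithmetic with invented numbers; it is not the authors' analysis code.

```python
# Sketch: terminal velocity as the least-squares slope of depth vs. time
# over the terminal region; wall factor as a velocity ratio.
# All numbers are invented for illustration.
def terminal_velocity(times_s, depths_mm):
    """Least-squares slope (mm/s) of depth against time."""
    n = len(times_s)
    mt = sum(times_s) / n
    md = sum(depths_mm) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times_s, depths_mm))
    den = sum((t - mt) ** 2 for t in times_s)
    return num / den

v_unbounded = terminal_velocity([0, 1, 2, 3], [0, 4, 8, 12])  # 4.0 mm/s
v_confined = terminal_velocity([0, 1, 2, 3], [0, 3, 6, 9])    # 3.0 mm/s
print(v_confined / v_unbounded)  # wall factor 0.75
```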
Combining Computer Game-Based Behavioural Experiments With High-Density EEG and Infrared Gaze Tracking
Authors: Keith J. Yoder, Matthew K. Belmonte.
Institutions: Cornell University, University of Chicago, Manesar, India.
Experimental paradigms are valuable insofar as the timing and other parameters of their stimuli are well specified and controlled, and insofar as they yield data relevant to the cognitive processing that occurs under ecologically valid conditions. These two goals often are at odds, since well controlled stimuli often are too repetitive to sustain subjects' motivation. Studies employing electroencephalography (EEG) are often especially sensitive to this dilemma between ecological validity and experimental control: attaining sufficient signal-to-noise in physiological averages demands large numbers of repeated trials within lengthy recording sessions, limiting the subject pool to individuals with the ability and patience to perform a set task over and over again. This constraint severely limits researchers' ability to investigate younger populations as well as clinical populations associated with heightened anxiety or attentional abnormalities. Even adult, non-clinical subjects may not be able to achieve their typical levels of performance or cognitive engagement: an unmotivated subject for whom an experimental task is little more than a chore is not the same, behaviourally, cognitively, or neurally, as a subject who is intrinsically motivated and engaged with the task. A growing body of literature demonstrates that embedding experiments within video games may provide a way between the horns of this dilemma between experimental control and ecological validity. The narrative of a game provides a more realistic context in which tasks occur, enhancing their ecological validity (Chaytor & Schmitter-Edgecombe, 2003). Moreover, this context provides motivation to complete tasks. In our game, subjects perform various missions to collect resources, fend off pirates, intercept communications or facilitate diplomatic relations. 
In so doing, they also perform an array of cognitive tasks, including a Posner attention-shifting paradigm (Posner, 1980), a go/no-go test of motor inhibition, a psychophysical motion coherence threshold task, the Embedded Figures Test (Witkin, 1950, 1954) and a theory-of-mind (Wimmer & Perner, 1983) task. The game software automatically registers game stimuli and subjects' actions and responses in a log file, and sends event codes to synchronise with physiological data recorders. Thus the game can be combined with physiological measures such as EEG or fMRI, and with moment-to-moment tracking of gaze. Gaze tracking can verify subjects' compliance with behavioural tasks (e.g. fixation) and overt attention to experimental stimuli, and also physiological arousal as reflected in pupil dilation (Bradley et al., 2008). At great enough sampling frequencies, gaze tracking may also help assess covert attention as reflected in microsaccades - eye movements that are too small to foveate a new object, but are as rapid in onset and have the same relationship between angular distance and peak velocity as do saccades that traverse greater distances. The distribution of directions of microsaccades correlates with the (otherwise) covert direction of attention (Hafed & Clark, 2002).
Neuroscience, Issue 46, High-density EEG, ERP, ICA, gaze tracking, computer game, ecological validity
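The microsaccade-direction analysis mentioned above reduces to circular statistics on saccade direction angles. This sketch computes the circular mean direction of a set of (hypothetical) microsaccade angles, the kind of summary that can be related to the covert direction of attention.

```python
# Sketch of the covert-attention readout: circular mean direction of a set
# of microsaccade direction angles (radians). Data are hypothetical.
import math

def mean_direction(angles):
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

# Microsaccades clustered slightly left and right of "up" (pi/2):
print(round(mean_direction([math.pi / 2 - 0.2, math.pi / 2 + 0.2]), 3))  # 1.571
```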
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing or thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would; it does not require explicit, computer-directed training. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
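Modified Stroop tasks of the kind described above are usually scored as a response-time congruency effect: slower responses when a letter appears in a color that conflicts with its trained pairing. A minimal sketch of that scoring; the trial format is purely illustrative, not the article's data structure:

```python
from statistics import mean

def stroop_effect(trials):
    """Mean RT difference (incongruent - congruent), in ms.

    trials: list of dicts with keys 'congruent' (bool) and 'rt' (ms).
    A positive effect suggests the letter-color pairing has been learned.
    """
    con = [t['rt'] for t in trials if t['congruent']]
    inc = [t['rt'] for t in trials if not t['congruent']]
    return mean(inc) - mean(con)
```

Comparing this effect before and after the reading phase gives a behavioral index of learned letter-color associations.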
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
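The keyword list for this article mentions minimum-norm estimation; in its classic linear form (a general sketch of the standard estimator, not necessarily the exact variant used at the London Baby Lab), the sensor data are mapped to cortical source currents through the lead field supplied by the individual or age-specific head model:

```latex
\hat{\mathbf{j}} = \mathbf{R}\,\mathbf{L}^{\top}
  \left(\mathbf{L}\,\mathbf{R}\,\mathbf{L}^{\top} + \lambda^{2}\,\mathbf{C}\right)^{-1}\mathbf{y}
```

Here $\mathbf{y}$ is the vector of EEG channel measurements, $\mathbf{L}$ the lead-field matrix computed from the head model, $\mathbf{R}$ the source covariance prior, $\mathbf{C}$ the sensor noise covariance, $\lambda$ a regularization parameter, and $\hat{\mathbf{j}}$ the estimated source currents. The accuracy of $\mathbf{L}$ is exactly where adult-based standard head models fall short for pediatric data.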
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
How to Detect Amygdala Activity with Magnetoencephalography using Source Imaging
Authors: Nicholas L. Balderston, Douglas H. Schultz, Sylvain Baillet, Fred J. Helmstetter.
Institutions: University of Wisconsin-Milwaukee, Montreal Neurological Institute, McGill University, Medical College of Wisconsin .
In trace fear conditioning, a conditional stimulus (CS) predicts the occurrence of the unconditional stimulus (UCS), which is presented after a brief stimulus-free period (trace interval)1. Because the CS and UCS do not co-occur temporally, the subject must maintain a representation of the CS during the trace interval. In humans, this type of learning requires awareness of the stimulus contingencies in order to bridge the trace interval2-4. However, when a face is used as a CS, subjects can implicitly learn to fear the face even in the absence of explicit awareness. This suggests that there may be additional neural mechanisms capable of maintaining certain types of "biologically-relevant" stimuli during a brief trace interval. Given that the amygdala is involved in trace conditioning, and is sensitive to faces, it is possible that this structure can maintain a representation of a face CS during a brief trace interval. It is challenging to understand how the brain can associate an unperceived face with an aversive outcome when the two stimuli are separated in time, and investigations of this phenomenon face two specific methodological challenges. First, it is difficult to manipulate the subject's awareness of the visual stimuli. One common way to manipulate visual awareness is backward masking, in which a target stimulus is briefly presented (< 30 msec) and immediately followed by an overlapping masking stimulus5; the presentation of the mask renders the target invisible6-8. Second, masking requires very rapid and precise timing, making it difficult to investigate neural responses evoked by masked stimuli using many common approaches. Blood-oxygenation level dependent (BOLD) responses resolve at a timescale too slow for this type of methodology, and real-time recording techniques like electroencephalography (EEG) and magnetoencephalography (MEG) have difficulty recovering signal from deep sources.
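Because masked presentations must be locked to the display's refresh cycle, target durations under 30 msec reduce to a whole number of video frames. A small illustrative helper for planning such timing; the 60 Hz refresh rate is an assumption for the example, not a parameter from the article:

```python
def frames_for(duration_ms, refresh_hz=60):
    """Round a requested duration to a whole number of video frames.

    Masked targets must occupy whole refresh cycles; at 60 Hz a
    sub-30 ms target can span only one or two frames, so the actual
    on-screen duration (second return value) may differ from the
    requested one.
    """
    frame_ms = 1000.0 / refresh_hz
    n = max(1, round(duration_ms / frame_ms))
    return n, n * frame_ms
```

Logging the achieved duration (rather than the requested one) is what allows masked trials to be verified offline against the physiological record.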
However, there have been recent advances in the methods used to localize the neural sources of the MEG signal9-11. By collecting high-resolution MRI images of the subject's brain, it is possible to create a source model based on individual neural anatomy. Using this model to "image" the sources of the MEG signal, it is possible to recover signal from deep subcortical structures like the amygdala and the hippocampus.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Medicine, Physiology, Anatomy, Psychology, Amygdala, Magnetoencephalography, Fear, awareness, masking, source imaging, conditional stimulus, unconditional stimulus, hippocampus, brain, magnetic resonance imaging, MRI, fMRI, imaging, clinical techniques
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Authors: Daniel E. Bradford, Katherine P. Magruder, Rachel A. Korhumel, John J. Curtin.
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) of the orbicularis oculi muscle in response to brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock, relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high-probability (100% cue-contingent shock; certain) threat cues, whereas anxiety is measured via startle potentiation to low-probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of negative affect and of distinct emotional states (fear, anxiety) for use in research on psychopathology, substance use/abuse and affective science broadly. As such, it has been used extensively by clinical scientists interested in the etiology of psychopathology and by affective scientists interested in individual differences in emotion.
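The potentiation score described above is a simple difference in startle magnitude between threat cues and matched no-threat cues, computed separately for the certain (100%) and uncertain (20%) cue sets to index fear and anxiety respectively. A minimal sketch; the list-of-magnitudes format is illustrative, not the article's data layout:

```python
from statistics import mean

def startle_potentiation(threat_mags, no_threat_mags):
    """Startle potentiation as a simple difference score.

    threat_mags / no_threat_mags: EMG startle response magnitudes
    (e.g., in microvolts) to probes presented during threat cues and
    during matched no-threat cues, respectively.
    """
    return mean(threat_mags) - mean(no_threat_mags)
```

Running this once on certain-threat trials and once on uncertain-threat trials yields the task's fear and anxiety indices.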
Behavior, Issue 91, Startle; electromyography; shock; addiction; uncertainty; fear; anxiety; humans; psychophysiology; translational
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected in several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returns to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently eye position is measured (sampling rate), the accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are stimulus properties that influence eye movements and need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
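The fixation-based measures listed above can be computed directly from a trial's fixation sequence. A minimal sketch, assuming fixations have already been mapped to word positions; the tuple format is illustrative, not a format prescribed by the article:

```python
def reading_measures(fixations):
    """Summarise a trial's fixation sequence.

    fixations: list of (word_index, duration_ms) tuples in
    chronological order. A regression is counted whenever a fixation
    lands on an earlier word than the furthest word reached so far.
    """
    n = len(fixations)
    mean_dur = sum(d for _, d in fixations) / n if n else 0.0
    regressions = 0
    furthest = -1
    for word, _ in fixations:
        if word < furthest:
            regressions += 1
        furthest = max(furthest, word)
    return {'n_fixations': n,
            'mean_duration': mean_dur,
            'n_regressions': regressions}
```

Per-word variants of these measures (e.g., gaze duration on a target word) follow the same logic restricted to the word of interest.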
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Brain Imaging Investigation of the Neural Correlates of Emotion Regulation
Authors: Sanda Dolcos, Keen Sung, Ekaterina Denkova, Roger A. Dixon, Florin Dolcos.
Institutions: University of Illinois, Urbana-Champaign; University of Alberta, Edmonton.
The ability to control/regulate emotions is an important coping mechanism in the face of emotionally stressful situations. Although significant progress has been made in understanding conscious/deliberate emotion regulation (ER), less is known about non-conscious/automatic ER and the associated neural correlates. This is in part due to the problems inherent in the unitary concepts of automatic and conscious processing1. Here, we present a protocol that allows investigation of the neural correlates of both deliberate and automatic ER using functional magnetic resonance imaging (fMRI). This protocol allows new avenues of inquiry into various aspects of ER. For instance, the experimental design allows manipulation of the goal to regulate emotion (conscious vs. non-conscious), as well as the intensity of the emotional challenge (high vs. low). Moreover, it allows investigation of both immediate (emotion perception) and long-term effects (emotional memory) of ER strategies on emotion processing. Therefore, this protocol may contribute to better understanding of the neural mechanisms of emotion regulation in healthy behaviour, and to gaining insight into possible causes of deficits in depression and anxiety disorders in which emotion dysregulation is often among the core debilitating features.
Neuroscience, Issue 54, Emotion Suppression, Automatic Emotion Control, Deliberate Emotion Control, Goal Induction, Neuroimaging
Brain Imaging Investigation of the Impairing Effect of Emotion on Cognition
Authors: Gloria Wong, Sanda Dolcos, Ekaterina Denkova, Rajendra Morey, Lihong Wang, Gregory McCarthy, Florin Dolcos.
Institutions: University of Alberta; University of Illinois; Duke University; VA Medical Center; Yale University.
Emotions can impact cognition by exerting both enhancing (e.g., better memory for emotional events) and impairing (e.g., increased emotional distractibility) effects (reviewed in 1). Complementing our recent protocol2 describing a method that allows investigation of the neural correlates of the memory-enhancing effect of emotion (see also 1, 3-5), here we present a protocol that allows investigation of the neural correlates of the detrimental impact of emotion on cognition. The main feature of this method is that it allows identification of reciprocal modulations between activity in a ventral neural system, involved in 'hot' emotion processing (HotEmo system), and a dorsal system, involved in higher-level 'cold' cognitive/executive processing (ColdEx system), which are linked to cognitive performance and to individual variations in behavior (reviewed in 1). Since its initial introduction6, this design has proven particularly versatile and influential in the elucidation of various aspects concerning the neural correlates of the detrimental impact of emotional distraction on cognition, with a focus on working memory (WM), and of coping with such distraction7,11, in both healthy8-11 and clinical participants12-14.
Neuroscience, Issue 60, Emotion-Cognition Interaction, Cognitive/Emotional Interference, Task-Irrelevant Distraction, Neuroimaging, fMRI, MRI
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.