Stimulation of the human visual cortex produces a transient perception of light, known as a phosphene. Phosphenes are induced by invasive electrical stimulation of the occipital cortex, but also by non-invasive Transcranial Magnetic Stimulation (TMS)1 of the same cortical regions. The intensity at which a phosphene is induced (the phosphene threshold) is a well-established measure of visual cortical excitability and is used to study cortico-cortical interactions, functional organization2, susceptibility to pathology3,4, and visual processing5-7. Phosphenes are typically defined by three characteristics: they are observed in the visual hemifield contralateral to stimulation; they are induced whether the subject's eyes are open or closed; and their spatial location changes with the direction of gaze2. Various methods have been used to document phosphenes, but a standardized methodology is lacking. We demonstrate a reliable procedure to obtain phosphene threshold values and introduce a novel system for the documentation and analysis of phosphenes. We developed the Laser Tracking and Painting (LTaP) system, a low-cost, easily built and operated system that records the location and size of perceived phosphenes in real time. The LTaP system provides a stable and customizable environment for the quantification and analysis of phosphenes.
Proton Transfer and Protein Conformation Dynamics in Photosensitive Proteins by Time-resolved Step-scan Fourier-transform Infrared Spectroscopy
Institutions: Freie Universität Berlin.
Monitoring the dynamics of protonation and protein backbone conformation changes during the function of a protein is an essential step towards understanding its mechanism. Protonation and conformational changes affect the vibration pattern of amino acid side chains and of the peptide bond, respectively, both of which can be probed by infrared (IR) difference spectroscopy. For proteins whose function can be repetitively and reproducibly triggered by light, it is possible to obtain infrared difference spectra with (sub)microsecond resolution over a broad spectral range using the step-scan Fourier transform infrared technique. With ~10² repetitions of the photoreaction, the minimum number needed to complete a scan at reasonable spectral resolution and bandwidth, the noise level in the absorption difference spectra can be as low as ~10⁻⁴, sufficient to follow the kinetics of protonation changes from a single amino acid. Lower noise levels can be accomplished by more data averaging and/or mathematical processing. The amount of protein required for optimal results is between 5-100 µg, depending on the sampling technique used. Regarding additional requirements, the protein needs to be first concentrated in a low ionic strength buffer and then dried to form a film. The protein film is hydrated prior to the experiment, either with small droplets of water or under controlled atmospheric humidity. The attained hydration level (g of water / g of protein) is gauged from an IR absorption spectrum. To showcase the technique, we studied the photocycle of the light-driven proton pump bacteriorhodopsin in its native purple membrane environment, and of the light-gated ion channel channelrhodopsin-2 solubilized in detergent.
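The quoted noise figures follow from simple averaging statistics: uncorrelated noise in the difference spectra falls roughly as 1/√N over N repetitions of the photoreaction. A minimal Python sketch with purely synthetic noise (the 10⁻³ single-scan noise amplitude is an assumed illustrative value, not a measurement):

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_noise(n_repeats, sigma_single=1e-3, n_points=4096):
    """Average n_repeats synthetic noisy difference spectra and return
    the residual noise level (standard deviation) of the average. The
    true difference signal is taken as zero, so the residual is pure
    noise and should shrink roughly as 1/sqrt(n_repeats)."""
    spectra = rng.normal(0.0, sigma_single, size=(n_repeats, n_points))
    return spectra.mean(axis=0).std()

# ~100 repetitions of a 1e-3 single-scan noise level give ~1e-4,
# consistent with the figures quoted in the abstract.
print(averaged_noise(1))
print(averaged_noise(100))
```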
Biophysics, Issue 88, bacteriorhodopsin, channelrhodopsin, attenuated total reflection, proton transfer, protein dynamics, infrared spectroscopy, time-resolved spectroscopy, step-scan, membrane proteins, singular value decomposition
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish, kidney, nephron, nephrology, renal, regeneration, proximal tubule, distal tubule, segment, mesonephros, physiology, acute kidney injury (AKI)
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
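As a toy illustration of the automated end of this spectrum (approach 4), the sketch below thresholds a synthetic volume and labels connected components with `scipy.ndimage`. It is a generic baseline for feature extraction, not the custom-designed algorithms applied to the published data sets:

```python
import numpy as np
from scipy import ndimage

def segment_by_threshold(volume, threshold, min_voxels=5):
    """Binarize a 3D volume at `threshold`, label connected components,
    and discard components smaller than `min_voxels` (a crude noise
    filter). Returns the cleaned label volume and the object count."""
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    cleaned = np.where(np.isin(labels, keep), labels, 0)
    return cleaned, len(keep)

# Synthetic volume: two bright blobs on an empty background.
vol = np.zeros((20, 20, 20))
vol[2:6, 2:6, 2:6] = 1.0        # blob 1 (64 voxels)
vol[12:15, 12:15, 12:15] = 1.0  # blob 2 (27 voxels)
labels, n_objects = segment_by_threshold(vol, threshold=0.5)
print(n_objects)  # 2
```

In real data the threshold choice depends heavily on the data-set characteristics listed above (signal-to-noise ratio, crispness, crowdedness), which is exactly why the triage scheme is needed.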
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
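Minimum-norm estimation, one of the inverse methods used for source reconstruction, can be sketched numerically: given a forward model (leadfield) L and sensor data y, the regularized estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹y. The channel/source counts and the random leadfield below are illustrative assumptions, not an actual head model:

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=1e-2):
    """Regularized minimum-norm source estimate.
    leadfield: (n_channels, n_sources) forward model L
    eeg:       (n_channels,) sensor data y
    Returns x_hat = L.T @ inv(L @ L.T + lam * I) @ y."""
    n_ch = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_ch)
    return leadfield.T @ np.linalg.solve(gram, eeg)

rng = np.random.default_rng(1)
L = rng.standard_normal((32, 200))  # hypothetical 32 channels, 200 sources
x_true = np.zeros(200)
x_true[50] = 1.0                    # a single active source
y = L @ x_true
x_hat = minimum_norm_estimate(L, y)
print(int(np.argmax(np.abs(x_hat))))  # index of the strongest estimate
```

Real pipelines build L from the (individual or age-matched) head model and regularize based on the noise covariance; the algebra, however, is as above.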
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
In Vitro Reconstitution of Light-harvesting Complexes of Plants and Green Algae
Institutions: VU University Amsterdam.
In plants and green algae, light is captured by the light-harvesting complexes (LHCs), a family of integral membrane proteins that coordinate chlorophylls and carotenoids. In vivo, these proteins are folded with pigments to form complexes which are inserted in the thylakoid membrane of the chloroplast. The high similarity in the chemical and physical properties of the members of the family, together with the fact that they can easily lose pigments during isolation, makes their purification in a native state challenging. An alternative approach to obtain homogeneous preparations of LHCs was developed by Plumley and Schmidt in 19871, who showed that it was possible to reconstitute these complexes in vitro starting from purified pigments and unfolded apoproteins, resulting in complexes with properties very similar to those of native complexes. This opened the way to the use of bacterially expressed recombinant proteins for in vitro reconstitution. The reconstitution method is powerful for various reasons: (1) pure preparations of individual complexes can be obtained; (2) pigment composition can be controlled to assess its contribution to structure and function; (3) recombinant proteins can be mutated to study the functional role of individual residues (e.g., pigment binding sites) or protein domains (e.g., protein-protein interaction, folding). This method has been optimized in several laboratories and applied to most of the light-harvesting complexes. The protocol described here details the method of reconstituting light-harvesting complexes in vitro currently used in our laboratory, and examples describing applications of the method are provided.
Biochemistry, Issue 92, Reconstitution, Photosynthesis, Chlorophyll, Carotenoids, Light Harvesting Protein, Chlamydomonas reinhardtii, Arabidopsis thaliana
Measuring Attentional Biases for Threat in Children and Adults
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
A Cognitive Paradigm to Investigate Interference in Working Memory by Distractions and Interruptions
Institutions: University of New Mexico, University of California, San Francisco, University of California, San Francisco, University of California, San Francisco.
Goal-directed behavior is often impaired by interference from the external environment, either in the form of distraction by irrelevant information that one attempts to ignore, or by interrupting information that demands attention as part of another (secondary) task goal. Both forms of external interference have been shown to detrimentally impact the ability to maintain information in working memory (WM). Emerging evidence suggests that these different types of external interference exert different effects on behavior and may be mediated by distinct neural mechanisms. Better characterizing the distinct neuro-behavioral impact of irrelevant distractions versus attended interruptions is essential for advancing an understanding of top-down attention, resolution of external interference, and how these abilities become degraded in healthy aging and in neuropsychiatric conditions. This manuscript describes a novel cognitive paradigm developed in the Gazzaley lab that has now been modified into several distinct versions used to elucidate behavioral and neural correlates of interference by to-be-ignored distractors versus to-be-attended interruptors. Details are provided on variants of this paradigm for investigating interference in visual and auditory modalities, at multiple levels of stimulus complexity, and with experimental timing optimized for electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) studies. In addition, data from younger and older adult participants obtained using this paradigm are reviewed and discussed in the context of their relationship with the broader literatures on external interference and age-related neuro-behavioral changes in resolving interference in working memory.
Behavior, Issue 101, Attention, interference, distraction, interruption, working memory, aging, multi-tasking, top-down attention, EEG, fMRI
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual-task procedure combined with RSVP to test effects of AB on nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise, participants would respond 'no'. Two 2-alternative forced-choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could be anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
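The trial structure described above can be sketched in code. The fragment below assembles one hypothetical trial: an 8-item noun stream with item onsets at a fixed SOA, plus the three memory-probe positions (before target, after target, and a random other item). All names and the exact probe scheme are illustrative assumptions, not the authors' implementation:

```python
import random

def build_rsvp_trial(nouns, target_noun, target_pos, soa_ms=120, rng=None):
    """Build one RSVP trial: shuffle 7 filler nouns, insert the target
    at `target_pos` (0-based), compute item onset times at a fixed SOA,
    and choose the three memory-probe positions."""
    rng = rng or random.Random()
    stream = list(nouns[:7])
    rng.shuffle(stream)
    stream.insert(target_pos, target_noun)       # 8-item sequence
    onsets = [i * soa_ms for i in range(len(stream))]
    other = rng.choice([i for i in range(len(stream))
                        if abs(i - target_pos) > 1])
    probes = {"before": target_pos - 1,
              "after": target_pos + 1,
              "other": other}
    return stream, onsets, probes

stream, onsets, probes = build_rsvp_trial(
    ["apple", "chair", "river", "cloud", "stone", "piano", "tiger"],
    target_noun="sparrow", target_pos=4, soa_ms=120,
    rng=random.Random(0))
print(len(stream), onsets[-1])  # 8 items, last onset at 840 ms
```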
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Single-stage Dynamic Reanimation of the Smile in Irreversible Facial Paralysis by Free Functional Muscle Transfer
Institutions: University of Freiburg Medical Centre.
Unilateral facial paralysis is a common condition that is associated with significant functional, aesthetic and psychological issues. Though idiopathic facial paralysis (Bell’s palsy) is the most common diagnosis, patients can also present with a history of physical trauma, infectious disease, tumor, or iatrogenic facial paralysis. Early repair within one year of injury can be achieved by direct nerve repair, cross-face nerve grafting or regional nerve transfer. In long-standing facial paralysis, however, muscle atrophy means that complex reconstructive methods have to be applied. Instead of one single procedure, different surgical approaches have to be considered to alleviate the various components of the paralysis.
The reconstruction of a spontaneous dynamic smile with a symmetric resting tone is a crucial factor in overcoming the functional deficits and the social handicap associated with facial paralysis. Although numerous surgical techniques have been described, a two-stage approach with initial cross-facial nerve grafting followed by a free functional muscle transfer is most frequently applied. In selected patients, however, a single-stage reconstruction using the motor nerve to the masseter as donor nerve is superior to a two-stage repair. The gracilis muscle is most commonly used for reconstruction, as it offers constant anatomy, a straightforward dissection and minimal donor site morbidity.
Here we demonstrate the pre-operative work-up and the post-operative management, and precisely describe the surgical procedure of single-stage microsurgical reconstruction of the smile by free functional gracilis muscle transfer in a step-by-step protocol. We further illustrate common pitfalls and provide useful tips which should enable the reader to fully comprehend the procedure. We further discuss indications and limitations of the technique and demonstrate representative results.
Medicine, Issue 97, microsurgery, free microvascular tissue transfer, face, head, head and neck surgery, facial paralysis
Development of a Quantitative Recombinase Polymerase Amplification Assay with an Internal Positive Control
Institutions: Rice University.
It was recently demonstrated that recombinase polymerase amplification (RPA), an isothermal amplification platform for pathogen detection, may be used to quantify DNA sample concentration using a standard curve. In this manuscript, a detailed protocol for developing and implementing a real-time quantitative recombinase polymerase amplification assay (qRPA assay) is provided. Using HIV-1 DNA quantification as an example, the assembly of real-time RPA reactions, the design of an internal positive control (IPC) sequence, and co-amplification of the IPC and target of interest are all described. Instructions and data processing scripts for the construction of a standard curve using data from multiple experiments are provided, which may be used to predict the concentration of unknown samples or assess the performance of the assay. Finally, an alternative method for collecting real-time fluorescence data with a microscope and a stage heater as a step towards developing a point-of-care qRPA assay is described. The protocol and scripts provided may be used for the development of a qRPA assay for any DNA target of interest.
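The standard-curve step can be sketched as follows: fit threshold time against log10 template copies, then invert the fit for unknown samples. The calibration values below are hypothetical, and the linear model is an assumption by analogy with qPCR standard curves, not the authors' exact scripts:

```python
import numpy as np

def fit_standard_curve(log10_copies, threshold_times):
    """Least-squares linear standard curve: threshold time (min) as a
    function of log10 template copies. Returns (slope, intercept)."""
    slope, intercept = np.polyfit(log10_copies, threshold_times, 1)
    return slope, intercept

def predict_log10_copies(threshold_time, slope, intercept):
    """Invert the curve to estimate an unknown sample's concentration."""
    return (threshold_time - intercept) / slope

# Hypothetical calibration data: 10^2..10^6 copies; higher template
# concentrations cross the fluorescence threshold earlier.
log10_copies = np.array([2, 3, 4, 5, 6], dtype=float)
times_min = np.array([9.8, 8.1, 6.0, 4.2, 2.1])
slope, intercept = fit_standard_curve(log10_copies, times_min)
unknown = predict_log10_copies(5.0, slope, intercept)
print(round(unknown, 2))  # estimated log10 copies for a 5.0 min threshold
```

In practice the curve is built from multiple runs, and the fit quality (and the IPC amplification) is used to assess assay performance before trusting predictions.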
Genetics, Issue 97, recombinase polymerase amplification, isothermal amplification, quantitative, diagnostic, HIV-1, viral load
Testing Sensory and Multisensory Function in Children with Autism Spectrum Disorder
Institutions: Vanderbilt University Medical Center, University of Toronto, Vanderbilt University.
In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan.
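One common way to quantify this temporal dependence is a simultaneity-judgment task: the proportion of "synchronous" responses is plotted against audiovisual SOA, and the width of the resulting curve indexes the temporal binding window. Below is a minimal moment-based sketch with hypothetical data; the battery's actual tasks and fitting procedures may differ:

```python
import numpy as np

# Hypothetical simultaneity-judgment data: proportion of trials judged
# "synchronous" at each audiovisual SOA (ms; negative = auditory first).
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08])

def binding_window(soas, p_sync):
    """Treat the synchrony curve as a distribution over SOA and return
    its weighted mean (point of subjective simultaneity) and weighted
    standard deviation (a simple index of binding-window width)."""
    weights = p_sync / p_sync.sum()
    mean = (weights * soas).sum()
    width = np.sqrt((weights * (soas - mean) ** 2).sum())
    return mean, width

pss, width = binding_window(soas, p_sync)
print(round(pss, 1), round(width))
```

A wider window on such a measure would correspond to audiovisual pairs being bound over longer asynchronies, the kind of change this battery is designed to detect.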
Behavior, Issue 98, Temporal processing, multisensory integration, psychophysics, computer based assessments, sensory deficits, autism spectrum disorder
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Institutions: Rutgers University, Rutgers University, Rutgers University, Rutgers University, Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
2D and 3D Chromosome Painting in Malaria Mosquitoes
Institutions: Virginia Tech.
Fluorescence in situ hybridization (FISH) of whole-arm chromosome probes is a robust technique for mapping genomic regions of interest, detecting chromosomal rearrangements, and studying the three-dimensional (3D) organization of chromosomes in the cell nucleus. The advent of laser capture microdissection (LCM) and whole genome amplification (WGA) allows obtaining large quantities of DNA from single cells. The increased sensitivity of WGA kits prompted us to develop chromosome paints and to use them for exploring chromosome organization and evolution in non-model organisms. Here, we present a simple method for isolating and amplifying the euchromatic segments of single polytene chromosome arms from ovarian nurse cells of the African malaria mosquito Anopheles gambiae. This procedure provides an efficient platform for obtaining chromosome paints, while reducing the overall risk of introducing foreign DNA to the sample. The use of WGA allows for several rounds of re-amplification, resulting in high quantities of DNA that can be utilized for multiple experiments, including 2D and 3D FISH. We demonstrated that the developed chromosome paints can be successfully used to establish the correspondence between euchromatic portions of polytene and mitotic chromosome arms in An. gambiae. Overall, the union of LCM and single-chromosome WGA provides an efficient tool for creating significant amounts of target DNA for future cytogenetic and genomic studies.
Immunology, Issue 83, Microdissection, whole genome amplification, malaria mosquito, polytene chromosome, mitotic chromosomes, fluorescence in situ hybridization, chromosome painting
Eye Movement Monitoring of Memory
Institutions: Rotman Research Institute, University of Toronto, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g., "Tell me what you remember about the person you saw at the bank yesterday."); however, such reports can often be unreliable or sensitive to response bias1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically intact and neurologically impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure the accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e., fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g., button press). Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied2,4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory2-3,6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline2-3,8-10.
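The conversion from raw X,Y samples to fixations and saccades is commonly done with a velocity threshold (the I-VT approach). Here is a minimal sketch with a synthetic gaze trace; the 500 Hz sampling rate and 30 deg/s threshold are illustrative assumptions, not the parameters used in this protocol:

```python
import numpy as np

def detect_fixations(x, y, sample_rate_hz=500, velocity_thresh=30.0):
    """Velocity-threshold (I-VT) classification: samples whose
    point-to-point velocity (deg/s, assuming x/y are already in degrees
    of visual angle) falls below `velocity_thresh` are fixation samples;
    the rest are saccade samples."""
    dt = 1.0 / sample_rate_hz
    vel = np.hypot(np.diff(x), np.diff(y)) / dt
    is_fixation = vel < velocity_thresh
    return vel, is_fixation

# Synthetic trace: 50 ms fixation, a rapid 10-degree saccade, 50 ms fixation.
fix1 = np.full(25, 0.0)
saccade = np.linspace(0.0, 10.0, 5)
fix2 = np.full(25, 10.0)
x = np.concatenate([fix1, saccade, fix2])
y = np.zeros_like(x)
vel, is_fix = detect_fixations(x, y)
print(int((~is_fix).sum()))  # number of saccade samples: 4
```

Consecutive fixation samples are then merged into fixation events and time-locked to display onsets, as described above.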
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2.
Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
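The core selection loop of VP can be sketched in a few lines of Python. This is only an illustrative sketch: the flat parameter-vector shape representation, the Gaussian mutation rule, and all names here are our own assumptions, not the published VP implementation.

```python
import random

def virtual_phylogenesis(ancestor, mutate, generations=5, offspring=4, seed=0):
    """Grow one object category by repeatedly mutating and retaining shapes.

    ancestor -- parameter vector describing the founding shape
    mutate   -- function returning a randomly perturbed copy of a shape
    Returns every shape generated; together they form one 'species'.
    """
    rng = random.Random(seed)
    population = [ancestor]
    for _ in range(generations):
        parent = rng.choice(population)           # pick a shape to reproduce
        children = [mutate(parent, rng) for _ in range(offspring)]
        population.extend(children)               # descendants share a family resemblance
    return population

# Toy shape: three 'growth genes'; mutation adds small Gaussian noise to each.
def mutate(shape, rng, sigma=0.1):
    return [g + rng.gauss(0.0, sigma) for g in shape]

category_a = virtual_phylogenesis([1.0, 0.5, 0.2], mutate)
```

Because every shape descends from a common ancestor, the members of `category_a` vary while still resembling one another, mirroring how VP yields naturalistic within-category variation.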
We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point1. Humans can do path integration based exclusively on visual2-3, auditory4, or inertial cues5. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate6-7. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones5. Movement through physical space therefore does not seem to be accurately represented by the brain.
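The definition above - accumulating self-motion over time into an estimate of position - can be illustrated with a minimal dead-reckoning sketch based on inertial cues. The simple Euler integration scheme and the function name are illustrative assumptions only, not a claim about how the brain computes:

```python
def integrate_path(accelerations, dt):
    """Dead-reckon displacement from a stream of acceleration samples.

    accelerations -- list of (ax, ay) samples in m/s^2
    dt            -- sample interval in seconds
    Returns the estimated (x, y) displacement from the starting point.
    """
    vx = vy = x = y = 0.0
    for ax, ay in accelerations:
        vx += ax * dt          # first integration: acceleration -> velocity
        vy += ay * dt
        x += vx * dt           # second integration: velocity -> position
        y += vy * dt
    return x, y
```

Systematic over- or underestimation of distance, as reported above, would correspond to a consistent bias in this accumulated estimate.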
Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see 3 for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator8-9 with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed.
Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s2 peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen.
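For reference, the geometrically correct pointing response in this two-segment task can be computed directly. The sketch below is our own, with an arbitrary heading convention (first segment along +y, positive angles clockwise); it returns the homing angle relative to the observer's final heading:

```python
import math

def homing_angle(seg1_m, seg2_m, turn_deg):
    """Signed angle (deg, relative to the final heading) that points from
    the end of a two-segment path back to its origin.

    seg1_m, seg2_m -- segment lengths in metres
    turn_deg       -- turn between the two segments in degrees
    """
    turn = math.radians(turn_deg)
    # Walk the first segment along +y, then turn and walk the second.
    end_x = seg2_m * math.sin(turn)
    end_y = seg1_m + seg2_m * math.cos(turn)
    # Heading of the vector from the end point back to the origin,
    # in the same convention (0 deg = +y, positive = clockwise).
    back = math.degrees(math.atan2(-end_x, -end_y))
    # Express it relative to the final heading and wrap into (-180, 180].
    rel = back - turn_deg
    return (rel + 180.0) % 360.0 - 180.0
```

For the 0.4 m / 1 m segments used here with a 90° turn, the correct response is a turn of about 158° back toward the origin; observers' deviations from this value give the angle-estimation biases reported below.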
Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Eye Tracking Young Children with Autism
Institutions: University of Texas at Dallas, University of North Carolina at Chapel Hill.
The rise of accessible commercial eye-tracking systems has fueled a rapid increase in their use in psychological and psychiatric research. By providing a direct, detailed and objective measure of gaze behavior, eye-tracking has become a valuable tool for examining abnormal perceptual strategies in clinical populations and has been used to identify disorder-specific characteristics1, promote early identification2, and inform treatment3. In particular, investigators of autism spectrum disorders (ASD) have benefited from integrating eye-tracking into their research paradigms4-7. Eye-tracking has largely been used in these studies to reveal mechanisms underlying impaired task performance8 and abnormal brain functioning9, particularly during the processing of social information1,10-11. While older children and adults with ASD comprise the preponderance of research in this area, eye-tracking may be especially useful for studying young children with the disorder as it offers a non-invasive tool for assessing and quantifying early-emerging developmental abnormalities2,12-13. Implementing eye-tracking with young children with ASD, however, is associated with a number of unique challenges, including issues with compliant behavior resulting from specific task demands and disorder-related psychosocial considerations. In this protocol, we detail methodological considerations for optimizing research design, data acquisition and psychometric analysis while eye-tracking young children with ASD. The provided recommendations are also designed to be more broadly applicable for eye-tracking children with other developmental disabilities. By offering guidelines for best practices in these areas based upon lessons derived from our own work, we hope to help other investigators make sound research design and analysis choices while avoiding common pitfalls that can compromise data acquisition while eye-tracking young children with ASD or other developmental difficulties.
Medicine, Issue 61, eye tracking, autism, neurodevelopmental disorders, toddlers, perception, attention, social cognition
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Institutions: University of Maine, Orono, University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1. We have adapted this degranulation assay, originally published by Naal et al.1, for the screening of drugs and toxicants and demonstrate its use here.
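Degranulation in assays of this kind is conventionally expressed as released β-hexosaminidase activity relative to total cellular content, with a spontaneous-release control subtracted from both. A minimal sketch of this common normalization follows; the variable names and the exact controls are our assumptions, not necessarily those of the published protocol:

```python
def percent_degranulation(f_sample, f_spontaneous, f_total):
    """Per-well beta-hexosaminidase release as a percentage of total content.

    f_sample      -- fluorescence of supernatant from a stimulated well
    f_spontaneous -- fluorescence from an unstimulated (background) well
    f_total       -- fluorescence after full cell lysis (100% release)
    """
    return 100.0 * (f_sample - f_spontaneous) / (f_total - f_spontaneous)
```

Expressing each well against the lysate control makes values comparable across plates, which is what permits high-throughput screening of chemicals for effects on degranulation.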
Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2. In an effort to avoid use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm)12. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation and, more broadly, their allergic potential.
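The spectrophotometric concentration check described above follows the Beer-Lambert law, c = A / (ε·l). A one-line sketch using the quoted ε280 for triclosan (the 1 cm cuvette path length is a typical assumption, not stated in the abstract):

```python
def concentration_from_a280(absorbance, epsilon=4200.0, path_cm=1.0):
    """Beer-Lambert estimate of concentration (mol/L) from UV absorbance.

    epsilon -- molar extinction coefficient in L/(mol*cm); 4,200 is the
               value quoted for triclosan at 280 nm.
    path_cm -- optical path length of the cuvette in cm (assumed 1 cm).
    """
    return absorbance / (epsilon * path_cm)
```

For example, an A280 reading of 0.42 corresponds to a 100 µM triclosan solution under these assumptions.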
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
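For context on the quoted ~10-30 nm precision, localization-based methods commonly estimate per-molecule precision from the photon count, pixel size, and background using the widely cited estimate of Thompson et al. (2002). The sketch below uses that formula as an assumption; it is not necessarily the exact estimator used in this protocol:

```python
import math

def localization_precision(s_nm, n_photons, pixel_nm, bg_photons):
    """Approximate 2-D localization precision (nm) for a single emitter,
    per the Thompson et al. (2002) estimate.

    s_nm       -- standard deviation of the microscope point spread function
    n_photons  -- photons collected from the molecule
    pixel_nm   -- effective pixel size in the sample plane
    bg_photons -- background noise per pixel (standard deviation, photons)
    """
    var = (s_nm ** 2 / n_photons                      # photon shot noise
           + pixel_nm ** 2 / (12.0 * n_photons)       # pixelation
           + 8.0 * math.pi * s_nm ** 4 * bg_photons ** 2
             / (pixel_nm ** 2 * n_photons ** 2))      # background
    return math.sqrt(var)
```

The formula makes the key trade-off explicit: precision improves roughly as the square root of the collected photons, which is why labeling and photoactivation efficiency dominate achievable resolution.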
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Like many aquatic animals, zebrafish (Danio rerio) move in a 3D space. It is thus preferable to use a 3D recording system to study their behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
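The kinematic parameters mentioned above follow from finite differences of the reconstructed xyz track. A minimal sketch (function and variable names are ours; turning angle would follow similarly from the angle between successive velocity vectors):

```python
def kinematics(positions, dt):
    """Finite-difference speed and acceleration magnitude along a 3-D track.

    positions -- list of (x, y, z) samples from the path reconstruction
    dt        -- time between samples in seconds
    Returns (speeds, accelerations); speeds has len(positions) - 1 entries.
    """
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def norm(v):
        return sum(c * c for c in v) ** 0.5

    # Per-frame displacement vectors between consecutive samples.
    displacements = [sub(q, p) for p, q in zip(positions, positions[1:])]
    speeds = [norm(d) / dt for d in displacements]
    # Acceleration magnitude from the change in displacement per frame.
    accelerations = [norm(sub(d2, d1)) / dt ** 2
                     for d1, d2 in zip(displacements, displacements[1:])]
    return speeds, accelerations
```

Applied per axis rather than to the full vector, the same differences yield the vertical versus horizontal distribution measures the system reports.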
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
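The fixation-level measures described above (number of fixations, fixation duration, regressions) can be computed from a time-ordered fixation record. A minimal sketch, under our assumption that each fixation is tagged with the index of the word it landed on:

```python
def reading_measures(fixations):
    """Summarize a fixation sequence from a reading trial.

    fixations -- list of (word_index, duration_ms) tuples in temporal order
    Returns (number of fixations, mean fixation duration in ms, number of
    regressions), where a regression is a saccade to an earlier word.
    """
    n = len(fixations)
    mean_dur = sum(d for _, d in fixations) / n
    regressions = sum(1 for (w1, _), (w2, _) in zip(fixations, fixations[1:])
                      if w2 < w1)
    return n, mean_dur, regressions
```

Elevated values on any of these measures for a target word, relative to controls matched for frequency and length, index increased moment-by-moment processing demands.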
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
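The modified Stroop comparison rests on a simple contrast: responses should slow when a trained letter appears in a color that conflicts with its learned pairing. A minimal sketch of that reaction-time contrast (the names and trial structure are our assumptions):

```python
def stroop_effect(rts_congruent, rts_incongruent):
    """Mean reaction-time cost (ms) of incongruent letter-color pairings.

    rts_congruent   -- reaction times when letter color matches training
    rts_incongruent -- reaction times when letter color conflicts

    A positive value indicates slower responses to conflicting colors,
    the behavioral signature of a learned letter-color association.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_incongruent) - mean(rts_congruent)
```

Comparing this effect before and after reading in color quantifies how strongly the letter-color associations were learned.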
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Measurement of Neurophysiological Signals of Ignoring and Attending Processes in Attention Control
Institutions: University of California Los Angeles, Attention Research Institute, University of California Los Angeles.
Attention control is the ability to selectively attend to some sensory signals while ignoring others. This ability is thought to involve two processes: enhancement of sensory signals that are to be attended and the attenuation of sensory signals that are to be ignored. The overall strength of attentional modulation is often measured by comparing the amplitude of a sensory neural response to an external input when attended versus when ignored. This method is robust for detecting attentional modulation, but precludes the ability to assess the separate dynamics of attending and ignoring processes. Here, we describe methodology to measure independently the neurophysiological signals of attending and ignoring using the intermodal attention task (IMAT). This task, when combined with electroencephalography, isolates neurophysiological sensory responses in auditory and visual modalities, when either attending or ignoring, with respect to a passive control. As a result, independent dynamics of attending and of ignoring can be assessed in either modality. Our results using this task indicate that the timing and cortical sources of attending and ignoring effects differ, as do their contributions to the attention modulation effect, pointing to unique neural trajectories and demonstrating the utility of measuring them separately.
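The IMAT logic described above, referencing each active condition to a passive control rather than to the other active condition, reduces to a simple pair of contrasts. A minimal sketch (amplitude units and names are our assumptions):

```python
def attention_effects(attended, ignored, passive):
    """Separate attending and ignoring effects for one sensory response.

    attended, ignored, passive -- response amplitudes to the same stimulus
    when it is attended, ignored, or presented under the passive control.

    Returns (attending_effect, ignoring_effect): a positive attending
    effect reflects enhancement; a negative ignoring effect reflects
    attenuation. Their difference recovers the conventional
    attended-vs-ignored modulation measure.
    """
    return attended - passive, ignored - passive
```

Because each effect carries its own sign and time course, the two processes can be tracked independently, which is what allows their distinct timing and cortical sources to be compared.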
Behavior, Issue 101, attention, control, executive function, neurophysiology, electroencephalography, event-related potential, attending, ignoring, sustained attention, intermodal, inter-sensory, auditory, visual