The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Recording eye movements during reading is one of the most precise ways to measure moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected in several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returns to earlier parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), the accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements and need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations for preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
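As a concrete illustration (not part of the article's protocol), the eye movement measures named above can be computed from a fixation sequence. The data format and function name below are hypothetical, a minimal sketch rather than any tracker's actual output:

```python
# Sketch: computing common eye-movement measures from a fixation sequence.
# Each fixation is (word_index, duration_ms); names and format are illustrative,
# not taken from any particular eye-tracker's output files.

def reading_measures(fixations):
    """Return total fixation count, mean fixation duration, and regression count."""
    n_fix = len(fixations)
    mean_dur = sum(d for _, d in fixations) / n_fix
    # A regression is a fixation that lands on an earlier word than the previous one.
    regressions = sum(
        1 for (w_prev, _), (w_cur, _) in zip(fixations, fixations[1:])
        if w_cur < w_prev
    )
    return n_fix, mean_dur, regressions

fixes = [(0, 220), (1, 250), (2, 300), (1, 180), (3, 240)]  # one regression (2 -> 1)
print(reading_measures(fixes))  # (5, 238.0, 1)
```

Real analyses would add fixation filtering and word-boundary mapping, but the three summary measures are this simple at their core.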
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and alignment of the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work.
Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models such as Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the need for difficult-to-define, ad hoc parameters, which often compromise resolution and lessen the intuitive usability of a tool. With respect to HMM-based methods, we also aimed to avoid the elaborate parameter estimation procedures and the simple, finite-state classifications that are often utilized.
Additionally, conventional ChIPseq data analysis involves categorizing the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types.
To meet these objectives, we first constructed a statistical framework that naturally models ChIPseq data structures using a cutting-edge advance in HMMs9, one that relies only on explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates an infinite number of hidden states through a Bayesian model. We applied it to identify change points in read density, which in turn define segments of enrichment. Our analysis revealed that the Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by a shorter run time and a smaller memory footprint. BCP was successfully applied to both punctate peak and diffuse island identification with robust accuracy and few user-defined parameters, illustrating both its versatility and its ease of use. Consequently, we believe it can be implemented readily across a broad range of data types and end users in a manner that is easily compared and contrasted, making it a valuable tool for ChIPseq data analysis that can aid collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
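For readers new to segmentation, the notion of a change point in a read density track can be illustrated with a deliberately simple least-squares split. This sketch is not the Bayesian BCP algorithm described above; the track values and function are invented purely for illustration:

```python
# Minimal illustration of change-point detection on a read-density track:
# find the single split that best divides the track into two constant segments.
# This is a toy least-squares criterion, far simpler than the Bayesian HMM
# described in the abstract; data values are made up.

def best_change_point(density):
    """Return the index that best splits the track into two constant segments."""
    best_i, best_cost = None, float("inf")
    for i in range(1, len(density)):
        left, right = density[:i], density[i:]
        # Cost = within-segment sum of squared deviations from the segment mean.
        cost = sum((x - sum(left) / len(left)) ** 2 for x in left) \
             + sum((x - sum(right) / len(right)) ** 2 for x in right)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

track = [2, 3, 2, 3, 10, 11, 9, 10]  # low background, then an enriched island
print(best_change_point(track))      # 4
```

A full implementation would infer many change points jointly, with a prior over segmentations, rather than scanning for a single split.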
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Making Sense of Listening: The IMAP Test Battery
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays are hotly debated.
When a child is referred to a health professional with normal hearing and unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized child-appropriate test battery based on the use of language-free stimuli.
We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally-hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
High-throughput, Automated Extraction of DNA and RNA from Clinical Samples using TruTip Technology on Common Liquid Handling Robots
Institutions: Akonni Biosystems, Inc.
TruTip is a simple nucleic acid extraction technology whereby a porous, monolithic binding matrix is inserted into a pipette tip. The geometry of the monolith can be adapted for specific pipette tips ranging in volume from 1.0 to 5.0 ml. The large porosity of the monolith enables viscous or complex samples to readily pass through it with minimal fluidic backpressure. Bi-directional flow maximizes residence time between the monolith and sample, and enables large sample volumes to be processed within a single TruTip. The fundamental steps, irrespective of sample volume or TruTip geometry, include cell lysis, nucleic acid binding to the inner pores of the TruTip monolith, washing away unbound sample components and lysis buffers, and eluting purified and concentrated nucleic acids into an appropriate buffer. The attributes and adaptability of TruTip are demonstrated in three automated clinical sample processing protocols using an Eppendorf epMotion 5070, Hamilton STAR and STARplus liquid handling robots, including RNA isolation from nasopharyngeal aspirate, genomic DNA isolation from whole blood, and fetal DNA extraction and enrichment from large volumes of maternal plasma (respectively).
Genetics, Issue 76, Bioengineering, Biomedical Engineering, Molecular Biology, Automation, Laboratory, Clinical Laboratory Techniques, Molecular Diagnostic Techniques, Analytic Sample Preparation Methods, Genetic Techniques, Chemistry, Clinical, DNA/RNA extraction, automation, nucleic acid isolation, sample preparation, nasopharyngeal aspirate, blood, plasma, high-throughput, sequencing
Automated Analysis of Dynamic Ca2+ Signals in Image Sequences
Institutions: University of South Alabama.
Ca2+ signals are commonly studied with fluorescent Ca2+ indicator dyes and microscopy techniques. However, quantitative analysis of Ca2+ imaging data is time consuming and subject to bias. Automated signal analysis algorithms based on region of interest (ROI) detection have been implemented for one-dimensional line scan measurements, but no current algorithm integrates optimized identification and analysis of ROIs in two-dimensional image sequences. Here, an algorithm for rapid acquisition and analysis of ROIs in image sequences is described. It fits ellipses to noise-filtered signals in order to determine optimal ROI placement, and computes the Ca2+ signal parameters of amplitude, duration, and spatial spread. The algorithm was implemented as a freely available plugin for ImageJ (NIH) software. Together with analysis scripts written for the open-source statistical processing software R, this approach provides a high-capacity pipeline for quick statistical analysis of experimental output. The authors suggest that use of this analysis protocol will lead to a more complete and unbiased characterization of physiologic Ca2+ signals.
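To make the signal parameters named above concrete, amplitude and duration can be extracted from a single ROI trace as sketched below. The half-maximum duration rule and the sample trace are illustrative assumptions, not the plugin's exact implementation:

```python
# Sketch of extracting amplitude and duration from one ROI fluorescence trace.
# The duration-at-half-max convention and all values here are illustrative.

def signal_params(trace, baseline, frame_interval):
    """Amplitude above baseline, and duration (time above half-max) in seconds."""
    amplitude = max(trace) - baseline
    half_max = baseline + amplitude / 2
    # Duration: number of frames strictly above half-max, scaled by frame time.
    duration = sum(1 for v in trace if v > half_max) * frame_interval
    return amplitude, duration

trace = [1.0, 1.1, 3.0, 5.0, 4.0, 2.0, 1.2, 1.0]
amp, dur = signal_params(trace, baseline=1.0, frame_interval=0.1)
print(amp, dur)  # 4.0 0.2
```

Spatial spread would additionally require the 2D ROI footprint, which is what the ellipse-fitting step supplies in the actual plugin.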
Basic Protocol, Issue 88, signaling, ImageJ, detection, microscopy, algorithm, calcium
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
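The switch cost described above reduces to a simple contrast between trial types. As an illustration (with invented response times, not data from the study):

```python
# The switch cost is the difference in mean response time between switch
# trials (sorting rule changes) and repeat trials (rule stays the same).
# Trial data below are invented for illustration.

def switch_cost(trials):
    """trials: list of (trial_type, rt_ms) with type 'switch' or 'repeat'."""
    mean = lambda xs: sum(xs) / len(xs)
    switch_rts = [rt for t, rt in trials if t == "switch"]
    repeat_rts = [rt for t, rt in trials if t == "repeat"]
    return mean(switch_rts) - mean(repeat_rts)

trials = [("repeat", 600), ("repeat", 640), ("switch", 720), ("switch", 700)]
print(switch_cost(trials))  # 90.0
```

An analogous error-rate cost can be computed by replacing response times with accuracy, and developmental comparisons then test whether these costs shrink with age.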
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Extinction Training During the Reconsolidation Window Prevents Recovery of Fear
Institutions: Mt. Sinai School of Medicine, New York University.
Fear is maladaptive when it persists long after circumstances have become safe. It is therefore crucial to develop an approach that persistently prevents the return of fear. Pavlovian fear-conditioning paradigms are commonly employed to create a controlled, novel fear association in the laboratory. After pairing an innocuous stimulus (conditioned stimulus, CS) with an aversive outcome (unconditioned stimulus, US), we can elicit a fear response (conditioned response, CR) by presenting the stimulus alone1,2. Once fear is acquired, it can be diminished using extinction training, whereby the conditioned stimulus is repeatedly presented without the aversive outcome until fear is no longer expressed3. This inhibitory learning creates a new, safe representation for the CS, which competes for expression with the original fear memory4. Although extinction is effective at inhibiting fear, it is not permanent. Fear can spontaneously recover with the passage of time. Exposure to stress or returning to the context of initial learning can also cause fear to resurface3,4.
Our protocol addresses the transient nature of extinction by targeting the reconsolidation window to modify emotional memory in a more permanent manner. Ample evidence suggests that reactivating a consolidated memory returns it to a labile state, during which the memory is again susceptible to interference5-9. This window of opportunity appears to open shortly after reactivation and close approximately 6 hr later5,11,16, although this may vary depending on the strength and age of the memory15. By allowing new information to be incorporated into the original memory trace, the memory may be updated as it reconsolidates10,11. Studies in non-human animals have successfully blocked the expression of fear memory by introducing pharmacological manipulations within the reconsolidation window; however, most agents used are either toxic to humans or show equivocal effects in human studies12-14. Our protocol addresses these challenges by offering an effective, yet non-invasive, behavioral manipulation that is safe for humans.
By prompting fear memory retrieval prior to extinction, we essentially trigger the reconsolidation process, allowing new safety information (i.e., extinction) to be incorporated while the fear memory is still susceptible to interference. A recent study employing this behavioral manipulation in rats successfully blocked fear memory using these temporal parameters11. Additional studies in humans have demonstrated that introducing new information after the retrieval of previously consolidated motor16 or declarative18 memories leads to interference with the original memory trace14. Below, we outline a novel protocol used to block fear recovery in humans.
Neuroscience, Issue 66, Medicine, Psychology, Physiology, Fear conditioning, extinction, reconsolidation, emotional memory, spontaneous recovery, skin conductance response
Visualizing Bacteria in Nematodes using Fluorescent Microscopy
Institutions: University of Wisconsin-Madison.
Symbioses, the living together of two or more organisms, are widespread throughout all kingdoms of life. As two of the most ubiquitous organisms on earth, nematodes and bacteria form a wide array of symbiotic associations that range from beneficial to pathogenic1-3. One such association is the mutually beneficial relationship between Xenorhabdus bacteria and Steinernema nematodes, which has emerged as a model system of symbiosis4. Steinernema nematodes are entomopathogenic, using their bacterial symbiont to kill insects5. For transmission between insect hosts, the bacteria colonize the intestine of the nematode's infective juvenile stage6-8. Recently, several other nematode species have been shown to utilize bacteria to kill insects9-13, and investigations have begun examining the interactions between the nematodes and bacteria in these systems9.
We describe a method for visualizing a bacterial symbiont within or on a nematode host, taking advantage of the optical transparency of nematodes when viewed by microscopy. The bacteria are engineered to express a fluorescent protein, allowing their visualization by fluorescence microscopy. Many plasmids are available that carry genes encoding proteins that fluoresce at different wavelengths (i.e., green or red), and conjugation of plasmids from a donor Escherichia coli strain into a recipient bacterial symbiont is successful for a broad range of bacteria. The methods described were developed to investigate the association between Steinernema carpocapsae and Xenorhabdus nematophila14. Similar methods have been used to investigate other nematode-bacterium associations9,15-18, and the approach is therefore generally applicable.

The method allows characterization of bacterial presence and localization within nematodes at different stages of development, providing insights into the nature of the association and the process of colonization14,16,19. Microscopic analysis reveals both the colonization frequency within a population and the localization of bacteria to host tissues14,16,19-21. This is an advantage over other methods of monitoring bacteria within nematode populations, such as sonication22 or grinding23, which provide average levels of colonization but may not, for example, discriminate populations with a high frequency of low symbiont loads from populations with a low frequency of high symbiont loads. Discriminating the frequency and load of colonizing bacteria can be especially important when screening or characterizing bacterial mutants for colonization phenotypes21,24. Indeed, fluorescence microscopy has been used in high-throughput screening of bacterial mutants for defects in colonization17,18, and is less laborious than other methods, including sonication22,25-27 and individual nematode dissection28,29.
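The distinction drawn above, colonization frequency versus symbiont load, can be made concrete with a small numeric sketch. The counts below are hypothetical, chosen so that two very different populations share the same population-wide average:

```python
# Two nematode populations with identical average symbiont counts per host
# can differ sharply once colonization frequency and per-host load are
# separated, which is exactly what per-animal microscopy resolves and
# bulk sonication or grinding cannot. Counts are hypothetical.

def frequency_and_load(counts):
    """Fraction of colonized hosts, and mean load among colonized hosts only."""
    colonized = [c for c in counts if c > 0]
    freq = len(colonized) / len(counts)
    load = sum(colonized) / len(colonized) if colonized else 0.0
    return freq, load

pop_a = [10, 10, 10, 10]  # high frequency, low load (population mean: 10)
pop_b = [0, 0, 0, 40]     # low frequency, high load (population mean: 10)
print(frequency_and_load(pop_a))  # (1.0, 10.0)
print(frequency_and_load(pop_b))  # (0.25, 40.0)
```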
Microbiology, Issue 68, Molecular Biology, Bacteriology, Developmental Biology, Colonization, Xenorhabdus, Steinernema, symbiosis, nematode, bacteria, fluorescence microscopy
Fruit Volatile Analysis Using an Electronic Nose
Institutions: University of California, Davis.
Numerous and diverse physiological changes occur during fruit ripening, including the development of a specific volatile blend that characterizes fruit aroma. Maturity at harvest is one of the key factors influencing the flavor quality of fruits and vegetables1. The validation of robust methods that rapidly assess fruit maturity and aroma quality would allow improved management of advanced breeding programs, production practices, and postharvest handling.
Over the last three decades, much research has been conducted to develop so-called electronic noses, devices able to rapidly detect odors and flavors2-4. Several electronic noses capable of volatile analysis, based on different technologies, are currently available commercially. The electronic nose used in our work (zNose, EST, Newbury Park, CA, USA) consists of ultra-fast gas chromatography coupled with a surface acoustic wave sensor (UFGC-SAW). This technology has already been tested for its ability to monitor the quality of various commodities, including detection of deterioration in apple5; ripeness and rot evaluation in mango6; aroma profiling of thymus volatile compounds in grape berries8; characterization of vegetable oil9; and detection of adulterants in virgin coconut oil10.

This system can perform the three major steps of aroma analysis: headspace sampling, separation of volatile compounds, and detection. In about one minute the output, a chromatogram, is produced, and after a purging cycle the instrument is ready for further analysis. Results obtained with the zNose can be compared to those of other gas chromatographic systems by calculation of Kovats Indices (KI). Once the instrument has been tuned with an alkane standard solution, retention times are automatically converted into KIs. However, slight changes in temperature and flow rate are expected to occur over time, causing retention times to drift. Also, depending on the polarity of the column stationary phase, the reproducibility of KI calculations can vary by several index units11. A series of programs and graphical interfaces were therefore developed to compare calculated KIs among samples in a semi-automated fashion. These programs reduce the time required for chromatogram analysis of large data sets and minimize the potential for misinterpretation of the data when chromatograms are not perfectly aligned.
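The retention-time-to-KI conversion mentioned above can be sketched with the standard linear interpolation between bracketing n-alkane standards (the form appropriate for temperature-programmed GC). The retention times below are invented, and this is an illustration of the general formula, not the instrument's internal routine:

```python
# Kovats Index by linear interpolation between the two n-alkane standards
# that bracket the analyte's retention time. Retention times are invented.

def kovats_index(rt, alkanes):
    """alkanes: sorted list of (carbon_number, retention_time) pairs."""
    for (n, t_n), (n2, t_n2) in zip(alkanes, alkanes[1:]):
        if t_n <= rt <= t_n2:
            # KI of an n-alkane is 100 * carbon number; interpolate between them.
            return 100 * (n + (n2 - n) * (rt - t_n) / (t_n2 - t_n))
    raise ValueError("retention time outside the alkane ladder")

ladder = [(8, 10.0), (9, 14.0), (10, 18.0)]  # C8-C10 alkane standards
print(kovats_index(12.0, ladder))  # 850.0
```

Drift correction then amounts to re-running the alkane ladder and recomputing indices, which is why KIs transfer between runs better than raw retention times.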
We present a method for rapid volatile compound analysis in fruit. Sample preparation, data acquisition and handling procedures are also discussed.
Plant Biology, Issue 61, zNose, volatile profiling, aroma, Kovats Index, electronic nose, gas chromatography, retention time shift
Optical Scatter Microscopy Based on Two-Dimensional Gabor Filters
Institutions: Rutgers University.
We demonstrate a microscopic instrument that can measure subcellular texture arising from organelle morphology and organization within unstained living cells. The proposed instrument extends the sensitivity of label-free optical microscopy to nanoscale changes in organelle size and shape and can be used to accelerate the study of the structure-function relationship pertaining to organelle dynamics underlying fundamental biological processes, such as programmed cell death or cellular differentiation. The microscope can be easily implemented on existing microscopy platforms, and can therefore be disseminated to individual laboratories, where scientists can implement and use the proposed methods with unrestricted access.
The proposed technique characterizes subcellular structure by observing the cell through two-dimensional optical Gabor filters. These filters can be tuned to sense, with nanoscale (tens of nm) sensitivity, specific morphological attributes pertaining to the size and orientation of non-spherical subcellular organelles. While based on contrast generated by elastic scattering, the technique does not rely on a detailed inverse scattering model or on Mie theory to extract morphometric measurements. It is therefore applicable to non-spherical organelles for which a precise theoretical scatter description is not easily given, and provides distinctive morphometric parameters that can be obtained within unstained living cells to assess their function. The technique is advantageous compared with digital image processing in that it operates directly on the object's field transform rather than on the discretized object's intensity. It does not rely on high image sampling rates and can therefore be used to rapidly screen morphological activity within hundreds of cells at a time, greatly facilitating the study of organelle structure beyond individual organelle segmentation and reconstruction by fluorescence confocal microscopy of highly magnified digital images with limited fields of view.
In this demonstration we show data from a marine diatom to illustrate the methodology. We also show preliminary data collected from living cells to give an idea of how the method may be applied in a relevant biological context.
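To give a flavor of the Gabor filters discussed above, a single filter value can be evaluated directly from the textbook definition (a sinusoidal carrier under a Gaussian envelope). The parameter values are arbitrary illustrations, not the instrument's optical settings:

```python
# A two-dimensional Gabor filter: a cosine carrier of a given wavelength and
# orientation, attenuated by an isotropic Gaussian envelope. Parameter values
# are arbitrary and for illustration only.
import math

def gabor(x, y, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter evaluated at point (x, y)."""
    # Rotate coordinates so the carrier oscillates along orientation theta.
    x_r = x * math.cos(theta) + y * math.sin(theta)
    envelope = math.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = math.cos(2 * math.pi * x_r / wavelength)
    return envelope * carrier

# Build a small 7x7 kernel; the peak response sits at the center.
kernel = [[gabor(x, y, wavelength=4.0, theta=0.0, sigma=2.0)
           for x in range(-3, 4)] for y in range(-3, 4)]
print(round(kernel[3][3], 3))  # 1.0  (center: envelope = carrier = 1)
```

Tuning wavelength and theta to the expected organelle size and orientation is what gives the filter bank its morphological selectivity; in the optical implementation this filtering happens on the field transform rather than on a sampled image.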
Cellular Biology, Issue 40, Cell analysis, Optical Fourier processing, Light scattering, Microscopy
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication1. The onset of speech in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers6,8,12,23. However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which deficits in social interaction account for or contribute to deficits in language production5,14,19,25. Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g.), or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known to be challenging for children with ASD7,12,13,16.

We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism2,4,9,11,22. This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the videos are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches it2,4,11,18,22,26.

This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as the noun bias, the shape bias, and syntactic bootstrapping10,14,17,21,24. Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
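The coded looking times described above are typically summarized as a preference for the matching video, compared between the baseline and test presentations. The function and the looking times below are a hypothetical sketch, not the P-IPL coding scheme itself:

```python
# Summarizing coded looking times as a proportion of looking directed to the
# matching screen, compared between baseline and test audio. Times (seconds)
# and the summary rule are illustrative, not the P-IPL coding manual.

def match_preference(look_match, look_nonmatch):
    """Proportion of total looking time spent on the matching screen."""
    return look_match / (look_match + look_nonmatch)

baseline = match_preference(3.0, 3.2)  # roughly equal looking with neutral audio
test = match_preference(4.8, 1.2)      # looking shifts to the named video
print(round(test - baseline, 3))       # 0.316
```

A reliable increase from baseline to test, rather than the raw test proportion alone, is the evidence that the child understood the linguistic audio.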
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Computer-assisted Large-scale Visualization and Quantification of Pancreatic Islet Mass, Size Distribution and Architecture
Institutions: University of Chicago, National Institutes of Health, University of Massachusetts.
The pancreatic islet is a unique micro-organ composed of several hormone secreting endocrine cells such as beta-cells (insulin), alpha-cells (glucagon), and delta-cells (somatostatin) that are embedded in the exocrine tissues and comprise 1-2% of the entire pancreas. There is a close correlation between body and pancreas weight. Total beta-cell mass also increases proportionately to compensate for the demand for insulin in the body. What escapes this proportionate expansion is the size distribution of islets. Large animals such as humans share similar islet size distributions with mice, suggesting that this micro-organ has a certain size limit to be functional. The inability of large animal pancreata to generate proportionately larger islets is compensated for by an increase in the number of islets and by an increase in the proportion of larger islets in their overall islet size distribution. Furthermore, islets exhibit a striking plasticity in cellular composition and architecture among different species and also within the same species under various pathophysiological conditions. In the present study, we describe novel approaches for the analysis of biological image data in order to facilitate the automation of analytic processes, which allow for the analysis of large and heterogeneous data collections in the study of such dynamic biological processes and complex structures. Such studies have been hampered due to technical difficulties of unbiased sampling and generating large-scale data sets to precisely capture the complexity of biological processes of islet biology. Here we show methods to collect unbiased "representative" data within the limited availability of samples (or to minimize the sample collection) and the standard experimental settings, and to precisely analyze the complex three-dimensional structure of the islet. 
Computer-assisted automation allows for the collection and analysis of large-scale data sets and also assures unbiased interpretation of the data. Furthermore, the precise quantification of islet size distribution and spatial coordinates (i.e. X, Y, Z-positions) not only leads to an accurate visualization of pancreatic islet structure and composition, but also allows us to identify patterns during development and adaptation to altering conditions through mathematical modeling. The methods developed in this study are applicable to studies of many other systems and organisms as well.
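One quantification described above, how total islet mass distributes across islet size classes, can be sketched numerically. The diameters and size cutoff below are hypothetical, and spheres are assumed only for simplicity:

```python
# Fraction of total islet mass (approximated by volume, assuming spherical
# islets) contributed by islets above a diameter cutoff. Diameters (um) and
# the cutoff are hypothetical values for illustration.

def mass_fraction_large(diameters, cutoff):
    """Fraction of total islet volume contributed by islets above the cutoff."""
    volume = lambda d: (4 / 3) * 3.141592653589793 * (d / 2) ** 3
    total = sum(volume(d) for d in diameters)
    large = sum(volume(d) for d in diameters if d > cutoff)
    return large / total

islets = [50, 60, 80, 100, 220]
print(round(mass_fraction_large(islets, cutoff=200), 2))  # 0.85
```

Because volume grows with the cube of diameter, a handful of large islets can dominate total mass even when small islets dominate the count, which is why size distribution, not islet number alone, matters for the analyses described.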
Cellular Biology, Issue 49, beta-cells, islets, large-scale analysis, pancreas
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
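The modified Stroop comparison described above reduces to a congruency effect: slower responses when a letter appears in a color that conflicts with its trained pairing. The response times below are invented for illustration:

```python
# Stroop-style congruency effect: mean RT on incongruent trials (letter shown
# in a color conflicting with its trained pairing) minus mean RT on congruent
# trials. RTs (ms) are invented for illustration.

def congruency_effect(congruent_rts, incongruent_rts):
    """Mean incongruent RT minus mean congruent RT, in milliseconds."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(incongruent_rts) - mean(congruent_rts)

print(congruency_effect([520, 540, 530], [600, 590, 610]))  # 70.0
```

An increase in this effect from the pre-reading to the post-reading session is the objective signature that letter-color associations were learned.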
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
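The keywords name minimum-norm estimation, the inverse method that maps channel-level EEG back to cortical sources through the head model's leadfield matrix. A minimal numerical sketch of the Tikhonov-regularized minimum-norm estimate (the leadfield and data here are random stand-ins, not a real head model) is:

```python
# Illustrative sketch of regularized minimum-norm estimation (MNE).
# The leadfield matrix and EEG topography below are synthetic placeholders.
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=0.1):
    """Source estimate minimizing ||eeg - L @ x||^2 + lam * ||x||^2,
    i.e. x = L.T @ inv(L @ L.T + lam * I) @ eeg."""
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, eeg)

rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))    # 32 channels, 500 cortical source locations
x_true = np.zeros(500)
x_true[42] = 1.0                      # one active source
y = L @ x_true                        # simulated scalp topography
x_hat = minimum_norm_estimate(L, y)
print(x_hat.shape)  # (500,)
```

In practice the leadfield comes from a forward model built on the individual or age-specific MRI, which is exactly why the head models discussed above matter.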
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
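To illustrate the kind of fit described above, the following sketch estimates a dissociation constant from melting temperatures with a simple hyperbolic single-site binding model; the model form, parameter values, and synthetic data are illustrative assumptions, not the article's own fitting equations:

```python
# Simplified sketch: estimating Kd from DSF melting temperatures.
# Single-site hyperbolic model and data are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def tm_model(ligand, tm0, dtm_max, kd):
    """Melting temperature vs. ligand concentration (single-site binding)."""
    return tm0 + dtm_max * ligand / (kd + ligand)

ligand = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 250.0])   # uM
tm = tm_model(ligand, 55.0, 6.0, 20.0)                          # synthetic Tm data
tm += np.array([0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.0])        # small "noise"

popt, _ = curve_fit(tm_model, ligand, tm, p0=(50.0, 5.0, 50.0))
tm0_fit, dtm_fit, kd_fit = popt
print(round(kd_fit, 1))  # recovers a Kd close to the true 20 uM
```

More elaborate models (cooperative or multi-site binding, as discussed in the article) replace `tm_model` while the fitting workflow stays the same.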
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
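As a toy illustration of the simplest automated approach mentioned above, the following sketch applies intensity thresholding followed by connected-component labeling; the 2D "image" is a small stand-in for a real 3D EM volume:

```python
# Minimal sketch of one automated segmentation step: thresholding plus
# connected-component labeling. The tiny 2D array stands in for a 3D volume.
import numpy as np
from scipy import ndimage

image = np.zeros((8, 8))
image[1:3, 1:3] = 1.0   # bright feature 1 (2 x 2 pixels)
image[5:7, 4:7] = 1.0   # bright feature 2 (2 x 3 pixels)

mask = image > 0.5                        # threshold step
labels, n_features = ndimage.label(mask)  # group connected pixels into objects
sizes = ndimage.sum(mask, labels, range(1, n_features + 1))
print(n_features, sizes.tolist())  # 2 features with sizes [4.0, 6.0]
```

Real EM data rarely separates this cleanly, which is why the triage scheme above weighs data-set characteristics before committing to a manual, semi-automated, or automated route.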
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting individual subjects to advance automatically from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Using the protocol presented in the video as an example, the accompanying article discusses issues surrounding the methodology and the use, in "uncanny" research, of stimuli drawn from morph continua to represent the DHL. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
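The DoE logic summarized above starts from coded factor levels and balanced run combinations. A minimal sketch of a two-level full-factorial design with main-effect estimation (factor names and the toy response are illustrative, not the study's actual parameters) is:

```python
# Hedged sketch of a two-level full-factorial DoE with main-effect estimation.
# Factor names and the synthetic response are illustrative assumptions.
import itertools

factors = ["promoter", "incubation_temp", "plant_age"]
design = list(itertools.product([-1, +1], repeat=len(factors)))  # 2^3 = 8 runs

# Toy responses: expression driven mainly by the first factor.
response = [10 + 4 * run[0] + 1 * run[1] for run in design]

def main_effect(design, response, i):
    """Average response at the high level minus at the low level of factor i."""
    hi = [y for run, y in zip(design, response) if run[i] == +1]
    lo = [y for run, y in zip(design, response) if run[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(design, response, i)
           for i, name in enumerate(factors)}
print(effects)  # {'promoter': 8.0, 'incubation_temp': 2.0, 'plant_age': 0.0}
```

Software-guided designs like those used in the study replace the full factorial with smaller optimal subsets and support the step-wise augmentation described above, but the coding and effect-estimation logic is the same.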
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Magnetic Tweezers for the Measurement of Twist and Torque
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
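As a numerical sketch of the torque readout described above: in a weak angular trap, the trap stiffness can be calibrated from thermal angle fluctuations via equipartition, and molecular torque then follows from the shift in the mean bead angle. The trace and numbers below are illustrative assumptions, not a real measurement:

```python
# Sketch of the magnetic-torque-tweezers readout principle:
# equipartition calibration of angular stiffness, then torque from angle shift.
# The angle trace and values are illustrative, not real data.
import statistics

KBT = 4.1  # thermal energy at room temperature, pN*nm

def trap_stiffness(angles_rad):
    """Angular trap stiffness from equipartition: k = kBT / var(theta)."""
    return KBT / statistics.pvariance(angles_rad)

def molecular_torque(mean_shift_rad, stiffness):
    """Torque stored in the molecule from the equilibrium angle shift."""
    return stiffness * mean_shift_rad

# Toy fluctuation trace (radians) with population variance 0.008 rad^2.
angles = [0.0, 0.1, -0.1, 0.1, -0.1, 0.0, 0.1, -0.1, 0.1, -0.1]
k = trap_stiffness(angles)              # ~512.5 pN*nm/rad
tau = molecular_torque(0.02, k)         # ~10.25 pN*nm for a 0.02 rad shift
print(round(tau, 2))
```

Real analyses additionally account for camera averaging and drift, but the stiffness-times-shift relation is the core of the measurement.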
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases, using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
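The oriented-texture analysis above rests on Gabor filters. As an illustration of what such a filter looks like (the kernel parameters below are arbitrary examples, not the study's settings), a real-valued Gabor kernel is a sinusoidal carrier under a Gaussian envelope, rotated to the orientation of interest:

```python
# Illustrative construction of a real (cosine) Gabor filter kernel.
# Parameter values are arbitrary examples, not the study's settings.
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel of shape (size, size), oriented at theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

k = gabor_kernel(size=15, wavelength=8.0, theta=np.pi / 4, sigma=3.0)
print(k.shape)  # (15, 15); the kernel peaks at its center
```

Convolving a mammogram with a bank of such kernels at many orientations yields the orientation field from which node-like phase-portrait sites are detected.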
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that the brain does not function as a cohesive unit in autism, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard because autism is a disorder of dynamic processing and allows examination of communication between cortical networks, which appears to be where the underlying problem occurs in autism. Functional connectivity is defined as the temporal correlation of spatially separate neurological events1. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language tasks2,3,4,5,6. One of the mysteries of autism is the coexistence of deficits in several domains along with relatively intact, sometimes enhanced, abilities. Such a complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of brain functioning in autism, the cortical underconnectivity theory2,7, provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that is dependent on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole. Since cortical underconnectivity among brain regions, especially between the frontal cortex and more posterior areas3,6, has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology.
A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that helps probe the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
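Functional connectivity, as defined above, is the temporal correlation between time series from spatially separate regions. A minimal sketch of that computation (the "ROI" time courses here are synthetic stand-ins for extracted fMRI signals) is:

```python
# Sketch of functional connectivity as temporal correlation between regions.
# The ROI time courses are synthetic stand-ins for real fMRI signals.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
shared = np.sin(t / 10.0)                          # common slow fluctuation
roi_a = shared + 0.3 * rng.standard_normal(200)    # two coupled regions
roi_b = shared + 0.3 * rng.standard_normal(200)
roi_c = rng.standard_normal(200)                   # an unrelated region

timeseries = np.vstack([roi_a, roi_b, roi_c])
fc = np.corrcoef(timeseries)   # 3 x 3 functional-connectivity matrix
print(fc.shape)                # (3, 3); fc[0, 1] is high, fc[0, 2] near zero
```

Underconnectivity findings correspond, in this framing, to systematically lower off-diagonal correlations between frontal and posterior regions during task performance.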
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy
Analyzing and Building Nucleic Acid Structures with 3DNA
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamic (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server, located at http://w3dna.rutgers.edu, provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software
VisualEyes: A Modular Software System for Oculomotor Experimentation
Institutions: New Jersey Institute of Technology.
Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain1. However, developing a platform to present stimuli and record eye movements can require substantial programming, time, and cost. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device used to acquire eye movement responses, 2) the VisualEyes software, written in LabVIEW, to generate an array of stimuli and store responses as text files, and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli, such as saccadic steps, vergence ramps, and vergence steps, will be shown with the corresponding responses. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.
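One common offline analysis step for position traces like those the system stores as text files is flagging saccades with a velocity threshold. The sketch below is illustrative: the sampling rate, threshold, and synthetic trace are assumptions, not VisualEyes defaults:

```python
# Hedged sketch: velocity-threshold saccade detection on an eye-position trace.
# Sampling rate, threshold, and the synthetic trace are illustrative.

def saccade_samples(position_deg, sample_rate_hz, threshold_deg_per_s=30.0):
    """Indices where instantaneous eye velocity exceeds the threshold."""
    dt = 1.0 / sample_rate_hz
    velocity = [(b - a) / dt for a, b in zip(position_deg, position_deg[1:])]
    return [i for i, v in enumerate(velocity) if abs(v) > threshold_deg_per_s]

# Fixation at 0 deg, a rapid 5-deg step, then fixation at 5 deg (500 Hz trace).
trace = [0.0] * 5 + [1.0, 2.5, 4.0, 5.0] + [5.0] * 5
print(saccade_samples(trace, sample_rate_hz=500.0))  # [4, 5, 6, 7]
```

From the flagged samples, standard saccade metrics (latency, amplitude, peak velocity) follow directly; production analyses typically also low-pass filter the trace before differentiating.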
Neuroscience, Issue 49, Eye Movement Recording, Neuroscience, Visual Stimulation, Saccade, Vergence, Smooth Pursuit, Central Vision, Attention, Heterophoria