JoVE Visualize
Pubmed Article
Inaccuracy, uncertainty and the space-time permutation scan statistic.
PLoS ONE
PUBLISHED: 02-07-2013
The space-time permutation scan statistic (STPSS) is designed to identify hot (and cool) spots of space-time interaction within patterns of spatio-temporal events. While the method has been adopted widely in practice, there has been little consideration of the effect that inaccurate and/or incomplete input data may have on its results. Given the pervasiveness of inaccuracy, uncertainty and incompleteness within spatio-temporal datasets and the popularity of the method, this issue warrants further investigation. Here, a series of simulation experiments using both synthetic and real-world data are carried out to better understand how deficiencies in the spatial and temporal accuracy, as well as the completeness, of the input data may affect results of the STPSS. The findings, while specific to the parameters employed here, reveal a surprising robustness of the method's results in the face of these deficiencies. As expected, the experiments illustrate that greater degradation of input data quality leads to greater variability in the results. Additionally, they show that weaker signals of space-time interaction are those most affected by the introduced deficiencies. However, in stark contrast to previous investigations into the impact of these input data problems on global tests of space-time interaction, this local metric is revealed to be only minimally affected by the degree of inaccuracy and incompleteness introduced in these experiments.
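As a rough illustration of the machinery behind the statistic discussed above, the sketch below computes the observed count, the expected count under the null of no space-time interaction, and the Poisson generalized likelihood ratio for a single space-time cylinder. It is a minimal Python sketch with hypothetical variable names and data layout, not the authors' simulation code; the surrounding permutation test (shuffling event dates and recomputing the maximum statistic) is only indicated in a comment.

```python
import numpy as np

def cylinder_score(cases, locations, days):
    """Observed count, expected count, and log Poisson generalized likelihood
    ratio for one space-time cylinder (hypothetical data layout).

    cases     : 2D array, cases[z, d] = number of events at location z on day d
    locations : indices of locations inside the cylinder's spatial circle
    days      : indices of days inside the cylinder's time window
    """
    cases = np.asarray(cases, dtype=float)
    C = cases.sum()                                  # total number of events
    n = cases[np.ix_(locations, days)].sum()         # observed count in the cylinder
    # Expected count under the null of no space-time interaction:
    # (events in the circle over all days) x (events in the window over all
    # locations) / (total events), as in a contingency-table independence model.
    mu = cases[locations, :].sum() * cases[:, days].sum() / C
    if mu == 0 or n <= mu:
        return n, mu, 0.0
    log_glr = n * np.log(n / mu) + (C - n) * np.log((C - n) / (C - mu))
    return n, mu, log_glr

# In the permutation test, the event dates would be shuffled many times and the
# maximum log GLR over all candidate cylinders recomputed for each shuffle to
# obtain a p-value for the most unusual cluster.
```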
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Published: 11-06-2014
ABSTRACT
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development will allow for more rigorous analysis of orofacial phenotypes upon abrogation with substances that can genetically or molecularly manipulate gene expression or protein function. Using two dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where significant landmark positions have changed. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware software, and thus will be a valuable research and teaching resource.
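To make the landmark-analysis step above concrete, here is a minimal Python/NumPy sketch of one way the aligned-landmark PCA could be carried out: a crude, single-pass Procrustes-style normalization (removing position, scale, and rotation) followed by PCA via SVD to obtain scores for the scatter plot described in the abstract. Function names and the data layout are assumptions for illustration; the authors' protocol uses dedicated freeware rather than this code.

```python
import numpy as np

def align_landmarks(configs):
    """Crude Procrustes-style normalization of 2D landmark configurations.
    configs: array of shape (n_specimens, n_landmarks, 2)."""
    configs = np.asarray(configs, dtype=float)
    aligned = configs - configs.mean(axis=1, keepdims=True)       # remove position
    size = np.linalg.norm(aligned, axis=(1, 2), keepdims=True)
    aligned = aligned / size                                      # remove scale
    ref = aligned[0]
    for i in range(1, len(aligned)):                              # remove rotation
        u, _, vt = np.linalg.svd(aligned[i].T @ ref)              # orthogonal Procrustes
        aligned[i] = aligned[i] @ (u @ vt)
    return aligned

def landmark_pca(aligned, n_components=2):
    """PCA of aligned landmark coordinates; returns scores for a scatter plot
    in which specimens with similar orofacial shapes cluster together."""
    X = aligned.reshape(len(aligned), -1)
    X = X - X.mean(axis=0)
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return u[:, :n_components] * s[:n_components]
```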
22 Related JoVE Articles!
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
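The localization step mentioned above (positions determined to ~10-30 nm) is typically done by fitting a small model function to each detected single-molecule spot. The following is a hedged sketch of such a fit using a symmetric 2D Gaussian and scipy.optimize.curve_fit; the region-of-interest detection, thresholding, and drift correction of a full FPALM pipeline are omitted, and the pixel size and parameter names are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, x0, y0, sigma, amplitude, offset):
    """Symmetric 2D Gaussian used as the fit model for one single-molecule spot."""
    x, y = coords
    return (amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def localize_spot(roi, pixel_size_nm=100.0):
    """Fit one region of interest (a small 2D array of camera counts around a
    detected spot) and return the estimated molecule position in nanometers."""
    y, x = np.indices(roi.shape)
    p0 = [roi.shape[1] / 2, roi.shape[0] / 2, 1.5,
          roi.max() - roi.min(), roi.min()]
    popt, _ = curve_fit(gaussian2d, (x, y), roi.ravel(), p0=p0)
    x0, y0 = popt[0], popt[1]
    return x0 * pixel_size_nm, y0 * pixel_size_nm
```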
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
50680
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
51047
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
51216
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
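As a simplified illustration of the informational connectivity idea described above, the sketch below builds, for each region, a time course of MVP discriminability (how much each time point's pattern resembles the prototype of its own condition versus the other condition) and then correlates those time courses between two regions. This is an assumption-laden toy version: a proper analysis would use cross-validated classifier evidence (excluding the current time point from the prototypes) and typically rank correlations.

```python
import numpy as np

def discriminability_timecourse(patterns, labels):
    """One value per time point: how much the multi-voxel pattern at that time
    point resembles the prototype of its own condition minus the other condition.
    patterns: (n_timepoints, n_voxels); labels: condition label per time point."""
    labels = np.asarray(labels)
    out = np.zeros(len(patterns))
    for cond in np.unique(labels):
        proto_in = patterns[labels == cond].mean(axis=0)
        proto_out = patterns[labels != cond].mean(axis=0)
        for t in np.where(labels == cond)[0]:
            out[t] = (np.corrcoef(patterns[t], proto_in)[0, 1]
                      - np.corrcoef(patterns[t], proto_out)[0, 1])
    return out

def informational_connectivity(region_a, region_b, labels):
    """Correlate the MVP-discriminability fluctuations of two regions over time."""
    da = discriminability_timecourse(region_a, labels)
    db = discriminability_timecourse(region_b, labels)
    return np.corrcoef(da, db)[0, 1]
```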
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
51226
Network Analysis of the Default Mode Network Using Functional Connectivity MRI in Temporal Lobe Epilepsy
Authors: Zulfi Haneef, Agatha Lenartowicz, Hsiang J. Yeh, Jerome Engel Jr., John M. Stern.
Institutions: Baylor College of Medicine, Michael E. DeBakey VA Medical Center, University of California, Los Angeles, University of California, Los Angeles.
Functional connectivity MRI (fcMRI) is an fMRI method that examines the connectivity of different brain areas based on the correlation of BOLD signal fluctuations over time. Temporal Lobe Epilepsy (TLE) is the most common type of adult epilepsy and involves multiple brain networks. The default mode network (DMN) is involved in conscious, resting state cognition and is thought to be affected in TLE where seizures cause impairment of consciousness. The DMN in epilepsy was examined using seed based fcMRI. The anterior and posterior hubs of the DMN were used as seeds in this analysis. The results show a disconnection between the anterior and posterior hubs of the DMN in TLE during the basal state. In addition, increased DMN connectivity to other brain regions in left TLE along with decreased connectivity in right TLE is revealed. The analysis demonstrates how seed-based fcMRI can be used to probe cerebral networks in brain disorders such as TLE.
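For readers unfamiliar with seed-based fcMRI, the following is a minimal NumPy sketch of the core computation: the mean BOLD time series of a seed region (for example, an anterior or posterior DMN hub) is correlated with every voxel's time series and Fisher z-transformed for group statistics. Variable names and the data layout are assumptions; preprocessing (motion correction, nuisance regression, filtering) is not shown.

```python
import numpy as np

def seed_connectivity(bold, seed_mask):
    """Seed-based functional connectivity map.
    bold      : (n_timepoints, n_voxels) preprocessed BOLD data
    seed_mask : boolean array of length n_voxels marking the seed region
    Returns one Fisher z-transformed correlation value per voxel."""
    seed_ts = bold[:, seed_mask].mean(axis=1)
    seed_ts = (seed_ts - seed_ts.mean()) / (seed_ts.std() + 1e-12)
    voxels = (bold - bold.mean(axis=0)) / (bold.std(axis=0) + 1e-12)
    r = voxels.T @ seed_ts / len(seed_ts)                 # Pearson correlation per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))    # Fisher z for group statistics
```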
Medicine, Issue 90, Default Mode Network (DMN), Temporal Lobe Epilepsy (TLE), fMRI, MRI, functional connectivity MRI (fcMRI), blood oxygenation level dependent (BOLD)
51442
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
Clinical Assessment of Spatiotemporal Gait Parameters in Patients and Older Adults
Authors: Julia F. Item-Glatthorn, Nicola A. Maffiuletti.
Institutions: Schulthess Clinic.
Spatial and temporal characteristics of human walking are frequently evaluated to identify possible gait impairments, mainly in orthopedic and neurological patients1-4, but also in healthy older adults5,6. The quantitative gait analysis described in this protocol is performed with a recently-introduced photoelectric system (see Materials table) which has the potential to be used in the clinic because it is portable, easy to set up (no subject preparation is required before a test), and does not require maintenance or sensor calibration. The photoelectric system consists of a series of high-density floor-based photoelectric cells with light-emitting and light-receiving diodes that are placed parallel to each other to create a corridor, and are oriented perpendicular to the line of progression7. The system simply detects interruptions in the light signal, for instance due to the presence of feet within the recording area. Temporal gait parameters and 1D spatial coordinates of consecutive steps are subsequently calculated to provide common gait parameters such as step length, single limb support and walking velocity8, whose validity against a criterion instrument has recently been demonstrated7,9. The measurement procedures are very straightforward; a single patient can be tested in less than 5 min and a comprehensive report can be generated in less than 1 min.
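A minimal sketch of how the basic spatiotemporal parameters named above could be derived from the contact positions and times reported by such a system is shown below. The data layout (one entry per foot contact, ordered in time and alternating feet) and the variable names are assumptions for illustration, not the commercial system's output format.

```python
import numpy as np

def gait_parameters(contact_pos_cm, contact_on_s, contact_off_s):
    """Basic spatiotemporal gait parameters from consecutive foot contacts.
    contact_pos_cm : 1D position of each contact along the line of progression (cm)
    contact_on_s   : initial-contact time of each contact (s)
    contact_off_s  : toe-off time of each contact (s)"""
    contact_pos_cm = np.asarray(contact_pos_cm, dtype=float)
    contact_on_s = np.asarray(contact_on_s, dtype=float)
    contact_off_s = np.asarray(contact_off_s, dtype=float)
    step_length = np.diff(contact_pos_cm)                    # cm between successive contacts
    step_time = np.diff(contact_on_s)                        # s between successive contacts
    velocity = step_length.sum() / (contact_on_s[-1] - contact_on_s[0])   # cm/s
    cadence = 60.0 / step_time.mean()                        # steps per minute
    # Double support: both feet on the ground, from the next contact landing
    # until the previous contact lifts off.
    double_support = contact_off_s[:-1] - contact_on_s[1:]
    return {"step_length_cm": step_length, "step_time_s": step_time,
            "velocity_cm_s": velocity, "cadence_steps_min": cadence,
            "double_support_s": double_support}
```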
Medicine, Issue 93, gait analysis, walking, floor-based photocells, spatiotemporal, elderly, orthopedic patients, neurological patients
51878
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Authors: Daniel E. Bradford, Katherine P. Magruder, Rachel A. Korhumel, John J. Curtin.
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of negative affective and distinct emotional states (fear, anxiety) to use in research on psychopathology, substance use/abuse and broadly in affective science. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion.
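The potentiation computation described above reduces to a simple difference of means. Below is a minimal sketch under the assumption of one startle magnitude per probe and a condition label per probe; within-subject standardization and artifact rejection, which a real analysis would include, are omitted.

```python
import numpy as np

def startle_potentiation(startle_mag, cue_type):
    """Startle potentiation = mean startle magnitude during threat cues minus
    mean startle magnitude during matched no-threat cues.
    startle_mag : EMG startle response magnitude for each probe
    cue_type    : label per probe, e.g. 'certain_threat', 'uncertain_threat', 'no_threat'"""
    startle_mag = np.asarray(startle_mag, dtype=float)
    cue_type = np.asarray(cue_type)
    baseline = startle_mag[cue_type == "no_threat"].mean()
    fear = startle_mag[cue_type == "certain_threat"].mean() - baseline       # 100% cues
    anxiety = startle_mag[cue_type == "uncertain_threat"].mean() - baseline  # 20% cues
    return {"fear_potentiation": fear, "anxiety_potentiation": anxiety}
```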
Behavior, Issue 91, Startle; electromyography; shock; addiction; uncertainty; fear; anxiety; humans; psychophysiology; translational
51905
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Authors: Carmine Di Rienzo, Enrico Gratton, Fabio Beltram, Francesco Cardarelli.
Institutions: Scuola Normale Superiore, Instituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, a very high spatio-temporal resolution is required to catch the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently-labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not require tracking each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region on the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating acquired images at increasing time delays, for example every 2, 3, ..., n repetitions. It is possible to demonstrate that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a result of particle movement due to diffusion. Therefore, fitting of the series of autocorrelation functions makes it possible to extract the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488 labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE) it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-milli-second time range.
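To illustrate the analysis described above, here is a hedged NumPy/SciPy sketch of the iMSD idea: the spatial correlation between frames separated by increasing lags is computed with FFTs, a Gaussian is fit to each correlation peak, and the growth of the fitted variance with lag is read as the mean square displacement. Normalization and the treatment of the PSF contribution are simplified, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def spatial_autocorrelation(stack, lag):
    """Average spatial cross-correlation between frames separated by `lag`,
    computed with FFTs. stack: (n_frames, ny, nx) fluorescence image series."""
    acf = np.zeros(stack.shape[1:])
    n_pairs = 0
    for t in range(len(stack) - lag):
        a = stack[t] - stack[t].mean()
        b = stack[t + lag] - stack[t + lag].mean()
        corr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))
        acf += corr / (a.std() * b.std() * a.size)
        n_pairs += 1
    return acf / n_pairs

def imsd(stack, lags, pixel_um, frame_time_s):
    """Fit a Gaussian to each lag's correlation peak; the growth of its variance
    with lag approximates the mean square displacement (iMSD)."""
    def gauss(r2, amp, var, offset):
        return amp * np.exp(-r2 / var) + offset
    ny, nx = stack.shape[1:]
    y, x = np.indices((ny, nx))
    r2 = ((x - nx // 2) ** 2 + (y - ny // 2) ** 2) * pixel_um ** 2
    msd = []
    for lag in lags:
        acf = spatial_autocorrelation(stack, lag)
        popt, _ = curve_fit(gauss, r2.ravel(), acf.ravel(),
                            p0=[acf.max(), pixel_um ** 2, 0.0])
        msd.append(popt[1])       # peak variance ~ MSD plus a PSF-width term
    return np.array(lags) * frame_time_s, np.array(msd)
```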
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
51994
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
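The recasting arithmetic described above can be illustrated with a small sketch: the expected signal yield in a signal region is the model cross section times acceptance, efficiency, and integrated luminosity, and a model point is (conservatively) excluded when that yield exceeds the published 95% CL upper limit on signal events for that region. The numbers in the example are purely hypothetical.

```python
def expected_signal_events(cross_section_pb, acceptance, efficiency, luminosity_ifb):
    """Expected signal yield in one signal region.
    cross_section_pb       : model cross section in pb
    acceptance, efficiency : taken from published simplified-model tables
    luminosity_ifb         : integrated luminosity in fb^-1 (1 pb = 1000 fb)"""
    return cross_section_pb * 1000.0 * luminosity_ifb * acceptance * efficiency

def is_excluded(n_expected_signal, s95_observed):
    """In this conservative counting approach, a model point is excluded at 95% CL
    if its expected signal yield exceeds the published model-independent upper
    limit on signal events in that signal region."""
    return n_expected_signal > s95_observed

# Hypothetical example: sigma = 0.05 pb, A = 0.10, eps = 0.60, 20 fb^-1, S95 = 30 events
n_sig = expected_signal_events(0.05, 0.10, 0.60, 20.0)   # = 60 expected signal events
print(n_sig, is_excluded(n_sig, 30.0))                   # excluded in this toy example
```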
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
50419
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal-variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique is comprised of five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
50358
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
4375
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, the collection of detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their cooperation greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
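As one concrete illustration of the density-estimation step mentioned above, the sketch below builds a kernel density surface from projected GPS fixes and computes a simple activity-space radius. It uses generic SciPy/NumPy tools rather than the ArcGIS-based interface described in the paper, and the coordinate units and names are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def activity_density(xy, grid_size=200):
    """Kernel density surface of trajectory points to highlight activity hot spots.
    xy: (n_points, 2) projected coordinates of GPS fixes (e.g., meters)."""
    xy = np.asarray(xy, dtype=float)
    kde = gaussian_kde(xy.T)                       # Gaussian KDE with automatic bandwidth
    xmin, ymin = xy.min(axis=0)
    xmax, ymax = xy.max(axis=0)
    xg, yg = np.meshgrid(np.linspace(xmin, xmax, grid_size),
                         np.linspace(ymin, ymax, grid_size))
    density = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)
    return xg, yg, density

def activity_radius(xy):
    """A simple activity-space characteristic: root-mean-square distance of all
    fixes from the trajectory's centroid."""
    xy = np.asarray(xy, dtype=float)
    centroid = xy.mean(axis=0)
    return np.sqrt(((xy - centroid) ** 2).sum(axis=1).mean())
```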
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
50130
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
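The sketch below illustrates the multiscale entropy computation described above: the signal is coarse-grained at each scale by averaging non-overlapping windows, and sample entropy is computed for each coarse-grained series. This is a simplified reference implementation (the template-counting convention differs slightly from some published definitions) with illustrative parameter defaults.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy: negative log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1        # exclude the self-match
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(signal, max_scale=20):
    """Coarse-grain the signal at each scale (non-overlapping averages) and
    compute the sample entropy of each coarse-grained series."""
    signal = np.asarray(signal, dtype=float)
    mse = []
    for scale in range(1, max_scale + 1):
        n = len(signal) // scale
        coarse = signal[:n * scale].reshape(n, scale).mean(axis=1)
        mse.append(sample_entropy(coarse))
    return np.array(mse)
```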
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
50131
Fabrication And Characterization Of Photonic Crystal Slow Light Waveguides And Cavities
Authors: Christopher Paul Reardon, Isabella H. Rey, Karl Welna, Liam O'Faolain, Thomas F. Krauss.
Institutions: University of St Andrews.
Slow light has been one of the hot topics in the photonics community in the past decade, generating great interest both from a fundamental point of view and for its considerable potential for practical applications. Slow light photonic crystal waveguides, in particular, have played a major part and have been successfully employed for delaying optical signals1-4 and the enhancement of both linear5-7 and nonlinear devices.8-11 Photonic crystal cavities achieve similar effects to that of slow light waveguides, but over a reduced band-width. These cavities offer high Q-factor/volume ratio, for the realization of optically12 and electrically13 pumped ultra-low threshold lasers and the enhancement of nonlinear effects.14-16 Furthermore, passive filters17 and modulators18-19 have been demonstrated, exhibiting ultra-narrow line-width, high free-spectral range and record values of low energy consumption. To attain these exciting results, a robust repeatable fabrication protocol must be developed. In this paper we take an in-depth look at our fabrication protocol which employs electron-beam lithography for the definition of photonic crystal patterns and uses wet and dry etching techniques. Our optimised fabrication recipe results in photonic crystals that do not suffer from vertical asymmetry and exhibit very good edge-wall roughness. We discuss the results of varying the etching parameters and the detrimental effects that they can have on a device, leading to a diagnostic route that can be taken to identify and eliminate similar issues. The key to evaluating slow light waveguides is the passive characterization of transmission and group index spectra. Various methods have been reported, most notably resolving the Fabry-Perot fringes of the transmission spectrum20-21 and interferometric techniques.22-25 Here, we describe a direct, broadband measurement technique combining spectral interferometry with Fourier transform analysis.26 Our method stands out for its simplicity and power, as we can characterise a bare photonic crystal with access waveguides, without need for on-chip interference components, and the setup only consists of a Mach-Zehnder interferometer, with no need for moving parts and delay scans. When characterising photonic crystal cavities, techniques involving internal sources21 or external waveguides directly coupled to the cavity27 impact on the performance of the cavity itself, thereby distorting the measurement. Here, we describe a novel and non-intrusive technique that makes use of a cross-polarised probe beam and is known as resonant scattering (RS), where the probe is coupled out-of plane into the cavity through an objective. The technique was first demonstrated by McCutcheon et al.28 and further developed by Galli et al.29
Physics, Issue 69, Optics and Photonics, Astronomy, light scattering, light transmission, optical waveguides, photonics, photonic crystals, Slow-light, Cavities, Waveguides, Silicon, SOI, Fabrication, Characterization
50216
Characterization of Surface Modifications by White Light Interferometry: Applications in Ion Sputtering, Laser Ablation, and Tribology Experiments
Authors: Sergey V. Baryshev, Robert A. Erck, Jerry F. Moore, Alexander V. Zinovev, C. Emil Tripa, Igor V. Veryovkin.
Institutions: Argonne National Laboratory, Argonne National Laboratory, MassThink LLC.
In materials science and engineering it is often necessary to obtain quantitative measurements of surface topography with micrometer lateral resolution. From the measured surface, 3D topographic maps can be subsequently analyzed using a variety of software packages to extract the information that is needed. In this article we describe how white light interferometry, and optical profilometry (OP) in general, combined with generic surface analysis software, can be used for materials science and engineering tasks. A number of applications of white light interferometry for the investigation of surface modifications in mass spectrometry, and of wear phenomena in tribology and lubrication, are demonstrated. We characterize the products of the interaction of semiconductors and metals with energetic ions (sputtering) and laser irradiation (ablation), as well as ex situ measurements of wear of tribological test specimens. Specifically, we will discuss: (1) aspects of traditional ion sputtering-based mass spectrometry, such as sputtering rate/yield measurements on Si and Cu and the subsequent time-to-depth conversion; (2) quantitative characterization of the interaction of femtosecond laser irradiation with a semiconductor surface, which is important for applications such as ablation mass spectrometry, where the quantity of evaporated material can be studied and controlled via pulse duration and energy per pulse, so that by determining the crater geometry one can define depth and lateral resolution versus experimental setup conditions; and (3) measurements of surface roughness parameters in two dimensions, and quantitative measurements of the surface wear that occurs as a result of friction and wear tests. Some inherent drawbacks, possible artifacts, and uncertainty assessments of the white light interferometry approach will be discussed and explained.
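For the roughness measurements mentioned in point (3) above, the sketch below shows how two standard areal roughness parameters (arithmetic mean and RMS deviation) and a maximum depth could be extracted from an optical-profilometry height map after removing a best-fit plane. It is a generic NumPy illustration, not the commercial surface-analysis software used in the article.

```python
import numpy as np

def roughness_parameters(height_map_um):
    """Areal roughness parameters from a 2D height map (heights in micrometers),
    computed after removing the best-fit plane (tilt/levelling correction)."""
    z = np.asarray(height_map_um, dtype=float)
    ny, nx = z.shape
    y, x = np.indices(z.shape)
    # Least-squares plane fit z ~ a*x + b*y + c, then subtract it.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residual = z - (A @ coeffs).reshape(ny, nx)
    sa = np.mean(np.abs(residual))           # arithmetic mean deviation
    sq = np.sqrt(np.mean(residual ** 2))     # root-mean-square deviation
    depth = residual.min()                   # e.g. maximum crater depth after ablation
    return {"Sa_um": sa, "Sq_um": sq, "max_depth_um": depth}
```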
Materials Science, Issue 72, Physics, Ion Beams (nuclear interactions), Light Reflection, Optical Properties, Semiconductor Materials, White Light Interferometry, Ion Sputtering, Laser Ablation, Femtosecond Lasers, Depth Profiling, Time-of-flight Mass Spectrometry, Tribology, Wear Analysis, Optical Profilometry, wear, friction, atomic force microscopy, AFM, scanning electron microscopy, SEM, imaging, visualization
50260
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed how our Bayesian Change Point (BCP) algorithm had a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
4273
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides us with a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex onto the M/EEG sensors. Next, we will step through the processes necessary for denoising the sensor data from environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time-series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable us to make scientific inference across a subject population (i.e., perform group-level analysis) based on a common cortical coordinate space.
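The minimum-norm estimate at the heart of the procedure above can be written compactly: with gain (forward) matrix L, noise covariance C and regularization lambda^2, the source estimate is j = L^T (L L^T + lambda^2 * C)^(-1) y. The sketch below is a bare-bones NumPy version of that formula with an identity source covariance; the MNE software mentioned in the text adds whitening, depth weighting, and dSPM/sLORETA variants that are omitted here, and the default regularization shown is only a common convention.

```python
import numpy as np

def minimum_norm_estimate(gain, data, noise_cov, lambda2=1.0 / 9.0):
    """Regularized minimum-norm estimate of distributed dipole amplitudes.
    gain      : (n_sensors, n_sources) forward solution relating cortical dipoles
                to the MEG/EEG sensors
    data      : (n_sensors, n_times) sensor measurements
    noise_cov : (n_sensors, n_sensors) noise covariance (e.g., empty-room/baseline)
    lambda2   : regularization; 1/SNR^2, with 1/9 a commonly used value for evoked data"""
    inverse_operator = gain.T @ np.linalg.inv(gain @ gain.T + lambda2 * noise_cov)
    # Each row of the result is one cortical dipole's time course ("brain movie").
    return inverse_operator @ data
```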
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
4262
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California, University of California, University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal 1 with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data 2, and nearly real-time analyses 3. Haptic robots provide precise measurement and control of position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction such as reaching or grasping. The basic idea is to attach an 8-foot end effector supported in the center to the robot 4, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field from the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force-feedback in virtual reality experiments 5, 6, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal to noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, and ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field so there is no significant effect of the fMRI on the robot. The effect of the handle on the robot's kinematics is minimal since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite and well balanced on the 3DoF joint in the middle. The end result is an fMRI compatible, haptic system with about 1 cubic foot of working space, and, when combined with virtual reality, it allows for a new set of experiments to be performed in the fMRI environment including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification 5, 6.
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
3364
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Authors: Rajesh K. Kana, Donna L. Murdaugh, Lauren E. Libero, Mark R. Pennick, Heather M. Wadsworth, Rishi Deshpande, Christi P. Hu.
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that the brain does not function as a cohesive unit in autism, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard because autism is a disorder of dynamic processing and allows examination of communication between cortical networks, which appears to be where the underlying problem occurs in autism. Functional connectivity is defined as the temporal correlation of spatially separate neurological events1. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language problems2,3,4,5,6. One of the mysteries of autism is the coexistence of deficits in several domains along with relatively intact, sometimes enhanced, abilities. Such complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of the brain functioning in autism, the cortical underconnectivity theory,2,7 provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that is dependent on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole. Since cortical underconnectivity among brain regions, especially the frontal cortex and more posterior areas 3,6, has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology. A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that helps probe the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
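As a small concrete complement to the DTI description above, the sketch below computes fractional anisotropy, the standard scalar summary of white-matter microstructural integrity, from the three eigenvalues of the diffusion tensor at each voxel. Tensor fitting from the diffusion-weighted images themselves is not shown.

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues per voxel.
    eigenvalues: (..., 3) array of eigenvalues (lambda1, lambda2, lambda3)."""
    ev = np.asarray(eigenvalues, dtype=float)
    md = ev.mean(axis=-1, keepdims=True)                     # mean diffusivity
    numerator = np.sqrt(1.5 * ((ev - md) ** 2).sum(axis=-1))
    denominator = np.sqrt((ev ** 2).sum(axis=-1))
    return np.where(denominator > 0, numerator / denominator, 0.0)

# FA ranges from 0 (isotropic diffusion, e.g. in CSF) toward 1 for highly
# directional diffusion along coherent white-matter fiber bundles.
```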
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy
3178
Patterned Photostimulation with Digital Micromirror Devices to Investigate Dendritic Integration Across Branch Points
Authors: Conrad W. Liang, Michael Mohammadi, M. Daniel Santos, Cha-Min Tang.
Institutions: University of Maryland School of Medicine.
Light is a versatile and precise means to control neuronal excitability. The recent introduction of light-sensitive effectors such as channelrhodopsin and caged neurotransmitters has led to interest in developing better means to control patterns of light in space and time that are useful for experimental neuroscience. One conventional strategy, employed in confocal and 2-photon microscopy, is to focus light to a diffraction limited spot and then scan that single spot sequentially over the region of interest. This approach becomes problematic if large areas have to be stimulated within a brief time window, a problem more applicable to photostimulation than to imaging. An alternate strategy is to project the complete spatial pattern on the target with the aid of a digital micromirror device (DMD). The DMD approach is appealing because the hardware components are relatively inexpensive and the approach is supported by commercial interests. Because such a system is not available for upright microscopes, we will discuss the critical issues in the construction and operation of such a DMD system. Even though we will be primarily describing the construction of the system for UV photolysis, the modifications for building the much simpler visible light system for optogenetic experiments will also be provided. The UV photolysis system was used to carry out experiments to study a fundamental question in neuroscience: how are spatially distributed inputs integrated across distal dendritic branch points? The results suggest that integration can be non-linear across branch points and that the supralinearity is largely mediated by NMDA receptors.
Bioengineering, Issue 49, DMD, photolysis, dendrite, photostimulation, DLP, optogenetics
2003
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms still attempt to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.