Pubmed Article
Dynamic analysis and pattern visualization of forest fires.
PLoS ONE
PUBLISHED: 08-19-2014
This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long-range correlations and persistent memory. This study addresses a public domain forest fire catalogue containing information on events in Portugal from 1980 to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we use mutual information to correlate annual patterns, and compare and extract relationships among the data with visualization trees generated by hierarchical clustering algorithms. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps in which each object corresponds to a point, and objects perceived as similar to each other form clusters. The results are analysed to extract relationships among the data and to identify forest fire patterns.
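The mapping step described above can be sketched in a few lines. The following uses classical (Torgerson) MDS, with a plain Euclidean dissimilarity between toy annual burnt-area sequences standing in for the paper's mutual-information measure; all data and the distance choice are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed an n x n distance matrix in k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # take the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy stand-in for annual impulse sequences: burnt area per day, 4 years.
rng = np.random.default_rng(0)
years = rng.random((4, 365))
# Pairwise dissimilarity between annual patterns (Euclidean here, for brevity).
d = np.linalg.norm(years[:, None, :] - years[None, :, :], axis=-1)
coords = classical_mds(d, k=2)  # each year becomes one point on the 2D map
```

Years with similar fire patterns land close together on the map; running hierarchical clustering (e.g. scipy.cluster.hierarchy.linkage) on the same dissimilarity matrix would yield the visualization trees the abstract mentions.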
Authors: Yves Harder, Daniel Schmauss, Reto Wettstein, José T. Egaña, Fabian Weiss, Andrea Weinzierl, Anna Schuldt, Hans-Günther Machens, Michael D. Menger, Farid Rezaeian.
Published: 11-17-2014
ABSTRACT
Despite profound expertise and advanced surgical techniques, ischemia-induced complications ranging from wound breakdown to extensive tissue necrosis still occur, particularly in reconstructive flap surgery. Multiple experimental flap models have been developed to analyze underlying causes and mechanisms and to investigate treatment strategies to prevent ischemic complications. The limiting factor of most models is that they do not allow direct and repetitive visualization of microvascular architecture and hemodynamics. The goal of this protocol is to present a well-established mouse model that provides these missing capabilities. Harder et al. have developed a model of a musculocutaneous flap with a random perfusion pattern that undergoes acute persistent ischemia and results in ~50% necrosis after 10 days if left untreated. With the aid of intravital epi-fluorescence microscopy, this chamber model allows repetitive visualization of morphology and hemodynamics in different regions of interest over time. Associated processes such as apoptosis, inflammation, microvascular leakage and angiogenesis can be investigated and correlated with immunohistochemical and molecular protein assays. To date, the model has proven feasible and reproducible in several published experimental studies investigating the effects of pre-, peri- and postconditioning of ischemically challenged tissue.
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Authors: Nikolai Hentze, Matthias P. Mayer.
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and function is therefore essential for a complete understanding of how proteins fulfill their great variety of tasks. One way to study the conformational changes a protein undergoes while progressing through its functional cycle is hydrogen (1H/2H) exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by, e.g., crystallography. It is used to study protein folding and unfolding, binding of small-molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe, as an example, how to reveal the interaction interface of two proteins in a complex.
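As a minimal sketch of the per-peptide readout, relative deuterium uptake is commonly computed from centroid masses of the undeuterated, partially exchanged, and fully deuterated species; the masses below are hypothetical, not values from this protocol.

```python
# Hypothetical centroid masses (Da) for one peptide: undeuterated control,
# after exchange time t, and fully deuterated control.
m0, m_t, m_full = 1500.00, 1503.20, 1508.00

# Relative deuterium uptake (%), the standard per-peptide HX-MS readout.
uptake_pct = 100.0 * (m_t - m0) / (m_full - m0)  # here: 40.0 %
```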
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial to understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically 50 mM sodium acetate or 50 mM Tris) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, detecting potential enzyme activity rates as a function of the difference in enzyme concentrations per sample. Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting.
Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
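A back-of-the-envelope sketch of how a potential activity rate is derived from the fluorescence increase, a MUB standard-curve slope, and the soil mass and incubation time; every number below (including the quench coefficient) is assumed for illustration only.

```python
# Assumed assay numbers for one soil sample.
std_curve_slope = 250.0             # fluorescence units per nmol MUB (assumed)
quench = 0.9                        # soil quench coefficient (assumed)
f_initial, f_final = 300.0, 1800.0  # sample fluorescence before/after incubation
soil_g, hours = 1.0, 3.0            # dry soil mass (g) and incubation time (hr)

# Fluorophore released, corrected by the standard curve and soil quenching.
nmol_released = (f_final - f_initial) / (std_curve_slope * quench)
activity = nmol_released / (soil_g * hours)  # nmol g^-1 dry soil hr^-1
```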
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects them to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
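The time-stamped event records described above can be made concrete with a toy stand-in (the actual system uses MATLAB-based code, so this Python sketch, with its invented subjects, events, and timestamps, only illustrates the data structure, not the real software).

```python
# Toy time-stamped event record: (timestamp_s, subject, event) tuples,
# the kind of raw data trail the analysis code harvests several times a day.
events = [
    (10.0, "mouse1", "head_entry"),
    (12.5, "mouse1", "pellet"),
    (13.0, "mouse2", "head_entry"),
    (20.0, "mouse1", "head_entry"),
]

# One "harvest" pass: group events per subject while preserving order.
by_subject = {}
for t, subject, event in events:
    by_subject.setdefault(subject, []).append((t, event))

# Example daily summary statistic: head entries for one subject.
head_entries = sum(1 for t, e in by_subject["mouse1"] if e == "head_entry")
```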
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Investigating the Three-dimensional Flow Separation Induced by a Model Vocal Fold Polyp
Authors: Kelley C. Stewart, Byron D. Erath, Michael W. Plesniak.
Institutions: The George Washington University, Clarkson University.
The fluid-structure energy exchange process for normal speech has been studied extensively, but it is not well understood for pathological conditions. Polyps and nodules, which are geometric abnormalities that form on the medial surface of the vocal folds, can disrupt vocal fold dynamics and thus can have devastating consequences on a patient's ability to communicate. Our laboratory has reported particle image velocimetry (PIV) measurements, within an investigation of a model polyp located on the medial surface of an in vitro driven vocal fold model, which show that such a geometric abnormality considerably disrupts the glottal jet behavior. This flow field adjustment is a likely reason for the severe degradation of the vocal quality in patients with polyps. A more complete understanding of the formation and propagation of vortical structures from a geometric protuberance, such as a vocal fold polyp, and the resulting influence on the aerodynamic loadings that drive the vocal fold dynamics, is necessary for advancing the treatment of this pathological condition. The present investigation concerns the three-dimensional flow separation induced by a wall-mounted prolate hemispheroid with a 2:1 aspect ratio in cross flow, i.e. a model vocal fold polyp, using an oil-film visualization technique. Unsteady, three-dimensional flow separation and its impact on the wall pressure loading are examined using skin friction line visualization and wall pressure measurements.
Bioengineering, Issue 84, oil-flow visualization, vocal fold polyp, three-dimensional flow separation, aerodynamic pressure loadings
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
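The core IC computation can be sketched numerically, assuming each region already has a per-timepoint MVP-discriminability time series (e.g. classifier decision values); the synthetic series below simply share a common fluctuation, and the use of a plain Pearson correlation is an illustrative simplification of the published method.

```python
import numpy as np

def informational_connectivity(disc_a, disc_b):
    """Pearson correlation of two regions' MVP-discriminability time series."""
    a = disc_a - disc_a.mean()
    b = disc_b - disc_b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Synthetic discriminability series with a shared fluctuation plus noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(200)
region1 = shared + 0.5 * rng.standard_normal(200)
region2 = shared + 0.5 * rng.standard_normal(200)
r_ic = informational_connectivity(region1, region2)  # high positive IC
```

Two regions whose decodability rises and falls together across timepoints yield a high IC value even if their raw time courses (and hence FC) do not correlate.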
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
Super-resolution Imaging of the Cytokinetic Z Ring in Live Bacteria Using Fast 3D-Structured Illumination Microscopy (f3D-SIM)
Authors: Lynne Turnbull, Michael P. Strauss, Andrew T. F. Liew, Leigh G. Monahan, Cynthia B. Whitchurch, Elizabeth J. Harry.
Institutions: University of Technology, Sydney.
Imaging of biological samples using fluorescence microscopy has advanced substantially with new technologies that overcome the resolution barrier imposed by the diffraction of light, allowing super-resolution imaging of live samples. There are currently three main types of super-resolution techniques: stimulated emission depletion (STED), single-molecule localization microscopy (including techniques such as PALM, STORM, and GSDIM), and structured illumination microscopy (SIM). While STED and single-molecule localization techniques show the largest increases in resolution, they have been slower to offer increased speeds of image acquisition. Three-dimensional SIM (3D-SIM) is a wide-field fluorescence microscopy technique that offers a number of advantages over both single-molecule localization and STED. Resolution is improved, with typical lateral and axial resolutions of 110 nm and 280 nm, respectively, and a sampling depth of up to 30 µm from the coverslip, allowing imaging of whole cells. Recent advancements in the technology (fast 3D-SIM) increase the capture rate of raw images, allowing fast capture of biological processes occurring in seconds while significantly reducing photo-toxicity and photobleaching. Here we describe the use of one such method to image bacterial cells harboring the fluorescently labelled cytokinetic FtsZ protein, to show how cells are analyzed and the type of unique information that this technique can provide.
Molecular Biology, Issue 91, super-resolution microscopy, fluorescence microscopy, OMX, 3D-SIM, Blaze, cell division, bacteria, Bacillus subtilis, Staphylococcus aureus, FtsZ, Z ring constriction
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Authors: Jennifer L. Koch, David W. Carey.
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which create entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex [1]. Here we present a protocol that we have developed, based on the artificial infestation technique of Houston [2], which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of the management of BBD through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
Flat Mount Preparation for Observation and Analysis of Zebrafish Embryo Specimens Stained by Whole Mount In situ Hybridization
Authors: Christina N. Cheng, Yue Li, Amanda N. Marra, Valerie Verdun, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish embryo is now commonly used for basic and biomedical research to investigate the genetic control of developmental processes and to model congenital abnormalities. During the first day of life, the zebrafish embryo progresses through many developmental stages including fertilization, cleavage, gastrulation, segmentation, and the organogenesis of structures such as the kidney, heart, and central nervous system. The anatomy of a young zebrafish embryo presents several challenges for the visualization and analysis of the tissues involved in many of these events because the embryo develops in association with a round yolk mass. Thus, for accurate analysis and imaging of experimental phenotypes in fixed embryonic specimens between the tailbud and 20 somite stage (10 and 19 hours post fertilization (hpf), respectively), such as those stained using whole mount in situ hybridization (WISH), it is often desirable to remove the embryo from the yolk ball and to position it flat on a glass slide. However, performing a flat mount procedure can be tedious. Therefore, successful and efficient flat mount preparation is greatly facilitated through the visual demonstration of the dissection technique, and also helped by using reagents that assist in optimal tissue handling. Here, we provide our WISH protocol for one or two-color detection of gene expression in the zebrafish embryo, and demonstrate how the flat mounting procedure can be performed on this example of a stained fixed specimen. This flat mounting protocol is broadly applicable to the study of many embryonic structures that emerge during early zebrafish development, and can be implemented in conjunction with other staining methods performed on fixed embryo samples.
Developmental Biology, Issue 89, animals, vertebrates, fishes, zebrafish, growth and development, morphogenesis, embryonic and fetal development, organogenesis, natural science disciplines, embryo, whole mount in situ hybridization, flat mount, deyolking, imaging
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Authors: Kristen K. McCampbell, Kristin N. Springer, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, focused ion beam scanning electron microscopy (FIB-SEM) of mildly stained samples, and serial block-face scanning electron microscopy (SBF-SEM) of heavily stained samples. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
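The automated end of the four-approach spectrum can be illustrated with the simplest possible example, a global intensity threshold over a toy volume; real EM volumes generally need the more elaborate semi-automated or custom algorithms discussed above, so this is only a conceptual sketch with invented data.

```python
import numpy as np

def threshold_segment(volume, thresh):
    """Binary segmentation: mark voxels whose intensity exceeds a threshold."""
    return volume > thresh

rng = np.random.default_rng(2)
vol = rng.random((4, 5, 5))             # toy 3D grayscale volume
mask = threshold_segment(vol, 0.5)      # boolean feature mask
occupied_fraction = mask.mean()         # fraction of the volume occupied
```

The occupied fraction is exactly the kind of quantitative readout (how much of the volume a feature fills) that the triage scheme weighs when choosing a segmentation approach.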
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
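As a sketch of the source-reconstruction step, an L2 minimum-norm estimate maps scalp data back to cortical sources through a regularized inverse of the head model's leadfield; the random leadfield, channel/source counts, and regularization value below are placeholders for a real MRI-derived model, not the article's actual pipeline.

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=1e-2):
    """L2 minimum-norm inverse: j = L^T (L L^T + lam*I)^(-1) y."""
    gram = leadfield @ leadfield.T + lam * np.eye(leadfield.shape[0])
    return leadfield.T @ np.linalg.solve(gram, eeg)

rng = np.random.default_rng(3)
L = rng.standard_normal((32, 500))   # 32 channels x 500 cortical sources
j_true = np.zeros(500)
j_true[42] = 1.0                     # one active source (hypothetical)
y = L @ j_true                       # simulated scalp potentials
j_hat = minimum_norm_estimate(L, y)  # distributed source estimate
```

Because the problem is underdetermined (far more sources than channels), the estimate is a smeared, distributed pattern rather than a point; this is why accurate, age-appropriate head models matter for localization.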
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways: voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metric information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
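The FA metric referred to above has a standard closed form in the diffusion tensor's eigenvalues; the eigenvalue triples below are illustrative values, not data from the study.

```python
import numpy as np

def fractional_anisotropy(evals):
    """Standard FA from the three diffusion-tensor eigenvalues."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()  # mean diffusivity
    return float(np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2)))

fa_isotropic = fractional_anisotropy([1.0, 1.0, 1.0])  # 0.0: isotropic diffusion
fa_fiber = fractional_anisotropy([1.7, 0.3, 0.2])      # high FA: coherent WM tract
```

FA ranges from 0 (isotropic) to 1 (diffusion along a single axis), which is what makes voxelwise and tractwise FA comparisons sensitive to WM degeneration.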
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Authors: Laura Ekstrand, Nikolaus Karpinsky, Yajun Wang, Song Zhang.
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8]. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
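The three-fringe recovery step can be made concrete with the classic three-step phase-shifting formula (phase shifts of 2π/3); the synthetic fringes below encode a known phase map, with background and modulation intensities chosen arbitrarily for illustration.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting (2*pi/3 shifts): recover the wrapped phase."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringe intensities for a known phase map.
phi_true = np.linspace(-1.0, 1.0, 5)  # radians, inside (-pi, pi]
i_bg, i_mod = 100.0, 50.0             # background and modulation (arbitrary)
i1 = i_bg + i_mod * np.cos(phi_true - 2.0 * np.pi / 3.0)
i2 = i_bg + i_mod * np.cos(phi_true)
i3 = i_bg + i_mod * np.cos(phi_true + 2.0 * np.pi / 3.0)
phi = wrapped_phase(i1, i2, i3)       # recovers phi_true (up to 2*pi wraps)
```

In a real DFP system, the wrapped phase is then unwrapped and converted to depth through the calibrated camera-projector triangulation geometry.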
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters [1-7]. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is computationally practical to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter [8] that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model [7] to a conventional model [9]. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
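The statistical comparison step can be sketched as a standard F-test between nested least-squares fits; the residual sums of squares, parameter counts, and frame count below are hypothetical voxel values, not results from the study.

```python
def f_statistic(rss_reduced, rss_full, p_reduced, p_full, n):
    """F-test comparing a reduced (conventional) vs full (time-varying) model."""
    numerator = (rss_reduced - rss_full) / (p_full - p_reduced)
    denominator = rss_full / (n - p_full)
    return numerator / denominator

# Hypothetical voxel: 30 PET frames, 3-parameter vs 5-parameter model fits.
f = f_statistic(rss_reduced=12.0, rss_full=8.0, p_reduced=3, p_full=5, n=30)
# A large F at a voxel favors the time-varying model there (here f == 6.25),
# which is what the masking step uses to select voxels for the movie.
```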
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
Rapid Point-of-Care Assay of Enoxaparin Anticoagulant Efficacy in Whole Blood
Authors: Mario A. Inchiosa Jr., Suryanarayana Pothula, Keshar Kubal, Vajubhai T. Sanchala, Iris Navarro.
Institutions: New York Medical College.
There is the need for a clinical assay to determine the extent to which a patient's blood is effectively anticoagulated by the low-molecular-weight-heparin (LMWH), enoxaparin. There are also urgent clinical situations where it would be important if this could be determined rapidly. The present assay is designed to accomplish this. We only assayed human blood samples that were spiked with known concentrations of enoxaparin. The essential feature of the present assay is the quantification of the efficacy of enoxaparin in a patient's blood sample by degrading it to complete inactivity with heparinase. Two blood samples were drawn into Vacutainer tubes (Becton Dickinson; Franklin Lakes, NJ) that were spiked with enoxaparin; one sample was digested with heparinase for 5 min at 37 °C, the other sample represented the patient's baseline anticoagulated status. The percent shortening of clotting time in the heparinase-treated sample, as compared to the baseline state, yielded the anticoagulant contribution of enoxaparin. We used the portable, battery-operated Hemochron 801 apparatus for measurements of clotting times (International Technidyne Corp., Edison, NJ). The apparatus has 2 thermostatically controlled (37 °C) assay tube wells. We conducted the assays in two types of assay cartridges that are available from the manufacturer of the instrument. One cartridge was modified to increase its sensitivity. We removed the kaolin from the FTK-ACT cartridge by extensive rinsing with distilled water, leaving only the glass surface of the tube, and perhaps the detection magnet, as activators. We called this our minimally activated assay (MAA). The use of a minimally activated assay has been studied by us and others. 2-4 The second cartridge that was studied was an activated partial thromboplastin time (aPTT) assay (A104). This was used as supplied from the manufacturer. The thermostated wells of the instrument were used for both the heparinase digestion and coagulation assays.
The assay can be completed within 10 min. The MAA assay showed robust changes in clotting time after heparinase digestion of enoxaparin over a typical clinical concentration range. At 0.2 anti-Xa I.U. of enoxaparin per ml of blood sample, heparinase digestion caused an average decrease of 9.8% (20.4 sec) in clotting time; at 1.0 I.U. per ml of enoxaparin there was a 41.4% decrease (148.8 sec). This report only presents the experimental application of the assay; its value in a clinical setting must still be established.
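The anticoagulant contribution described above is a simple percent-shortening calculation. The sketch below reproduces the reported 0.2 I.U./ml figures; the ~208 s baseline is back-computed from the reported averages and is illustrative only.

```python
def percent_shortening(baseline_s, heparinase_s):
    """Anticoagulant contribution of enoxaparin: percent decrease in
    clotting time after heparinase digestion, relative to baseline."""
    return 100.0 * (baseline_s - heparinase_s) / baseline_s

# Back-computed from the reported 0.2 I.U./ml figures: a 20.4 s drop
# that amounts to 9.8% of baseline implies a baseline of ~208 s.
baseline = 20.4 / 0.098
assert round(percent_shortening(baseline, baseline - 20.4), 1) == 9.8
```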
Medicine, Issue 68, Immunology, Physiology, Pharmacology, low-molecular-weight-heparin, low-molecular-weight-heparin assay, LMWH point-of-care assay, anti-Factor-Xa activity, enoxaparin, heparinase, whole blood, assay
Mapping Bacterial Functional Networks and Pathways in Escherichia coli using Synthetic Genetic Arrays
Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili.
Institutions: University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9, 10. Here, we present the key steps required to perform a quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm) - marked mutant alleles from engineered Hfr (High frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan) - marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9, 12, 13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics.
After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2 as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex to the M/EEG sensors.
Next, we will step through the processes needed to denoise the sensor data of environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time-series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable us to make scientific inference across a subject population (i.e., perform group-level analysis) based on a common cortical coordinate space.
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
Non-invasive Optical Imaging of the Lymphatic Vasculature of a Mouse
Authors: Holly A. Robinson, SunKuk Kwon, Mary A. Hall, John C. Rasmussen, Melissa B. Aldrich, Eva M. Sevick-Muraca.
Institutions: University of Texas Health Science Center-Houston.
The lymphatic vascular system is an important component of the circulatory system that maintains fluid homeostasis, provides immune surveillance, and mediates fat absorption in the gut. Yet despite its critical function, there is comparatively little understanding of how the lymphatic system adapts to serve these functions in health and disease1. Recently, we have demonstrated the ability to dynamically image lymphatic architecture and lymph "pumping" action in normal human subjects as well as in persons suffering lymphatic dysfunction using trace administration of a near-infrared fluorescent (NIRF) dye and a custom, Gen III-intensified imaging system2-4. NIRF imaging showed dramatic changes in lymphatic architecture and function with human disease. It remains unclear how these changes occur and new animal models are being developed to elucidate their genetic and molecular basis. In this protocol, we present NIRF lymphatic, small animal imaging5,6 using indocyanine green (ICG), a dye that has been used for 50 years in humans7, and a NIRF dye-labeled cyclic albumin binding domain (cABD-IRDye800) peptide that preferentially binds mouse and human albumin8. Approximately 5.5 times brighter than ICG, cABD-IRDye800 has a similar lymphatic clearance profile and can be injected in smaller doses than ICG to achieve sufficient NIRF signals for imaging8. Because both cABD-IRDye800 and ICG bind to albumin in the interstitial space8, they both may depict active protein transport into and within the lymphatics. Intradermal (ID) injections (5-50 μl) of ICG (645 μM) or cABD-IRDye800 (200 μM) in saline are administered to the dorsal aspect of each hind paw and/or the left and right side of the base of the tail of an isoflurane-anesthetized mouse. The resulting dye concentration in the animal is 83-1,250 μg/kg for ICG or 113-1,700 μg/kg for cABD-IRDye800. 
Immediately following injections, functional lymphatic imaging is conducted for up to 1 hr using a customized, small animal NIRF imaging system. Whole animal spatial resolution can depict fluorescent lymphatic vessels of 100 microns or less, and images of structures up to 3 cm in depth can be acquired9. Images are acquired using V++ software and analyzed using ImageJ or MATLAB software. During analysis, consecutive regions of interest (ROIs) encompassing the entire vessel diameter are drawn along a given lymph vessel. The dimensions for each ROI are kept constant for a given vessel and NIRF intensity is measured for each ROI to quantitatively assess "packets" of lymph moving through vessels.
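Quantifying lymph "packets" from the ROI intensity traces amounts to detecting pulses in a time series. A minimal sketch, using a simple threshold-crossing count on synthetic data (the actual ImageJ/MATLAB analysis may differ):

```python
import numpy as np

def count_packets(intensity, threshold):
    """Count lymph 'packets' as upward threshold crossings in a NIRF
    intensity time series measured over one ROI."""
    above = intensity >= threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return len(crossings)

# Toy trace: three intensity pulses riding on a flat baseline,
# mimicking packets of dye-labeled lymph passing through the ROI.
t = np.arange(600)                                        # frames
trace = 10.0 + 50.0 * (((t % 200) >= 50) & ((t % 200) < 70))
```

Repeating this over the consecutive ROIs drawn along a vessel would also yield packet transit times between ROIs.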
Immunology, Issue 73, Medicine, Anatomy, Physiology, Molecular Biology, Biomedical Engineering, Cancer Biology, Optical imaging, lymphatic imaging, mouse imaging, non-invasive imaging, near-infrared fluorescence, vasculature, circulatory system, lymphatic system, lymph, dermis, injection, imaging, mouse, animal model
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their cooperation greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, being limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data.
We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as the activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
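As an example of the activity-space characterization step, one simple metric is an activity radius computed from track points. The definition below (mean distance from the track centroid) is one common choice and may differ from the paper's exact definition:

```python
import math

def activity_radius(points):
    """Activity-space radius: mean distance of track points (x, y) from
    their centroid -- a simple characterization of one's activity space."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / n

# four points on a unit circle around the origin
pts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert activity_radius(pts) == 1.0
```

For GPS tracks the Euclidean distance would be replaced with a geodesic distance on projected coordinates.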
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Extracellularly Identifying Motor Neurons for a Muscle Motor Pool in Aplysia californica
Authors: Hui Lu, Jeffrey M. McManus, Hillel J. Chiel.
Institutions: Case Western Reserve University.
In animals with large identified neurons (e.g. mollusks), analysis of motor pools is done using intracellular techniques1,2,3,4. Recently, we developed a technique to extracellularly stimulate and record individual neurons in Aplysia californica5. We now describe a protocol for using this technique to uniquely identify and characterize motor neurons within a motor pool. This extracellular technique has several advantages. First, extracellular electrodes can stimulate and record neurons through the sheath5, so the sheath does not need to be removed. Thus, neurons will be healthier in extracellular experiments than in intracellular ones. Second, if ganglia are rotated by appropriate pinning of the sheath, extracellular electrodes can access neurons on both sides of the ganglion, which makes it easier and more efficient to identify multiple neurons in the same preparation. Third, extracellular electrodes do not need to penetrate cells, and thus can be easily moved back and forth among neurons, causing less damage to them. This is especially useful when one tries to record multiple neurons during repeating motor patterns that may only persist for minutes. Fourth, extracellular electrodes are more flexible than intracellular ones during muscle movements. Intracellular electrodes may pull out and damage neurons during muscle contractions. In contrast, since extracellular electrodes are gently pressed onto the sheath above neurons, they usually stay above the same neuron during muscle contractions, and thus can be used in more intact preparations. To uniquely identify motor neurons for a motor pool (in particular, the I1/I3 muscle in Aplysia) using extracellular electrodes, one can use features that do not require intracellular measurements as criteria: soma size and location, axonal projection, and muscle innervation4,6,7.
For the particular motor pool used to illustrate the technique, we recorded from buccal nerves 2 and 3 to measure axonal projections, and measured the contraction forces of the I1/I3 muscle to determine the pattern of muscle innervation for the individual motor neurons. We demonstrate the complete process of first identifying motor neurons using muscle innervation, then characterizing their timing during motor patterns, creating a simplified diagnostic method for rapid identification. The simplified and more rapid diagnostic method is superior for more intact preparations, e.g. in the suspended buccal mass preparation8 or in vivo9. This process can also be applied in other motor pools10,11,12 in Aplysia or in other animal systems2,3,13,14.
Neuroscience, Issue 73, Physiology, Biomedical Engineering, Anatomy, Behavior, Neurobiology, Animal, Neurosciences, Neurophysiology, Electrophysiology, Aplysia, Aplysia californica, California sea slug, invertebrate, feeding, buccal mass, ganglia, motor neurons, neurons, extracellular stimulation and recordings, extracellular electrodes, animal model
Fabrication of Nano-engineered Transparent Conducting Oxides by Pulsed Laser Deposition
Authors: Paolo Gondoni, Matteo Ghidelli, Fabio Di Fonzo, Andrea Li Bassi, Carlo S. Casari.
Institutions: Politecnico di Milano, Istituto Italiano di Tecnologia.
Nanosecond Pulsed Laser Deposition (PLD) in the presence of a background gas allows the deposition of metal oxides with tunable morphology, structure, density and stoichiometry by a proper control of the plasma plume expansion dynamics. Such versatility can be exploited to produce nanostructured films ranging from compact and dense to nanoporous, characterized by a hierarchical assembly of nano-sized clusters. In particular we describe the detailed methodology to fabricate two types of Al-doped ZnO (AZO) films as transparent electrodes in photovoltaic devices: 1) at low O2 pressure, compact films with electrical conductivity and optical transparency close to the state-of-the-art transparent conducting oxides (TCO) can be deposited at room temperature, to be compatible with thermally sensitive materials such as polymers used in organic photovoltaics (OPVs); 2) highly light scattering hierarchical structures resembling a forest of nano-trees are produced at higher pressures. Such structures show high Haze factor (>80%) and may be exploited to enhance the light trapping capability. The method here described for AZO films can be applied to other metal oxides relevant for technological applications such as TiO2, Al2O3, WO3 and Ag4O4.
Materials Science, Issue 72, Physics, Nanotechnology, Nanoengineering, Oxides, thin films, thin film theory, deposition and growth, Pulsed laser Deposition (PLD), Transparent conducting oxides (TCO), Hierarchically organized Nanostructured oxides, Al doped ZnO (AZO) films, enhanced light scattering capability, gases, deposition, nanoporus, nanoparticles, Van der Pauw, scanning electron microscopy, SEM
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
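The oriented-texture analysis in the first step rests on Gabor filtering. The toy filter bank below recovers the dominant orientation of a synthetic stripe patch; it is a sketch of the principle only, omitting the phase portraits and the node/fractal features used in the actual method.

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=3.0, wavelength=8.0):
    """Complex Gabor kernel oriented at angle theta (radians):
    a Gaussian envelope times a complex sinusoid along the rotated x-axis."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.exp(2j * np.pi * xr / wavelength)

def dominant_orientation(patch, n_angles=12):
    """Angle whose Gabor filter gives the strongest (phase-invariant) response."""
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    responses = [abs((gabor_kernel(a) * patch).sum()) for a in angles]
    return angles[int(np.argmax(responses))]

# synthetic patch: intensity varies along x (stripes oriented along y)
yy, xx = np.mgrid[0:21, 0:21]
vertical_stripes = np.cos(2 * np.pi * xx / 8.0)
```

Mapping such orientation estimates over a whole mammogram is what feeds the phase-portrait analysis that locates node-like sites.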
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences.
These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
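SCOPE's over-representation criterion can be illustrated with a simple binomial tail test (SCOPE's actual scoring also incorporates motif position preference and is more elaborate):

```python
from math import comb

def overrepresentation_pvalue(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability of seeing at
    least k motif-containing promoters among n coregulated genes if each
    promoter contains the motif with background probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 8 of 10 coregulated genes contain the motif, vs. a 20% background rate:
# far more than expected by chance, so the motif is a strong candidate.
pval = overrepresentation_pvalue(8, 10, 0.2)
assert pval < 1e-3
```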
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Computer-Generated Animal Model Stimuli
Authors: Kevin L. Woo.
Institutions: Macquarie University.
Communication between animals is diverse and complex. Animals may communicate using auditory, seismic, chemosensory, electrical, or visual signals. In particular, understanding the constraints on visual signal design for communication has been of great interest. Traditional methods for investigating animal interactions have used basic observational techniques, staged encounters, or physical manipulation of morphology. Less intrusive methods have tried to simulate conspecifics using crude playback tools, such as mirrors, still images, or models. As technology has become more advanced, video playback has emerged as another tool with which to examine visual communication (Rosenthal, 2000). However, to move one step further, the application of computer animation now allows researchers to specifically isolate critical components necessary to elicit social responses from conspecifics, and manipulate these features to control interactions. Here, I provide detail on how to create an animation using the Jacky dragon as a model, but this process may be adaptable for other species. In building the animation, I elected to use LightWave 3D to alter object morphology, add texture, install bones, and provide comparable weight shading that prevents exaggerated movement. The animation is then matched to select motor patterns to replicate critical movement features. Finally, the sequence must be rendered into an individual clip for presentation. Although there are other adaptable techniques, this particular method has been demonstrated to be effective in eliciting both conspicuous and social responses in staged interactions.
Neuroscience, Issue 6, behavior, lizard, simulation, animation
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.