PubMed Article
Fractional vegetation cover estimation based on an improved selective endmember spectral mixture model.
PLoS ONE
PUBLISHED: 04-24-2015
Vegetation is an important component of ecosystems, and estimating fractional vegetation cover is essential for monitoring vegetation growth in a region. Using Landsat TM images and HJ-1B images as data sources, this study proposes an improved selective endmember linear spectral mixture model (SELSMM) to estimate fractional vegetation cover in the Huangfuchuan watershed in China. We compared the result with the vegetation coverage estimated by the standard linear spectral mixture model (LSMM) and tested the accuracy of both results against field survey data to assess the effectiveness of the two models. The results indicated that: (1) the RMSE of the SELSMM estimate based on TM images was the lowest, at 0.044; the RMSEs of LSMM on TM images, SELSMM on HJ-1B images and LSMM on HJ-1B images were 0.052, 0.077 and 0.082, respectively, all higher than that of SELSMM on TM images; (2) the R2 values of SELSMM on TM images, LSMM on TM images, SELSMM on HJ-1B images and LSMM on HJ-1B images were 0.668, 0.531, 0.342 and 0.336, respectively. Among these models, SELSMM on TM images therefore had the highest estimation accuracy and the highest correlation with measured vegetation coverage. Of the two methods tested, SELSMM is superior to LSMM in estimating vegetation coverage, and it also unmixes mixed pixels of TM images better than those of HJ-1B images. SELSMM based on TM images is thus comparatively accurate and reliable for regional fractional vegetation cover estimation.
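Both LSMM and SELSMM rest on the linear mixing assumption: each pixel's reflectance is a weighted sum of endmember spectra, with non-negative fractions summing to one, and the vegetation fraction is the fractional vegetation cover. As a minimal illustration of that idea (a sketch with invented endmember spectra, not the authors' SELSMM implementation):

import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, sum_weight=100.0):
    # Fully constrained unmixing: append a heavily weighted row of ones
    # so the non-negative least-squares solution also sums to ~1.
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical endmember reflectances (rows: 4 bands; columns: vegetation, bare soil)
E = np.array([[0.05, 0.22],
              [0.07, 0.28],
              [0.45, 0.33],
              [0.50, 0.38]])
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]       # a synthetic 60% vegetated pixel
fractions = unmix_pixel(pixel, E)
print("fractional vegetation cover ~", round(fractions[0], 3))   # ~0.6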
ABSTRACT
Understanding the sources of pollution in a stream is vital to preserving, restoring, and maintaining the stream's function and the habitat it provides. Sediment from highly eroding streambanks is a major source of pollution in a stream system and has the potential to jeopardize habitat, infrastructure, and stream function. Watershed management practices throughout the Cleveland Metroparks attempt to locate and inventory sources of potential streambank erosion and rate their risk, to assist in formulating effective stream, riparian, and habitat management recommendations. The Bank Erosion Hazard Index (BEHI), developed by David Rosgen of Wildland Hydrology, is a fluvial geomorphic assessment procedure used to evaluate the susceptibility of streambanks to erosion based on a combination of several variables that are sensitive to various processes of erosion. This protocol can be time consuming, difficult for non-professionals, and confined to specific geomorphic regions. To address these constraints and to help maintain consistency and reduce user bias, modifications to the protocol include a "Pre-Screening Questionnaire", elimination of the Study Bank-Height Ratio metric (including the bankfull determination), and an adjusted scoring system. This modified protocol was used to assess several high-priority streams within the Cleveland Metroparks. The original BEHI protocol was also used to confirm the results of the modified BEHI protocol. After using the modified assessment in the field and comparing it to the original BEHI method, the two were found to produce comparable BEHI ratings of the streambanks, while the modified protocol required significantly less time and fewer resources to complete.
20 Related JoVE Articles!
Integrated Field Lysimetry and Porewater Sampling for Evaluation of Chemical Mobility in Soils and Established Vegetation
Authors: Audrey R. Matteson, Denis J. Mahoney, Travis W. Gannon, Matthew L. Polizzotto.
Institutions: North Carolina State University, North Carolina State University.
Potentially toxic chemicals are routinely applied to land to meet growing demands on waste management and food production, but the fate of these chemicals is often not well understood. Here we demonstrate an integrated field lysimetry and porewater sampling method for evaluating the mobility of chemicals applied to soils and established vegetation. Lysimeters, open columns made of metal or plastic, are driven into bareground or vegetated soils. Porewater samplers, which are commercially available and use vacuum to collect percolating soil water, are installed at predetermined depths within the lysimeters. At prearranged times following chemical application to experimental plots, porewater is collected, and lysimeters, containing soil and vegetation, are exhumed. By analyzing chemical concentrations in the lysimeter soil, vegetation, and porewater, downward leaching rates, soil retention capacities, and plant uptake for the chemical of interest may be quantified. Because field lysimetry and porewater sampling are conducted under natural environmental conditions and with minimal soil disturbance, derived results project real-case scenarios and provide valuable information for chemical management. As chemicals are increasingly applied to land worldwide, the described techniques may be utilized to determine whether applied chemicals pose adverse effects to human health or the environment.
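The quantities derived from such an experiment follow from a simple mass balance: chemical applied equals chemical retained in soil, plus chemical taken up by vegetation, plus chemical leached in porewater (any shortfall suggesting degradation or measurement loss). A minimal bookkeeping sketch with invented numbers, not data from the article:

# Hypothetical mass balance for one lysimeter (all values invented)
applied_ug = 500.0                 # chemical applied to the plot (µg)
soil_ug = 320.0                    # recovered from lysimeter soil (µg)
plant_ug = 90.0                    # recovered from vegetation (µg)
porewater_conc_ug_per_l = 0.8      # porewater concentration at sampler depth (µg/L)
drainage_l = 60.0                  # water volume percolating past that depth (L)

leached_ug = porewater_conc_ug_per_l * drainage_l
recovery = (soil_ug + plant_ug + leached_ug) / applied_ug
print(f"leached: {leached_ug:.0f} µg ({100 * leached_ug / applied_ug:.0f}% of applied)")
print(f"total recovery: {100 * recovery:.0f}%")   # shortfall: degradation or loss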
Environmental Sciences, Issue 89, Lysimetry, porewater, soil, chemical leaching, pesticides, turfgrass, waste
Recording Multicellular Behavior in Myxococcus xanthus Biofilms using Time-lapse Microcinematography
Authors: Rion G. Taylor, Roy D. Welch.
Institutions: University of South Carolina (USC), Syracuse University.
A swarm of the δ-proteobacterium Myxococcus xanthus contains millions of cells that act as a collective, coordinating movement through a series of signals to create complex, dynamic patterns as a response to environmental cues. These patterns are self-organizing and emergent; they cannot be predicted by observing the behavior of the individual cells. Using a time-lapse microcinematography tracking assay, we identified a distinct emergent pattern in M. xanthus called chemotaxis, defined as the directed movement of a swarm up a nutrient gradient toward its source 1. In order to efficiently characterize chemotaxis via time-lapse microcinematography, we developed a highly modifiable plate complex (Figure 1) and constructed a cluster of 8 microscopes (Figure 2), each capable of capturing time-lapse videos. The assay is rigorous enough to allow consistent replication of quantifiable data, and the resulting videos allow us to observe and track subtle changes in swarm behavior. Once captured, the videos are transferred to an analysis/storage computer with enough memory to process and store thousands of videos. The flexibility of this setup has proven useful to several members of the M. xanthus community.
Microbiology, Issue 42, microcinematography, Myxococcus, chemotaxis, time-lapse
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood 33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner 41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration 34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of primary motor cortices 27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
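For readers who want to experiment with the general workflow (a forward model built from anatomy, then a minimum-norm inverse solution), the open-source MNE-Python package implements the same chain of steps on its bundled sample dataset. This is an illustrative sketch only, not the London Baby Lab pipeline or its pediatric head models:

import os.path as op
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Bundled example data (downloads on first run)
data_dir = op.join(str(sample.data_path()), 'MEG', 'sample')
evoked = mne.read_evokeds(op.join(data_dir, 'sample_audvis-ave.fif'),
                          condition='Left Auditory', baseline=(None, 0))
fwd = mne.read_forward_solution(op.join(data_dir, 'sample_audvis-meg-eeg-oct-6-fwd.fif'))
cov = mne.read_cov(op.join(data_dir, 'sample_audvis-cov.fif'))

# Minimum-norm inverse operator built on the anatomy-based forward model
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method='MNE')  # cortical source estimate
print(stc)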
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods is currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
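As a simple illustration of the kind of fitting involved (a minimal sketch with synthetic data, not the models from the article): each melt curve is first fit with a Boltzmann sigmoid to extract the melting temperature Tm, and the Tm shift versus ligand concentration is then fit with a single-site saturation model whose midpoint approximates the dissociation constant.

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, f_min, f_max, Tm, slope):
    # Sigmoidal melt curve; Tm is the inflection (melting) temperature
    return f_min + (f_max - f_min) / (1.0 + np.exp((Tm - T) / slope))

def single_site(L, dTm_max, Kd):
    # Empirical single-site saturation of the Tm shift with ligand concentration
    return dTm_max * L / (Kd + L)

T = np.linspace(25, 95, 141)
ligand_uM = np.array([0, 5, 10, 25, 50, 100, 250, 500], float)
true_Kd = 40.0                                  # synthetic "ground truth" (µM)
tms = 55.0 + single_site(ligand_uM, 6.0, true_Kd)

# Fit a Boltzmann curve per ligand concentration to recover each Tm
rng = np.random.default_rng(0)
fit_tms = []
for tm in tms:
    curve = boltzmann(T, 0.1, 1.0, tm, 1.5) + rng.normal(0, 0.01, T.size)
    popt, _ = curve_fit(boltzmann, T, curve, p0=(0, 1, 60, 2))
    fit_tms.append(popt[2])

dTm = np.array(fit_tms) - fit_tms[0]
popt, _ = curve_fit(single_site, ligand_uM, dTm, p0=(5, 50))
print(f"estimated Kd ~ {popt[1]:.0f} µM")       # ~40 µM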
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Topographical Estimation of Visual Population Receptive Fields by fMRI
Authors: Sangkyun Lee, Amalia Papanikolaou, Georgios A. Keliris, Stelios M. Smirnakis.
Institutions: Baylor College of Medicine, Max Planck Institute for Biological Cybernetics, Bernstein Center for Computational Neuroscience.
Visual cortex is retinotopically organized so that neighboring populations of cells map to neighboring parts of the visual field. Functional magnetic resonance imaging allows us to estimate voxel-based population receptive fields (pRF), i.e., the part of the visual field that activates the cells within each voxel. Prior direct pRF estimation methods1 suffer from certain limitations: 1) the pRF model is chosen a priori and may not fully capture the actual pRF shape, and 2) pRF centers are prone to mislocalization near the border of the stimulus space. Here a new topographical pRF estimation method2 is proposed that largely circumvents these limitations. A linear model is used to predict the Blood Oxygen Level-Dependent (BOLD) signal by convolving the linear response of the pRF to the visual stimulus with the canonical hemodynamic response function. PRF topography is represented as a weight vector whose components represent the strength of the aggregate response of voxel neurons to stimuli presented at different visual field locations. The resulting linear equations can be solved for the pRF weight vector using ridge regression3, yielding the pRF topography. A pRF model that is matched to the estimated topography can then be chosen post hoc, thereby improving the estimates of pRF parameters such as pRF-center location, pRF orientation, size, etc. Having the pRF topography available also permits visual verification of pRF parameter estimates, allowing various pRF properties to be extracted without a priori assumptions about the pRF structure. This approach promises to be particularly useful for investigating the pRF organization of patients with disorders of the visual system.
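The core computation is ordinary ridge regression: with X the design matrix whose columns are HRF-convolved stimulus time courses at each visual field location and y a voxel's BOLD time series, the pRF weight vector is w = (XᵀX + λI)⁻¹Xᵀy. A minimal sketch with a synthetic stimulus and a simple double-gamma HRF (assumptions for illustration, not the authors' code):

import numpy as np
from scipy.stats import gamma

def hrf(t):
    # Simple double-gamma canonical HRF (one common choice, assumed here)
    return gamma.pdf(t, 6.0) - (1.0 / 6.0) * gamma.pdf(t, 16.0)

n_t, n_loc = 200, 64                                        # time points, field locations
rng = np.random.default_rng(1)
stim = (rng.random((n_t, n_loc)) < 0.05).astype(float)      # binary stimulus apertures
h = hrf(np.arange(0, 24, 1.0))
X = np.apply_along_axis(lambda s: np.convolve(s, h)[:n_t], 0, stim)

w_true = np.zeros(n_loc); w_true[20:24] = 1.0               # a compact synthetic pRF
y = X @ w_true + 0.1 * rng.standard_normal(n_t)

lam = 10.0                                                  # ridge penalty
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_loc), X.T @ y)
print("peak of recovered topography at location", int(np.argmax(w_hat)))   # ~20-23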
Behavior, Issue 96, population receptive field, vision, functional magnetic resonance imaging, retinotopy
Metabolomic Analysis of Rat Brain by High Resolution Nuclear Magnetic Resonance Spectroscopy of Tissue Extracts
Authors: Norbert W. Lutz, Evelyne Béraud, Patrick J. Cozzone.
Institutions: Aix-Marseille Université, Aix-Marseille Université.
Studies of gene expression on the RNA and protein levels have long been used to explore biological processes underlying disease. More recently, genomics and proteomics have been complemented by comprehensive quantitative analysis of the metabolite pool present in biological systems. This strategy, termed metabolomics, strives to provide a global characterization of the small-molecule complement involved in metabolism. While the genome and the proteome define the tasks cells can perform, the metabolome is part of the actual phenotype. Among the methods currently used in metabolomics, spectroscopic techniques are of special interest because they allow one to simultaneously analyze a large number of metabolites without prior selection for specific biochemical pathways, thus enabling a broad unbiased approach. Here, an optimized experimental protocol for metabolomic analysis by high-resolution NMR spectroscopy is presented, which is the method of choice for efficient quantification of tissue metabolites. Important strengths of this method are (i) the use of crude extracts, without the need to purify the sample and/or separate metabolites; (ii) the intrinsically quantitative nature of NMR, permitting quantitation of all metabolites represented by an NMR spectrum with one reference compound only; and (iii) the nondestructive nature of NMR enabling repeated use of the same sample for multiple measurements. The dynamic range of metabolite concentrations that can be covered is considerable due to the linear response of NMR signals, although metabolites occurring at extremely low concentrations may be difficult to detect. For the least abundant compounds, the highly sensitive mass spectrometry method may be advantageous although this technique requires more intricate sample preparation and quantification procedures than NMR spectroscopy. We present here an NMR protocol adjusted to rat brain analysis; however, the same protocol can be applied to other tissues with minor modifications.
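The quantitative step highlighted above reduces to a simple proportionality: metabolite concentration equals the reference concentration scaled by the ratio of integrated signal intensities, corrected for the number of contributing protons per molecule. A worked sketch with invented numbers:

def nmr_concentration(i_met, i_ref, n_h_met, n_h_ref, conc_ref_mM):
    # Signal per proton is constant across an NMR spectrum, so
    # C_met = C_ref * (I_met / n_H_met) / (I_ref / n_H_ref)
    return conc_ref_mM * (i_met / n_h_met) / (i_ref / n_h_ref)

# Hypothetical example: lactate CH3 doublet (3 protons) against a TSP reference (9 protons)
print(nmr_concentration(i_met=2.4, i_ref=9.0, n_h_met=3, n_h_ref=9, conc_ref_mM=1.0))
# -> 0.8 mM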
Neuroscience, Issue 91, metabolomics, brain tissue, rodents, neurochemistry, tissue extracts, NMR spectroscopy, quantitative metabolite analysis, cerebral metabolism, metabolic profile
Tracking the Mammary Architectural Features and Detecting Breast Cancer with Magnetic Resonance Diffusion Tensor Imaging
Authors: Noam Nissan, Edna Furman-Haran, Myra Feinberg-Shapiro, Dov Grobgeld, Erez Eyal, Tania Zehavi, Hadassa Degani.
Institutions: Weizmann Institute of Science, Weizmann Institute of Science, Meir Medical Center, Meir Medical Center.
Breast cancer is the most common cause of cancer among women worldwide. Early detection of breast cancer has a critical role in improving the quality of life and survival of breast cancer patients. In this paper a new approach for the detection of breast cancer is described, based on tracking the mammary architectural elements using diffusion tensor imaging (DTI). The paper focuses on the scanning protocols and image processing algorithms and software that were designed to fit the diffusion properties of the mammary fibroglandular tissue and its changes during malignant transformation. The final output yields pixel by pixel vector maps that track the architecture of the entire mammary ductal glandular trees and parametric maps of the diffusion tensor coefficients and anisotropy indices. The efficiency of the method to detect breast cancer was tested by scanning women volunteers including 68 patients with breast cancer confirmed by histopathology findings. Regions with cancer cells exhibited a marked reduction in the diffusion coefficients and in the maximal anisotropy index as compared to the normal breast tissue, providing an intrinsic contrast for delineating the boundaries of malignant growth. Overall, the sensitivity of the DTI parameters to detect breast cancer was found to be high, particularly in dense breasts, and comparable to the current standard breast MRI method that requires injection of a contrast agent. Thus, this method offers a completely non-invasive, safe and sensitive tool for breast cancer detection.
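Underlying the parametric maps is the standard diffusion tensor fit: each diffusion-weighted signal follows S = S0·exp(-b·gᵀDg), so log-signals are linear in the six unique tensor elements, and eigendecomposition of D yields the principal diffusion direction used for the vector maps. A minimal single-voxel sketch with synthetic gradients (not the authors' software):

import numpy as np

def fit_tensor(signals, s0, bvals, bvecs):
    # Design row per gradient: [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz], scaled by b
    g = np.asarray(bvecs)
    B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2],
                         2 * g[:, 1] * g[:, 2]]) * np.asarray(bvals)[:, None]
    d, *_ = np.linalg.lstsq(B, -np.log(signals / s0), rcond=None)
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    evals, evecs = np.linalg.eigh(D)
    return evals[::-1], evecs[:, ::-1]     # descending: principal direction first

rng = np.random.default_rng(2)
bvecs = rng.standard_normal((30, 3)); bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
D_true = np.diag([1.7e-3, 0.4e-3, 0.3e-3])          # mm^2/s, structure along x
signals = np.exp(-1000 * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
evals, evecs = fit_tensor(signals, 1.0, np.full(30, 1000.0), bvecs)
print("principal direction ~", np.round(np.abs(evecs[:, 0]), 2))   # ~[1, 0, 0]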
Medicine, Issue 94, Magnetic Resonance Imaging, breast, breast cancer, diagnosis, water diffusion, diffusion tensor imaging
Proton Transfer and Protein Conformation Dynamics in Photosensitive Proteins by Time-resolved Step-scan Fourier-transform Infrared Spectroscopy
Authors: Víctor A. Lórenz-Fonfría, Joachim Heberle.
Institutions: Freie Universität Berlin.
Monitoring the dynamics of protonation and protein backbone conformation changes during the function of a protein is an essential step towards understanding its mechanism. Protonation and conformational changes affect the vibration pattern of amino acid side chains and of the peptide bond, respectively, both of which can be probed by infrared (IR) difference spectroscopy. For proteins whose function can be repetitively and reproducibly triggered by light, it is possible to obtain infrared difference spectra with (sub)microsecond resolution over a broad spectral range using the step-scan Fourier transform infrared technique. With ~10²-10³ repetitions of the photoreaction, the minimum number to complete a scan at reasonable spectral resolution and bandwidth, the noise level in the absorption difference spectra can be as low as ~10⁻⁴, sufficient to follow the kinetics of protonation changes from a single amino acid. Lower noise levels can be accomplished by more data averaging and/or mathematical processing. The amount of protein required for optimal results is between 5-100 µg, depending on the sampling technique used. Regarding additional requirements, the protein needs to be first concentrated in a low ionic strength buffer and then dried to form a film. The protein film is hydrated prior to the experiment, either with little droplets of water or under controlled atmospheric humidity. The attained hydration level (g of water / g of protein) is gauged from an IR absorption spectrum. To showcase the technique, we studied the photocycle of the light-driven proton-pump bacteriorhodopsin in its native purple membrane environment, and of the light-gated ion channel channelrhodopsin-2 solubilized in detergent.
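One of the mathematical processing steps named in the keyword list, singular value decomposition, can lower the effective noise floor beyond plain averaging by keeping only the spectrotemporal components that rise above the noise. A minimal sketch on synthetic time-resolved difference spectra (invented data, not from the article):

import numpy as np

rng = np.random.default_rng(3)
wn = np.linspace(1800, 1000, 400)                 # wavenumber axis (cm^-1)
t = np.logspace(-6, -2, 60)                       # 1 µs to 10 ms

band = np.exp(-0.5 * ((wn - 1530) / 12.0) ** 2)   # one synthetic difference band
kinetics = np.exp(-t / 1e-4)                      # single-exponential decay
truth = np.outer(kinetics, band) * 1e-3
data = truth + rng.normal(0, 1e-4, truth.shape)

# Keep only the leading singular component (the synthetic signal is rank 1)
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 1
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
print("rms error before: %.1e  after: %.1e" %
      (np.sqrt(((data - truth) ** 2).mean()), np.sqrt(((denoised - truth) ** 2).mean())))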
Biophysics, Issue 88, bacteriorhodopsin, channelrhodopsin, attenuated total reflection, proton transfer, protein dynamics, infrared spectroscopy, time-resolved spectroscopy, step-scan, membrane proteins, singular value decomposition
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
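The essence of a DoE screening step can be reproduced in a few lines: build a two-level factorial design, run (or here, simulate) the response, and estimate main effects and interactions by regression. A minimal sketch with invented factor names and a simulated response; the article's factors, software-guided designs, and models are more elaborate:

import itertools
import numpy as np

factors = ['promoter_strength', 'plant_age', 'incubation_temp']   # hypothetical names
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

rng = np.random.default_rng(4)
# Simulated response: temperature matters most, plus a promoter x temperature interaction
response = (10 + 1.5 * design[:, 0] + 0.2 * design[:, 1] + 3.0 * design[:, 2]
            + 0.8 * design[:, 0] * design[:, 2] + rng.normal(0, 0.3, len(design)))

# Model matrix: intercept, main effects, and all two-way interactions
cols = [np.ones(len(design))] + [design[:, i] for i in range(3)] \
       + [design[:, i] * design[:, j] for i, j in itertools.combinations(range(3), 2)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
names = ['intercept'] + factors + [' x '.join(p) for p in itertools.combinations(factors, 2)]
for name, c in zip(names, coef):
    print(f"{name}: {c:+.2f}")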
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, whose structures and functions must be determined to improve our understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score that indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users from experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was ranked as the best program for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimate of the ideal midline is first obtained from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide to identify the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. Evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
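The first step, estimating the ideal midline from skull symmetry, can be illustrated with a toy version: slide a candidate midline across the image and keep the position where the left half best mirrors the right half. A minimal sketch on a synthetic binary "skull" (not the authors' algorithm, which also uses anatomical features):

import numpy as np

def ideal_midline(img, search_margin=20):
    """Return the column whose left/right mirror correlation is maximal."""
    h, w = img.shape
    best_col, best_score = w // 2, -1.0
    for c in range(w // 2 - search_margin, w // 2 + search_margin + 1):
        half = min(c, w - c)
        left = img[:, c - half:c]
        right = img[:, c:c + half][:, ::-1]          # mirror the right half
        denom = np.sqrt((left ** 2).sum() * (right ** 2).sum()) + 1e-9
        score = (left * right).sum() / denom         # normalized cross-correlation
        if score > best_score:
            best_col, best_score = c, score
    return best_col

# Synthetic skull: an ellipse of ones centered slightly off the image center
yy, xx = np.mgrid[0:200, 0:256]
skull = ((((xx - 130) / 90.0) ** 2 + ((yy - 100) / 80.0) ** 2) < 1).astype(float)
print(ideal_midline(skull))   # ~130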
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Authors: Francesco Vallania, Enrique Ramos, Sharon Cresci, Robi D. Mitra, Todd E. Druley.
Institutions: Washington University School of Medicine, Washington University School of Medicine, Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remains prohibitive for many investigators. To address this need, we have developed a pooled sequencing approach1,9 and a novel software package1 for highly accurate rare variant detection from the resulting data. The ability to pool genomes from entire populations of affected individuals and survey the degree of genetic variation at multiple targeted regions in a single sequencing library provides excellent cost and time savings over traditional single-sample sequencing methodology. With a mean sequencing coverage per allele of 25-fold, our custom algorithm, SPLINTER, uses an internal variant calling control strategy to call insertions, deletions and substitutions up to four base pairs in length with high sensitivity and specificity from pools of up to 1 mutant allele in 500 individuals. Here we describe the method for preparing the pooled sequencing library, followed by step-by-step instructions on how to use the SPLINTER package for pooled sequencing analysis (http://www.ibridgenetwork.org/wustl/splinter). We show a comparison between pooled sequencing of 947 individuals, all of whom also underwent genome-wide array genotyping, at over 20 kb of sequencing per person. Concordance between array genotypes and both tagged and novel variants called in the pooled sample was excellent. This method can be easily scaled up to any number of genomic loci and any number of individuals. By incorporating the internal positive and negative amplicon controls at ratios that mimic the population under study, the algorithm can be calibrated for optimal performance. This strategy can also be modified for use with hybridization capture or individual-specific barcodes and can be applied to the sequencing of naturally heterogeneous samples, such as tumor DNA.
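The detection problem can be framed with simple binomial arithmetic: one mutant allele among 500 diploid individuals is 1 in 1,000 alleles, so at 25-fold coverage per allele a true variant should contribute about 25 of roughly 25,000 reads at a position, which must be distinguished from sequencing error. A rough sketch with an assumed error rate and an arbitrary threshold (not SPLINTER's actual statistical model):

from scipy.stats import binom

n_individuals = 500
alleles = 2 * n_individuals          # 1,000 haploid genomes in the pool
depth = 25 * alleles                 # 25x mean coverage per allele
p_variant = 1.0 / alleles            # a single mutant allele in the pool
p_error = 0.001 / 3                  # assumed per-base error toward one specific base

threshold = 15                       # illustrative read-count threshold for a call
sensitivity = binom.sf(threshold - 1, depth, p_variant)
false_call = binom.sf(threshold - 1, depth, p_error)
print(f"expected mutant reads: {depth * p_variant:.0f}")
print(f"P(>= {threshold} reads | true variant): {sensitivity:.3f}")
print(f"P(>= {threshold} reads | errors only): {false_call:.2e}")
# The overlap between these two distributions is why SPLINTER calibrates its
# thresholds against internal positive and negative amplicon controls.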
Genetics, Issue 64, Genomics, Cancer Biology, Bioinformatics, Pooled DNA sequencing, SPLINTER, rare genetic variants, genetic screening, phenotype, high throughput, computational analysis, DNA, PCR, primers
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
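The first analysis step, extracting an orientation field with Gabor filters, can be sketched with scikit-image: filter the image at a bank of orientations and keep, per pixel, the orientation of maximal response. A generic illustration only; the article's filters, phase portrait modeling, and classifiers go well beyond this:

import numpy as np
from skimage.filters import gabor

def orientation_field(img, frequency=0.1, n_orientations=12):
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    mags = []
    for theta in thetas:
        real, imag = gabor(img, frequency=frequency, theta=theta)
        mags.append(np.hypot(real, imag))       # magnitude of the complex response
    mags = np.stack(mags)
    return thetas[np.argmax(mags, axis=0)], mags.max(axis=0)

# Synthetic oriented texture: stripes at ~30 degrees
yy, xx = np.mgrid[0:128, 0:128]
stripes = np.sin(2 * np.pi * 0.1 * (xx * np.cos(np.pi / 6) + yy * np.sin(np.pi / 6)))
angles, strength = orientation_field(stripes)
# Expect ~30 degrees (or its mirror, depending on the row/column axis convention)
print(f"dominant orientation ~ {np.degrees(np.median(angles)):.0f} degrees")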
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Construction of a Preclinical Multimodality Phantom Using Tissue-mimicking Materials for Quality Assurance in Tumor Size Measurement
Authors: Yongsook C. Lee, Gary D. Fullerton, Beth A. Goins.
Institutions: University of Kansas School of Medicine, University of Texas Health Science Center at San Antonio.
The World Health Organization (WHO) and the Response Evaluation Criteria in Solid Tumors (RECIST) working groups advocated standardized criteria for radiologic assessment of solid tumors in response to anti-tumor drug therapy in the 1980s and 1990s, respectively. WHO criteria measure solid tumors in two dimensions, whereas RECIST measurements use only one dimension, which is considered more reproducible 1,2,3,4,5. These criteria have been widely used as the only imaging biomarker approved by the United States Food and Drug Administration (FDA) 6. In order to measure tumor response to anti-tumor drugs on images with accuracy, robust quality assurance (QA) procedures and a corresponding QA phantom are therefore needed. To address this need, the authors constructed a preclinical multimodality phantom (for ultrasound (US), computed tomography (CT) and magnetic resonance imaging (MRI)) using tissue-mimicking (TM) materials, based on the limited number of target lesions required by RECIST, by revising a commercial Gammex US phantom 7. The Appendix in Lee et al. demonstrates the procedures of phantom fabrication 7. In this article, all protocols are introduced in a step-by-step fashion, beginning with procedures for preparing the silicone molds for casting tumor-simulating test objects in the phantom, followed by preparation of TM materials for multimodality imaging, and finally construction of the preclinical multimodality QA phantom. The primary purpose of this paper is to provide protocols that allow anyone interested to independently construct a phantom for their own projects. QA procedures for tumor size measurement, and the RECIST, WHO and volume measurement results for test objects made at multiple institutions using this QA phantom, are shown in detail in Lee et al. 8.
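The two criteria reduce to simple arithmetic on the test-object diameters: RECIST tracks the longest diameter in one dimension, while WHO tracks the product of the longest diameter and its longest perpendicular. A worked sketch with invented measurements; the response thresholds in the comments are the commonly cited ones and are stated here as an assumption:

def recist_change(baseline_longest_mm, followup_longest_mm):
    # RECIST: one-dimensional, based on the longest diameter
    return 100.0 * (followup_longest_mm - baseline_longest_mm) / baseline_longest_mm

def who_change(baseline_d1, baseline_d2, followup_d1, followup_d2):
    # WHO: two-dimensional, product of longest diameter and longest perpendicular
    base, follow = baseline_d1 * baseline_d2, followup_d1 * followup_d2
    return 100.0 * (follow - base) / base

# Hypothetical tumor-simulating test object shrinking from 20 x 16 mm to 14 x 11 mm
print(f"RECIST change: {recist_change(20, 14):+.0f}%")    # -30%: partial response (>=30% decrease)
print(f"WHO change: {who_change(20, 16, 14, 11):+.0f}%")  # -52%: partial response (>=50% decrease)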
Biomedical Engineering, Issue 77, Bioengineering, Medicine, Anatomy, Physiology, Cancer Biology, Molecular Biology, Genetics, Therapeutics, Chemistry and Materials (General), Composite Materials, Quality Assurance and Reliability, Physics (General), Tissue-mimicking materials, Preclinical, Multimodality, Quality assurance, Phantom, Tumor size measurement, Cancer, Imaging
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data are analyzed in several complementary ways, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
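The voxelwise metric used throughout, fractional anisotropy, has a closed form in the tensor eigenvalues: FA = sqrt(1/2) · sqrt((λ1-λ2)² + (λ2-λ3)² + (λ3-λ1)²) / sqrt(λ1² + λ2² + λ3²). A short sketch of the standard formula, independent of any particular analysis software:

import numpy as np

def fractional_anisotropy(l1, l2, l3):
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

print(fractional_anisotropy(1.7e-3, 0.4e-3, 0.3e-3))   # coherent white matter: ~0.76
print(fractional_anisotropy(3.0e-3, 3.0e-3, 3.0e-3))   # isotropic diffusion (e.g., CSF): 0.0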
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
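The resolution gain of localization microscopy comes from fitting each single-molecule image with a model point-spread function and taking the fitted center, whose precision scales roughly as the PSF width divided by the square root of detected photons. A minimal 2D Gaussian fit on a synthetic spot (an illustration, not the FPALM analysis software):

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Synthetic single-molecule spot: ~250 nm PSF on 100 nm pixels, centered at (7.3, 6.8)
yy, xx = np.mgrid[0:15, 0:15].astype(float)
rng = np.random.default_rng(5)
spot = gauss2d((xx, yy), 120, 7.3, 6.8, 1.25, 10).reshape(15, 15)
spot = rng.poisson(spot).astype(float)               # shot noise

popt, _ = curve_fit(gauss2d, (xx, yy), spot.ravel(), p0=(100, 7, 7, 1.5, 5))
print(f"fitted center: ({popt[1]:.2f}, {popt[2]:.2f}) pixels")   # ~ (7.30, 6.80)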
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Engineering Fibrin-based Tissue Constructs from Myofibroblasts and Application of Constraints and Strain to Induce Cell and Collagen Reorganization
Authors: Nicky de Jonge, Frank P. T. Baaijens, Carlijn V. C. Bouten.
Institutions: Eindhoven University of Technology.
Collagen content and organization in developing collagenous tissues can be influenced by local tissue strains and tissue constraint. Tissue engineers aim to use these principles to create tissues with predefined collagen architectures. A full understanding of the exact underlying processes of collagen remodeling to control the final tissue architecture, however, is lacking. In particular, little is known about the (re)orientation of collagen fibers in response to changes in tissue mechanical loading conditions. We developed an in vitro model system, consisting of biaxially-constrained myofibroblast-seeded fibrin constructs, to further elucidate collagen (re)orientation in response to i) reverting biaxial to uniaxial static loading conditions and ii) cyclic uniaxial loading of the biaxially-constrained constructs before and after a change in loading direction, with use of the Flexcell FX4000T loading device. Time-lapse confocal imaging is used to visualize collagen (re)orientation in a nondestructive manner. Cell and collagen organization in the constructs can be visualized in real-time, and an internal reference system allows us to relocate cells and collagen structures for time-lapse analysis. Various aspects of the model system can be adjusted, like cell source or use of healthy and diseased cells. Additives can be used to further elucidate mechanisms underlying collagen remodeling, by for example adding MMPs or blocking integrins. Shape and size of the construct can be easily adapted to specific needs, resulting in a highly tunable model system to study cell and collagen (re)organization.
Bioengineering, Issue 80, Connective Tissue, Myofibroblasts, Heart Valves, Heart Valve Diseases, Mechanotransduction, Cellular, Adaptation, Biological, Cellular Microenvironment, collagen remodeling, fibrin-based tissues, tissue engineering, cardiovascular
Trabecular Meshwork Response to Pressure Elevation in the Living Human Eye
Authors: Larry Kagemann, Bo Wang, Gadi Wollstein, Hiroshi Ishikawa, Brandon Mentley, Ian Sigal, Richard A Bilonick, Joel S Schuman.
Institutions: University of Pittsburgh School of Medicine, University of Pittsburgh, University of Pittsburgh School of Medicine, University of Pittsburgh.
The mechanical characteristics of the trabecular meshwork (TM) are linked to outflow resistance and intraocular pressure (IOP) regulation. The rationale behind this technique is the direct observation of the mechanical response of the TM to acute IOP elevation. Prior to scanning, IOP is measured at baseline and during IOP elevation. The limbus is scanned by spectral-domain optical coherence tomography at baseline and during IOP elevation (ophthalmodynamometer (ODM) applied at 30 g force). Scans are processed to enhance visualization of the aqueous humor outflow pathway using ImageJ. Vascular landmarks are used to identify corresponding locations in baseline and IOP elevation scan volumes. Schlemm canal (SC) cross-sectional area (SC-CSA) and SC length from anterior to posterior along its long axis are measured manually at 10 locations within a 1 mm segment of SC. Mean inner to outer wall distance (short axis length) is calculated as the area of SC divided by its long axis length. To examine the contribution of adjacent tissues to the effect of IOP elevation, measurements are repeated with and without smooth muscle relaxation induced by instillation of tropicamide. TM migration into SC is resisted by TM stiffness, but is enhanced by the support of its attachment to adjacent smooth muscle within the ciliary body. This technique is the first to measure the living human TM response to pressure elevation in situ under physiological conditions within the human eye.
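As a worked example of that last calculation, with invented numbers rather than measurements from the study:

# Hypothetical SC cross-section: area 6,000 µm^2, anterior-posterior (long-axis) length 300 µm
sc_area_um2 = 6000.0
long_axis_um = 300.0
print(sc_area_um2 / long_axis_um, "µm mean inner-to-outer wall distance")   # 20.0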
Medicine, Issue 100, Optical Coherence Tomography, Trabecular Meshwork, Biomechanics, Intraocular Pressure, Regulation, Aqueous Humor Outflow

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
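JoVE has not published its matching algorithm; purely as a generic illustration, abstract-to-video text similarity of this kind is often computed with TF-IDF vectors and cosine similarity, e.g.:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = ["fractional vegetation cover estimated with a spectral mixture model"]
video_descriptions = [
    "field lysimetry for chemical mobility in soils and vegetation",
    "population receptive field estimation by fMRI and ridge regression",
    "diffusion tensor imaging of neurodegenerative diseases",
]

vec = TfidfVectorizer(stop_words='english')
matrix = vec.fit_transform(video_descriptions + abstract)
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, text in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {text}")   # videos ranked by similarity to the abstract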

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.