JoVE Visualize
 
PubMed Article
SNPpy--database management for SNP data from genome wide association studies.
PLoS ONE
PUBLISHED: 05-20-2011
We describe SNPpy, a hybrid script-database system using the Python SQLAlchemy library coupled with the PostgreSQL database to manage genotype data from Genome-Wide Association Studies (GWAS). This system makes it possible to merge study data with HapMap data and merge across studies for meta-analyses, including data filtering based on the values of phenotype and Single-Nucleotide Polymorphism (SNP) data. SNPpy and its dependencies are open source software.
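SNPpy's own schema is not reproduced here, but as a rough sketch of the SQLAlchemy-on-PostgreSQL pattern the abstract describes, a genotype table might be declared and queried as follows. The table, column, and sample names are illustrative assumptions rather than SNPpy's actual design, and SQLite stands in for PostgreSQL so the snippet runs anywhere:

```python
# Minimal sketch of storing GWAS genotypes with SQLAlchemy.
# Table and column names are illustrative only; this is not SNPpy's schema.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Genotype(Base):
    __tablename__ = "genotype"
    id = Column(Integer, primary_key=True)
    sample_id = Column(String, index=True)   # study participant identifier
    rsid = Column(String, index=True)        # SNP identifier, e.g. "rs12345"
    allele1 = Column(String(1))
    allele2 = Column(String(1))

# swap the URL for a real PostgreSQL instance, e.g. "postgresql://user:pw@host/gwas"
engine = create_engine("sqlite:///:memory:")  # SQLite stand-in for a quick demo
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Genotype(sample_id="S001", rsid="rs12345", allele1="A", allele2="G"))
    session.commit()
    # SNP- or phenotype-based filtering would be expressed as queries like this:
    hits = session.query(Genotype).filter_by(rsid="rs12345").all()
    print(len(hits))
```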
Related JoVE Video
Authors: Amanda K. Rizk, Rima Wardini, Emilie Chan-Thim, Barbara Trutschnigg, Amélie Forget, Véronique Pepin.
Published: 11-08-2013
ABSTRACT
Pulmonary rehabilitation (PR) is an important component in the management of respiratory diseases. The effectiveness of PR is dependent upon adherence to exercise training recommendations. The study of exercise adherence is thus a key step towards the optimization of PR programs. To date, mostly indirect measures, such as rates of participation, completion, and attendance, have been used to determine adherence to PR. The purpose of the present protocol is to describe how continuous data tracking technology can be used to measure adherence to a prescribed aerobic training intensity on a second-by-second basis. In our investigations, adherence has been defined as the percent time spent within a specified target heart rate range. As such, using a combination of hardware and software, heart rate is measured, tracked, and recorded during cycling second-by-second for each participant, for each exercise session. Using statistical software, the data is subsequently extracted and analyzed. The same protocol can be applied to determine adherence to other measures of exercise intensity, such as time spent at a specified wattage, level, or speed on the cycle ergometer. Furthermore, the hardware and software is also available to measure adherence to other modes of training, such as the treadmill, elliptical, stepper, and arm ergometer. The present protocol, therefore, has a vast applicability to directly measure adherence to aerobic exercise.
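As a minimal illustration of the adherence metric defined above (percent time within a target heart rate range, computed from second-by-second recordings), the calculation reduces to something like the following sketch; the heart rate values and target range are invented:

```python
# Minimal sketch: adherence as percent time within a target heart rate range,
# computed from second-by-second heart rate samples. Example values are made up.
def percent_time_in_range(heart_rates, hr_low, hr_high):
    """heart_rates: one sample per second across an exercise session."""
    in_range = sum(1 for hr in heart_rates if hr_low <= hr <= hr_high)
    return 100.0 * in_range / len(heart_rates)

session_hr = [112, 118, 121, 125, 119, 130, 127]    # hypothetical 7-second excerpt
print(percent_time_in_range(session_hr, 115, 128))  # -> 71.4... (5 of 7 seconds)
```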
24 Related JoVE Articles!
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg, University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11 accurately reannotated10. Following an annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum energy based fold12 (direct fold) results in a correct, four helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence, whose secondary structure was not able to fold correctly in a direct fold. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Generation of Comprehensive Thoracic Oncology Database - Tool for Translational Research
Authors: Mosmi Surati, Matthew Robinson, Suvobroto Nandi, Leonardo Faoro, Carley Demchuk, Rajani Kanteti, Benjamin Ferguson, Tara Gangadhar, Thomas Hensing, Rifat Hasina, Aliya Husain, Mark Ferguson, Theodore Karrison, Ravi Salgia.
Institutions: University of Chicago, University of Chicago, Northshore University Health Systems, University of Chicago, University of Chicago, University of Chicago.
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using a protocol for prospective tissue banking and another protocol for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
Medicine, Issue 47, Database, Thoracic oncology, Bioinformatics, Biorepository, Microsoft Access, Proteomics, Genomics
A Practical and Novel Method to Extract Genomic DNA from Blood Collection Kits for Plasma Protein Preservation
Authors: Jon Waters, Vishal Dhere, Adam Benjamin, Arvind Sekar, Archana Kumar, Sampath Prahalad, David T. Okou, Subra Kugathasan.
Institutions: Emory University School of Medicine and Children's Health Care of Atlanta, Emory University School of Medicine and Children's Health Care of Atlanta.
Laboratory tests can be done on the cellular or fluid portions of the blood. The use of different blood collection tubes determines the portion of the blood that can be analyzed (whole blood, plasma or serum). Laboratories involved in studying the genetic basis of human disorders rely on anticoagulated whole blood collected in an EDTA-containing vacutainer tube as the source of DNA for genetic/genomic analysis. Because most clinical laboratories perform biochemical, serologic and viral testing as a first step in phenotypic outcome investigation, anticoagulated blood is also collected in a heparin-containing tube (plasma tube). Therefore, when DNA and plasma are needed for simultaneous and parallel analyses of both genomic and proteomic data, it is customary to collect blood in both EDTA and heparin tubes. If blood could be collected in a single tube and serve as a source for both plasma and DNA, that method would be considered an advancement over existing methods. The use of the compacted blood after plasma extraction represents an alternative source for genomic DNA, thus minimizing the amount of blood samples processed and reducing the number of samples required from each patient. This would ultimately save time and resources. The BD P100 blood collection system for plasma protein preservation was created as an improvement over previous plasma or serum collection tubes1, to stabilize the protein content of blood, enabling better protein biomarker discovery and proteomics experimentation from human blood. The BD P100 tubes contain 15.8 mg of spray-dried K2EDTA and a lyophilized proprietary broad-spectrum cocktail of protease inhibitors to prevent coagulation and stabilize the plasma proteins. They also include a mechanical separator, which provides a physical barrier between plasma and cell pellets after centrifugation. Few methods have been devised to extract DNA from clotted blood samples collected in old plasma tubes2-4. Challenges from these methods were mainly associated with the type of separator inside the tubes (gel separator) and included difficulty in recovering the clotted blood, the inconvenience of fragmenting or dispersing the clot, and obstruction of the clot extraction by the separation gel. We present the first method that extracts and purifies genomic DNA from blood drawn in the new BD P100 tubes. We compare the quality of the DNA sample from P100 tubes to that from EDTA tubes. Our approach is simple and efficient. It involves four major steps as follows: 1) the use of a plasma BD P100 (BD Diagnostics, Sparks, MD, USA) tube with mechanical separator for blood collection, 2) the removal of the mechanical separator using a combination of sucrose and a sterile paperclip metallic hook, 3) the separation of the buffy coat layer containing the white cells, and 4) the isolation of the genomic DNA from the buffy coat using a regular commercial DNA extraction kit or a similar standard protocol.
Genetics, Issue 75, Molecular Biology, Cellular Biology, Medicine, Biochemistry, Hematology, Proteins, Genomics, genomic DNA, blood collection, P100 tubes, DNA extraction, buffy coat isolation, genotyping assays, red blood, whole blood, plasma, DNA, assay, genotyping
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm had reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
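The BCP algorithm itself is a Bayesian infinite-state HMM and is not reproduced here. Purely to illustrate what "change points in read density" means, the toy sketch below finds a single change point in a density vector by minimizing within-segment squared error; the numbers are invented and this is not the authors' method:

```python
# Toy sketch: locate one change point in a read-density vector by minimizing
# within-segment squared error. This is NOT the BCP model; it only illustrates
# the idea of segmenting a density profile at change points.
def best_change_point(density):
    def sse(values):
        if not values:
            return 0.0
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values)
    costs = [(sse(density[:i]) + sse(density[i:]), i) for i in range(1, len(density))]
    return min(costs)[1]

density = [2, 3, 2, 4, 3, 15, 17, 16, 18, 14, 3, 2]   # enriched block in the middle
print(best_change_point(density))   # -> 5, the start of the enriched segment
```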
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Authors: Monica Soldi, Tiziana Bonaldi.
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions are regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the appropriate purity and amount for the subsequent in-depth analysis by Mass Spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setting up of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it to be a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation, histone variants, chromatome, hPTMs cross-talks
Generation of High Quality Chromatin Immunoprecipitation DNA Template for High-throughput Sequencing (ChIP-seq)
Authors: Sandra Deliard, Jianhua Zhao, Qianghua Xia, Struan F.A. Grant.
Institutions: Children's Hospital of Philadelphia Research Institute, University of Pennsylvania .
ChIP-sequencing (ChIP-seq) combines chromatin immunoprecipitation (ChIP) with massively parallel sequencing to identify the repertoire of mammalian DNA sequences bound by transcription factors in vivo. "Next-generation" genome sequencing technologies provide a 1-2 order of magnitude increase in the amount of sequence that can be cost-effectively generated over older technologies, thus allowing ChIP-seq methods to directly provide whole-genome coverage for effective profiling of mammalian protein-DNA interactions. For successful ChIP-seq approaches, one must generate high quality ChIP DNA template to obtain the best sequencing outcomes. The description is based around experience with the protein product of the gene most strongly implicated in the pathogenesis of type 2 diabetes, namely the transcription factor TCF7L2 (transcription factor 7-like 2). This factor has also been implicated in various cancers. Outlined is how to generate high quality ChIP DNA template derived from the colorectal carcinoma cell line HCT116, in order to build a high-resolution map through sequencing to determine the genes bound by TCF7L2, giving further insight into its key role in the pathogenesis of complex traits.
Molecular Biology, Issue 74, Genetics, Biochemistry, Microbiology, Medicine, Proteins, DNA-Binding Proteins, Transcription Factors, Chromatin Immunoprecipitation, Genes, chromatin, immunoprecipitation, ChIP, DNA, PCR, sequencing, antibody, cross-link, cell culture, assay
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Measuring Ascending Aortic Stiffness In Vivo in Mice Using Ultrasound
Authors: Maggie M. Kuo, Viachaslau Barodka, Theodore P. Abraham, Jochen Steppan, Artin A. Shoukas, Mark Butlin, Alberto Avolio, Dan E. Berkowitz, Lakshmi Santhanam.
Institutions: Johns Hopkins University, Johns Hopkins University, Johns Hopkins University, Macquarie University.
We present a protocol for measuring in vivo aortic stiffness in mice using high-resolution ultrasound imaging. Aortic diameter is measured by ultrasound and aortic blood pressure is measured invasively with a solid-state pressure catheter. Blood pressure is raised and then lowered incrementally by intravenous infusion of the vasoactive drugs phenylephrine and sodium nitroprusside. Aortic diameter is measured for each pressure step to characterize the pressure-diameter relationship of the ascending aorta. Stiffness indices derived from the pressure-diameter relationship can be calculated from the data collected. Calculation of arterial compliance is described in this protocol. This technique can be used to investigate mechanisms underlying increased aortic stiffness associated with cardiovascular disease and aging. The technique produces a physiologically relevant measure of stiffness compared to ex vivo approaches because physiological influences on aortic stiffness are incorporated in the measurement. The primary limitation of this technique is the measurement error introduced from the movement of the aorta during the cardiac cycle. This motion can be compensated for by adjusting the location of the probe to follow the aortic movement, as well as by making multiple measurements of the aortic pressure-diameter relationship and expanding the experimental group size.
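As a simple illustration of deriving a compliance index from the pressure-diameter data described above, compliance can be estimated as the slope of diameter versus pressure. The sketch below uses invented values and a plain linear fit, which is only one of several possible stiffness indices and is not necessarily the exact calculation used in the protocol:

```python
# Sketch: arterial compliance estimated as the slope of the diameter-pressure
# relationship (change in diameter per change in pressure). The values below are
# made up and do not come from the protocol.
import numpy as np

pressure_mmHg = np.array([70, 85, 100, 115, 130])          # stepped with vasoactive drugs
diameter_mm = np.array([1.35, 1.42, 1.48, 1.52, 1.55])     # ascending aortic diameter

slope, intercept = np.polyfit(pressure_mmHg, diameter_mm, 1)
print(f"compliance ~ {slope * 1000:.1f} um/mmHg")          # linear-fit estimate over this range
```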
Medicine, Issue 94, Aortic stiffness, ultrasound, in vivo, aortic compliance, elastic modulus, mouse model, cardiovascular disease
Quantitative and Automated High-throughput Genome-wide RNAi Screens in C. elegans
Authors: Barbara Squiban, Jérôme Belougne, Jonathan Ewbank, Olivier Zugasti.
Institutions: Université de la Méditerranée.
RNA interference is a powerful method to understand gene function, especially when conducted at a whole-genome scale and in a quantitative context. In C. elegans, gene function can be knocked down simply and efficiently by feeding worms with bacteria expressing a dsRNA corresponding to a specific gene 1. While the creation of libraries of RNAi clones covering most of the C. elegans genome 2,3 opened the way for true functional genomic studies (see for example 4-7), most established methods are laborious. Moy and colleagues have developed semi-automated protocols that facilitate genome-wide screens 8. The approach relies on microscopic imaging and image analysis. Here we describe an alternative protocol for a high-throughput genome-wide screen, based on robotic handling of bacterial RNAi clones, quantitative analysis using the COPAS Biosort (Union Biometrica (UBI)), and integrated software, the MBioLIMS (Laboratory Information Management System from Modul-Bio), which provides increased throughput for data management and sample tracking. The method allows screens to be conducted on solid medium plates. This is particularly important for some studies, such as those addressing host-pathogen interactions in C. elegans, since certain microbes do not efficiently infect worms in liquid culture. We show how the method can be used to quantify the importance of genes in anti-fungal innate immunity in C. elegans. In this case, the approach relies on the use of a transgenic strain carrying an epidermal infection-inducible fluorescent reporter gene, with GFP under the control of the promoter of the antimicrobial peptide gene nlp-29 and a red fluorescent reporter that is expressed constitutively in the epidermis. The latter provides an internal control for the functional integrity of the epidermis and nonspecific transgene silencing9. When control worms are infected by the fungus, they fluoresce green. Knocking down by RNAi a gene required for nlp-29 expression results in diminished fluorescence after infection. Currently, this protocol allows more than 3,000 RNAi clones to be tested and analyzed per week, opening the possibility of screening the entire genome in less than 2 months.
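A minimal sketch of the kind of readout such a screen produces is shown below: each worm's infection-inducible green signal is normalized to its constitutive red signal, and candidate RNAi clones are compared to infected controls. The fluorescence numbers are invented, and this is not the Biosort/MBioLIMS analysis itself:

```python
# Sketch: scoring an RNAi clone by the worms' green/red fluorescence ratio relative
# to infected control worms. All field names and numbers are illustrative only.
def normalized_green(green, red):
    """Constitutive red signal controls for epidermal integrity and silencing."""
    return green / red

control_worms = [(850, 400), (900, 420), (820, 390)]   # infected, empty-vector RNAi
test_worms = [(260, 410), (240, 395), (300, 405)]      # infected, candidate RNAi clone

control_mean = sum(normalized_green(g, r) for g, r in control_worms) / len(control_worms)
test_mean = sum(normalized_green(g, r) for g, r in test_worms) / len(test_worms)
print(f"relative nlp-29::GFP induction: {test_mean / control_mean:.2f}")  # << 1 suggests a hit
```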
Molecular Biology, Issue 60, C. elegans, fluorescent reporter, Biosort, LIMS, innate immunity, Drechmeria coniospora
Detection of the Genome and Transcripts of a Persistent DNA Virus in Neuronal Tissues by Fluorescent In situ Hybridization Combined with Immunostaining
Authors: Frédéric Catez, Antoine Rousseau, Marc Labetoulle, Patrick Lomonte.
Institutions: CNRS UMR 5534, Université de Lyon 1, LabEX DEVweCAN, CNRS UPR 3296, CNRS UMR 5286.
Single-cell codetection of a gene, its RNA product and cellular regulatory proteins is critical to study gene expression regulation. This is a challenge in the field of virology, in particular for nuclear-replicating persistent DNA viruses that involve animal models for their study. Herpes simplex virus type 1 (HSV-1) establishes a life-long latent infection in peripheral neurons. Latent virus serves as a reservoir, from which it reactivates and induces a new herpetic episode. The cell biology of HSV-1 latency remains poorly understood, in part due to the lack of methods to detect HSV-1 genomes in situ in animal models. We describe a DNA fluorescent in situ hybridization (FISH) approach that efficiently detects low-copy viral genomes within sections of neuronal tissues from infected animal models. The method relies on heat-based antigen unmasking and directly labeled home-made DNA probes or commercially available probes. We developed a triple staining approach, combining DNA-FISH with RNA-FISH and immunofluorescence, using peroxidase-based signal amplification to accommodate each staining requirement. A major improvement is the ability to obtain, within 10 µm tissue sections, low-background signals that can be imaged at high resolution by confocal microscopy and wide-field conventional epifluorescence. Additionally, the triple staining worked with a wide range of antibodies directed against cellular and viral proteins. The complete protocol takes 2.5 days to accommodate antibody and probe penetration within the tissue.
Neuroscience, Issue 83, Life Sciences (General), Virology, Herpes Simplex Virus (HSV), Latency, In situ hybridization, Nuclear organization, Gene expression, Microscopy
Fruit Volatile Analysis Using an Electronic Nose
Authors: Simona Vallone, Nathan W. Lloyd, Susan E. Ebeler, Florence Zakharov.
Institutions: University of California, Davis, University of California, Davis, University of California, Davis.
Numerous and diverse physiological changes occur during fruit ripening, including the development of a specific volatile blend that characterizes fruit aroma. Maturity at harvest is one of the key factors influencing the flavor quality of fruits and vegetables1. The validation of robust methods that rapidly assess fruit maturity and aroma quality would allow improved management of advanced breeding programs, production practices and postharvest handling. Over the last three decades, much research has been conducted to develop so-called electronic noses, which are devices able to rapidly detect odors and flavors2-4. Currently there are several commercially available electronic noses able to perform volatile analysis, based on different technologies. The electronic nose used in our work (zNose, EST, Newbury Park, CA, USA) consists of ultra-fast gas chromatography coupled with a surface acoustic wave sensor (UFGC-SAW). This technology has already been tested for its ability to monitor the quality of various commodities, including detection of deterioration in apple5; ripeness and rot evaluation in mango6; aroma profiling of thymus species7; C6 volatile compounds in grape berries8; characterization of vegetable oil9 and detection of adulterants in virgin coconut oil10. This system can perform the three major steps of aroma analysis: headspace sampling, separation of volatile compounds, and detection. In about one minute, the output, a chromatogram, is produced and, after a purging cycle, the instrument is ready for further analysis. The results obtained with the zNose can be compared to those of other gas-chromatographic systems by calculation of Kovats Indices (KI). Once the instrument has been tuned with an alkane standard solution, the retention times are automatically converted into KIs. However, slight changes in temperature and flow rate are expected to occur over time, causing retention times to drift. Also, depending on the polarity of the column stationary phase, the reproducibility of KI calculations can vary by several index units11. A series of programs and graphical interfaces were therefore developed to compare calculated KIs among samples in a semi-automated fashion. These programs reduce the time required for chromatogram analysis of large data sets and minimize the potential for misinterpretation of the data when chromatograms are not perfectly aligned. We present a method for rapid volatile compound analysis in fruit. Sample preparation, data acquisition and handling procedures are also discussed.
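The zNose converts retention times to Kovats Indices automatically; as a reminder of what that conversion involves, the sketch below applies the linear (van den Dool-type) bracketing formula commonly used for temperature-programmed runs, with invented retention times. The isothermal, logarithmic form of the index differs slightly:

```python
# Sketch: converting a retention time into a Kovats-type retention index by
# bracketing it between two n-alkane standards (linear form). The instrument does
# this internally; retention times below are made up for illustration.
def retention_index(t_x, t_n, t_n1, n_carbons):
    """t_n, t_n1: retention times of alkanes with n_carbons and n_carbons + 1 carbons."""
    return 100.0 * (n_carbons + (t_x - t_n) / (t_n1 - t_n))

# hypothetical numbers: analyte elutes between C9 (4.10 s) and C10 (5.30 s)
print(retention_index(t_x=4.70, t_n=4.10, t_n1=5.30, n_carbons=9))  # -> 950.0
```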
Plant Biology, Issue 61, zNose, volatile profiling, aroma, Kovats Index, electronic nose, gas chromatography, retention time shift
Microarray-based Identification of Individual HERV Loci Expression: Application to Biomarker Discovery in Prostate Cancer
Authors: Philippe Pérot, Valérie Cheynet, Myriam Decaussin-Petrucci, Guy Oriol, Nathalie Mugnier, Claire Rodriguez-Lafrasse, Alain Ruffion, François Mallet.
Institutions: Joint Unit Hospices de Lyon-bioMérieux, BioMérieux, Hospices Civils de Lyon, Lyon 1 University, BioMérieux, Hospices Civils de Lyon, Hospices Civils de Lyon.
The prostate-specific antigen (PSA) is the main diagnostic biomarker for prostate cancer in clinical use, but it lacks specificity and sensitivity, particularly at low dosage values1. 'How to use PSA' remains a current issue, either for diagnosis, as the gray zone corresponding to a serum concentration of 2.5-10 ng/ml does not allow a clear differentiation to be made between cancer and noncancer2, or for patient follow-up, as analysis of post-operative PSA kinetic parameters can pose considerable challenges for their practical application3,4. Alternatively, noncoding RNAs (ncRNAs) are emerging as key molecules in human cancer, with the potential to serve as novel markers of disease, e.g. PCA3 in prostate cancer5,6, and to reveal uncharacterized aspects of tumor biology. Moreover, data from the ENCODE project published in 2012 showed that different RNA types cover about 62% of the genome. It also appears that the number of transcriptional regulatory motifs is at least 4.5x higher than that corresponding to protein-coding exons. Thus, long terminal repeats (LTRs) of human endogenous retroviruses (HERVs) constitute a wide range of putative/candidate transcriptional regulatory sequences, as this is their primary function in infectious retroviruses. HERVs, which are spread throughout the human genome, originate from ancestral and independent infections within the germ line, followed by copy-paste propagation processes, leading to multicopy families occupying 8% of the human genome (note that exons span 2% of our genome). Some HERV loci still express proteins that have been associated with several pathologies, including cancer7-10. We have designed a high-density microarray, in Affymetrix format, aiming to optimally characterize individual HERV loci expression, in order to better understand whether they can be active, whether they drive ncRNA transcription, or whether they modulate coding gene expression. This tool has been applied in the prostate cancer field (Figure 1).
Medicine, Issue 81, Cancer Biology, Genetics, Molecular Biology, Prostate, Retroviridae, Biomarkers, Pharmacological, Tumor Markers, Biological, Prostatectomy, Microarray Analysis, Gene Expression, Diagnosis, Human Endogenous Retroviruses, HERV, microarray, Transcriptome, prostate cancer, Affymetrix
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Authors: Francesco Vallania, Enrique Ramos, Sharon Cresci, Robi D. Mitra, Todd E. Druley.
Institutions: Washington University School of Medicine, Washington University School of Medicine, Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remains prohibitive for many investigators. To address this need, we have developed a pooled sequencing approach1,9 and a novel software package1 for highly accurate rare variant detection from the resulting data. The ability to pool genomes from entire populations of affected individuals and survey the degree of genetic variation at multiple targeted regions in a single sequencing library provides excellent cost and time savings to traditional single-sample sequencing methodology. With a mean sequencing coverage per allele of 25-fold, our custom algorithm, SPLINTER, uses an internal variant calling control strategy to call insertions, deletions and substitutions up to four base pairs in length with high sensitivity and specificity from pools of up to 1 mutant allele in 500 individuals. Here we describe the method for preparing the pooled sequencing library followed by step-by-step instructions on how to use the SPLINTER package for pooled sequencing analysis (http://www.ibridgenetwork.org/wustl/splinter). We show a comparison between pooled sequencing of 947 individuals, all of whom also underwent genome-wide array, at over 20kb of sequencing per person. Concordance between genotyping of tagged and novel variants called in the pooled sample was excellent. This method can be easily scaled up to any number of genomic loci and any number of individuals. By incorporating the internal positive and negative amplicon controls at ratios that mimic the population under study, the algorithm can be calibrated for optimal performance. This strategy can also be modified for use with hybridization capture or individual-specific barcodes and can be applied to the sequencing of naturally heterogeneous samples, such as tumor DNA.
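To make the sensitivity figures above concrete, the back-of-envelope sketch below works out how many reads a single mutant allele is expected to contribute in a 500-person diploid pool at 25-fold coverage per allele. It is illustrative arithmetic only, not the SPLINTER algorithm:

```python
# Back-of-envelope sketch: expected read support for one mutant allele in a pool,
# using the numbers quoted in the abstract (diploid individuals, ~25x per allele).
n_individuals = 500
alleles_in_pool = 2 * n_individuals            # 1,000 alleles (diploid)
coverage_per_allele = 25

total_reads_at_locus = alleles_in_pool * coverage_per_allele      # ~25,000 reads
variant_fraction = 1 / alleles_in_pool                            # 0.1% of reads
expected_variant_reads = total_reads_at_locus * variant_fraction  # ~25 reads

print(total_reads_at_locus, f"{variant_fraction:.4f}", expected_variant_reads)
```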
Genetics, Issue 64, Genomics, Cancer Biology, Bioinformatics, Pooled DNA sequencing, SPLINTER, rare genetic variants, genetic screening, phenotype, high throughput, computational analysis, DNA, PCR, primers
Infinium Assay for Large-scale SNP Genotyping Applications
Authors: Adam J. Adler, Graham B. Wiley, Patrick M. Gaffney.
Institutions: Oklahoma Medical Research Foundation.
Genotyping variants in the human genome has proven to be an efficient method to identify genetic associations with phenotypes. The distribution of variants within families or populations can facilitate identification of the genetic factors of disease. Illumina's panel of genotyping BeadChips allows investigators to genotype thousands or millions of single nucleotide polymorphisms (SNPs) or to analyze other genomic variants, such as copy number, across a large number of DNA samples. These SNPs can be spread throughout the genome or targeted in specific regions in order to maximize potential discovery. The Infinium assay has been optimized to yield high-quality, accurate results quickly. With proper setup, a single technician can process from a few hundred to over a thousand DNA samples per week, depending on the type of array. This protocol guides users through every step, starting with genomic DNA and ending with the scanning of the array. Using proprietary reagents, samples are amplified, fragmented, precipitated, resuspended, hybridized to the chip, extended by a single base, stained, and scanned on either an iScan or HiScan high-resolution optical imaging system. One overnight step is required to amplify the DNA. The DNA is denatured and isothermally amplified by whole-genome amplification; therefore, no PCR is required. Samples are hybridized to the arrays during a second overnight step. By the third day, the samples are ready to be scanned and analyzed. Amplified DNA may be stockpiled in large quantities, allowing bead arrays to be processed every day of the week, thereby maximizing throughput.
Basic Protocol, Issue 81, genomics, SNP, Genotyping, Infinium, iScan, HiScan, Illumina
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Authors: Justen Manasa, Siva Danaviah, Sureshnee Pillay, Prevashinee Padayachee, Hloniphile Mthiyane, Charity Mkhize, Richard John Lessells, Christopher Seebregts, Tobias F. Rinke de Wit, Johannes Viljoen, David Katzenstein, Tulio De Oliveira.
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Identification of Key Factors Regulating Self-renewal and Differentiation in EML Hematopoietic Precursor Cells by RNA-sequencing Analysis
Authors: Shan Zong, Shuyun Deng, Kenian Chen, Jia Qian Wu.
Institutions: The University of Texas Graduate School of Biomedical Sciences at Houston.
Hematopoietic stem cells (HSCs) are used clinically for transplantation treatment to rebuild a patient's hematopoietic system in many diseases such as leukemia and lymphoma. Elucidating the mechanisms controlling HSC self-renewal and differentiation is important for application of HSCs for research and clinical uses. However, it is not possible to obtain large quantities of HSCs due to their inability to proliferate in vitro. To overcome this hurdle, we used a mouse bone marrow derived cell line, the EML (Erythroid, Myeloid, and Lymphocytic) cell line, as a model system for this study. RNA-sequencing (RNA-Seq) has been increasingly used to replace microarrays for gene expression studies. We report here a detailed method of using RNA-Seq technology to investigate the potential key factors in regulation of EML cell self-renewal and differentiation. The protocol provided in this paper is divided into three parts. The first part explains how to culture EML cells and separate Lin-CD34+ and Lin-CD34- cells. The second part of the protocol offers detailed procedures for total RNA preparation and the subsequent library construction for high-throughput sequencing. The last part describes the method for RNA-Seq data analysis and explains how to use the data to identify differentially expressed transcription factors between Lin-CD34+ and Lin-CD34- cells. The most significantly differentially expressed transcription factors were identified to be the potential key regulators controlling EML cell self-renewal and differentiation. In the discussion section of this paper, we highlight the key steps for successful performance of this experiment. In summary, this paper offers a method of using RNA-Seq technology to identify potential regulators of self-renewal and differentiation in EML cells. The key factors identified are subjected to downstream functional analysis in vitro and in vivo.
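The final step, ranking differentially expressed transcription factors between the Lin-CD34+ and Lin-CD34- libraries, is sketched below in a deliberately simplified form: genes are ordered by absolute log2 fold change of normalized expression values. The gene labels and values are placeholders, and a real RNA-Seq analysis would use replicate-aware statistical testing rather than a bare fold change:

```python
# Toy sketch: ranking transcription factors by expression difference between the
# Lin-CD34+ and Lin-CD34- samples. Gene labels (tf_a, ...) and the normalized
# expression values are placeholders, not data from the study.
import math

expression = {            # gene: (CD34_positive, CD34_negative) normalized expression
    "tf_a": (95.0, 12.0),
    "tf_b": (8.0, 60.0),
    "tf_c": (70.0, 55.0),
}

def log2_fold_change(pos, neg, pseudocount=1.0):
    return math.log2((pos + pseudocount) / (neg + pseudocount))

ranked = sorted(expression.items(),
                key=lambda item: abs(log2_fold_change(*item[1])),
                reverse=True)
for gene, (pos, neg) in ranked:
    print(gene, round(log2_fold_change(pos, neg), 2))
```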
Genetics, Issue 93, EML Cells, Self-renewal, Differentiation, Hematopoietic precursor cell, RNA-Sequencing, Data analysis
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse fields, are applying phylogenetics to their research questions. However, many of these researchers are new to the topic, and this presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. The vision for this article is that it could serve as a practical training tool for researchers embarking on phylogenetic studies and also serve as an educational resource that could be incorporated into a classroom or teaching lab.
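As a taste of the final tree-building and visualization steps in such a pipeline, the sketch below builds a quick neighbor-joining tree from an existing multiple sequence alignment with Biopython. The alignment file name is hypothetical, and the article's own protocols rely on dedicated maximum likelihood and Bayesian programs rather than this distance-based shortcut:

```python
# Sketch: a quick distance-based tree from an existing multiple sequence alignment
# using Biopython. "example.aln" is a hypothetical Clustal-format alignment file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("example.aln", "clustal")
dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)                      # neighbor-joining tree
Phylo.draw_ascii(tree)                                       # text rendering of the topology
```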
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
An Experimental and Bioinformatics Protocol for RNA-seq Analyses of Photoperiodic Diapause in the Asian Tiger Mosquito, Aedes albopictus
Authors: Monica F. Poelchau, Xin Huang, Allison Goff, Julie Reynolds, Peter Armbruster.
Institutions: Georgetown University, The Ohio State University.
Photoperiodic diapause is an important adaptation that allows individuals to escape harsh seasonal environments via a series of physiological changes, most notably developmental arrest and reduced metabolism. Global gene expression profiling via RNA-Seq can provide important insights into the transcriptional mechanisms of photoperiodic diapause. The Asian tiger mosquito, Aedes albopictus, is an outstanding organism for studying the transcriptional bases of diapause due to its ease of rearing, easily induced diapause, and the genomic resources available. This manuscript presents a general experimental workflow for identifying diapause-induced transcriptional differences in A. albopictus. Rearing techniques, conditions necessary to induce diapause and non-diapause development, methods to estimate percent diapause in a population, and RNA extraction and integrity assessment for mosquitoes are documented. A workflow to process RNA-Seq data from Illumina sequencers culminates in a list of differentially expressed genes. The representative results demonstrate that this protocol can be used to effectively identify genes differentially regulated at the transcriptional level in A. albopictus due to photoperiodic differences. With modest adjustments, this workflow can be readily adapted to study the transcriptional bases of diapause or other important life history traits in other mosquitoes.
Genetics, Issue 93, Aedes albopictus, Asian tiger mosquito, photoperiodic diapause, RNA-Seq, de novo transcriptome assembly, mosquito husbandry
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
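For readers unfamiliar with the source analysis step, the sketch below shows a generic minimum-norm inverse solution using the MNE-Python package. The file names are hypothetical, MNE-Python is used here only as a stand-in for whichever analysis package a lab employs, and in the workflow described above the forward model would come from an individual or age-appropriate MRI rather than an adult template:

```python
# Sketch: cortical source estimation of an averaged ERP with a minimum-norm inverse.
# File names are hypothetical placeholders; an individual or age-matched head model
# would supply the forward solution in the pediatric workflow described above.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("child_erp-ave.fif", condition=0)   # averaged ERP data
fwd = mne.read_forward_solution("child-fwd.fif")              # head-model forward solution
cov = mne.read_cov("child-cov.fif")                           # noise covariance

inv = make_inverse_operator(evoked.info, fwd, cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")  # minimum-norm estimate
print(stc.data.shape)   # sources x time points
```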
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Authors: David C. Shih, Kevin C. Ho, Kyle M. Melnick, Ronald A. Rensink, Tobias R. Kollmann, Edgardo S. Fortuno III.
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and the flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analysis power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
Pyrosequencing: A Simple Method for Accurate Genotyping
Authors: Cristi King, Tiffany Scott-Horton.
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing(R) is one of the most thorough yet simple methods to date used to analyze polymorphisms. It also has the ability to identify tri-allelic polymorphisms, indels, and short-repeat polymorphisms, along with determining allele percentages for methylation or pooled sample assessment. In addition, there is a standardized control sequence that provides internal quality control. This method has led to rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR
Measuring Blood Pressure in Mice using Volume Pressure Recording, a Tail-cuff Method
Authors: Alan Daugherty, Debra Rateri, Lu Hong, Anju Balakrishnan.
Institutions: University of Kentucky.
The CODA 8-Channel High Throughput Non-Invasive Blood Pressure system measures blood pressure in up to 8 mice or rats simultaneously. The CODA tail-cuff system uses Volume Pressure Recording (VPR) to measure blood pressure by determining the tail blood volume. A specially designed differential pressure transducer and an occlusion tail-cuff measure the total blood volume in the tail without the need to obtain the individual pulse signal. Special attention is afforded to the length of the occlusion cuff in order to derive the most accurate blood pressure readings. VPR can easily obtain readings on dark-skinned rodents, such as C57BL6 mice, and is MRI compatible. The CODA system provides measurements of six different blood pressure parameters: systolic and diastolic blood pressure, heart rate, mean blood pressure, tail blood flow, and tail blood volume. Measurements can be made on either awake or anesthetized mice or rats. The CODA system includes a controller, laptop computer, software, cuffs, animal holders, infrared warming pads, and an infrared thermometer. There are seven different holder sizes, accommodating animals from mice as small as 8 grams to rats as large as 900 grams.
Medicine, Issue 27, blood pressure, systolic, diastolic, tail-cuff, mouse, rat

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library may simply not contain content relevant to the topic of a given abstract. In these cases, our algorithms still try to display the most relevant videos available, which can sometimes result in matched videos with only a slight relation.