One of the major questions in microbial ecology is “who is there?” This question can be answered using various tools, but one of the long-standing gold standards is to sequence 16S ribosomal RNA (rRNA) gene amplicons generated by domain-level PCR reactions amplifying from genomic DNA. Traditionally, this was performed by cloning and Sanger (capillary electrophoresis) sequencing of PCR amplicons. The advent of next-generation sequencing has tremendously simplified and increased the sequencing depth for 16S rRNA gene sequencing. The introduction of benchtop sequencers now allows small labs to perform their 16S rRNA sequencing in-house in a matter of days. Here, an approach for 16S rRNA gene amplicon sequencing using a benchtop next-generation sequencer is detailed. The environmental DNA is first amplified by PCR using primers that contain sequencing adapters and barcodes. The amplicons are then coupled to spherical particles via emulsion PCR. The particles are loaded onto a disposable chip, the chip is inserted into the sequencing machine, and the sequencing is performed. The sequences are retrieved in fastq format and filtered, and the barcodes are used to establish the sample membership of the reads. The filtered and binned reads are then further analyzed using publicly available tools. An example analysis is given in which the reads were classified with a taxonomy-finding algorithm within the software package Mothur. The method outlined here is simple, inexpensive, and straightforward, and should help smaller labs take advantage of the ongoing genomic revolution.
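The filtering and barcode-binning step described above can be sketched in a few lines. This is a minimal illustration, not the actual pipeline used in the protocol: the 8-nt barcode sequences, sample names, length cutoff, and quality threshold below are invented for the example.

```python
# Minimal sketch of post-run read processing: parse FASTQ records, assign
# each read to a sample by its 5' barcode, trim the barcode, and discard
# short or low-quality reads. Barcodes and thresholds are hypothetical.

BARCODES = {"ACGTACGT": "sampleA", "TGCATGCA": "sampleB"}  # invented
MIN_MEAN_Q = 20    # discard reads whose mean Phred quality is below this
MIN_LENGTH = 200   # discard reads shorter than this after barcode trimming

def parse_fastq(lines):
    """Yield (header, seq, qual) records from FASTQ lines."""
    it = iter(lines)
    for header in it:
        seq, _, qual = next(it), next(it), next(it)
        yield header.strip(), seq.strip(), qual.strip()

def mean_quality(qual):
    """Mean Phred score from a Sanger/Illumina 1.8+ quality string."""
    return sum(ord(c) - 33 for c in qual) / len(qual)

def demultiplex(records):
    """Bin reads by barcode, trim the barcode, and apply quality filters."""
    bins = {sample: [] for sample in BARCODES.values()}
    for header, seq, qual in records:
        sample = BARCODES.get(seq[:8])
        if sample is None:
            continue                      # unknown barcode: discard
        seq, qual = seq[8:], qual[8:]     # trim the barcode
        if len(seq) >= MIN_LENGTH and mean_quality(qual) >= MIN_MEAN_Q:
            bins[sample].append((header, seq, qual))
    return bins
```

In practice this job is done by the sequencer's own software or by Mothur's trim commands; the sketch only shows the logic of the step.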
Identification of Metabolically Active Bacteria in the Gut of the Generalist Spodoptera littoralis via DNA Stable Isotope Probing Using 13C-Glucose
Institutions: Max Planck Institute for Chemical Ecology.
Guts of most insects are inhabited by complex communities of symbiotic, nonpathogenic bacteria. Within such microbial communities it is possible to identify commensal or mutualistic bacterial species. The latter have been observed to serve multiple functions for the insect, e.g. helping in insect reproduction1, boosting the immune response2, pheromone production3, and nutrition, including the synthesis of essential amino acids4.
Due to the importance of these associations, many efforts have been made to characterize the communities down to the individual members. However, most of these efforts were either based on cultivation methods or relied on the generation of 16S rRNA gene fragments which were sequenced for final identification. Unfortunately, these approaches only identified the bacterial species present in the gut and provided no information on the metabolic activity of the microorganisms.
To characterize the metabolically active bacterial species in the gut of an insect, we used stable isotope probing (SIP) in vivo with 13C-glucose as a universal substrate. This is a promising culture-free technique that allows the linkage of microbial phylogenies to their particular metabolic activity, by tracking stable isotope-labeled atoms from substrates into microbial biomarkers such as DNA and RNA5. The incorporation of 13C isotopes into DNA increases the density of the labeled DNA compared to the unlabeled (12C) DNA. In the end, the 13C-labeled DNA or RNA is separated by density-gradient ultracentrifugation from its unlabeled 12C counterpart6. Subsequent molecular analysis of the separated nucleic acid isotopomers provides the connection between metabolic activity and identity of the species.
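The physical basis of the separation can be checked with back-of-the-envelope numbers. The sketch below combines the classic Schildkraut relation between GC content and CsCl buoyant density with an approximate literature value of ~0.036 g/ml for the density shift of fully 13C-labeled DNA; both figures are rough estimates, not measurements from this protocol.

```python
# Rough estimate of why density-gradient ultracentrifugation can resolve
# labeled from unlabeled DNA. The GC-density relation is the Schildkraut
# formula; the 0.036 g/ml shift at full 13C labeling is an approximate
# literature value. Treat both as ballpark numbers.

FULL_13C_SHIFT = 0.036  # g/ml, approximate increase at 100% 13C labeling

def buoyant_density(gc_fraction, labeled_fraction=0.0):
    """CsCl buoyant density (g/ml) of DNA with the given GC content and
    fractional 13C labeling (0.0 = unlabeled, 1.0 = fully labeled)."""
    unlabeled = 1.660 + 0.098 * gc_fraction
    return unlabeled + FULL_13C_SHIFT * labeled_fraction

unlabeled = buoyant_density(0.50)        # 12C DNA at 50% GC
labeled = buoyant_density(0.50, 1.0)     # fully 13C-labeled, same GC
```

The ~0.04 g/ml separation is small but well within the resolving power of a CsCl gradient, which is the point of the technique.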
Here, we present the protocol used to characterize the metabolically active bacteria in the gut of a generalist insect (our model system), Spodoptera littoralis. The phylogenetic analysis of the DNA was done using pyrosequencing, which allowed high resolution and precision in the identification of the insect gut bacterial community. As the main substrate, 13C-labeled glucose was used in the experiments. The substrate was fed to the insects using an artificial diet.
Microbiology, Issue 81, Insects, Sequence Analysis, Genetics, Microbial, Bacteria, Lepidoptera, Spodoptera littoralis, stable-isotope-probing (SIP), pyro-sequencing, 13C-glucose, gut, microbiota, bacteria
Pyrosequencing for Microbial Identification and Characterization
Institutions: Johns Hopkins University; Qiagen Sciences, Inc.
Pyrosequencing is a versatile technique that facilitates microbial genome sequencing and can be used to identify bacterial species, discriminate bacterial strains, and detect genetic mutations that confer resistance to anti-microbial agents. The advantages of pyrosequencing for microbiology applications include rapid and reliable high-throughput screening and accurate identification of microbes and microbial genome mutations. Pyrosequencing involves sequencing of DNA by synthesizing the complementary strand a single base at a time, while determining the specific nucleotide being incorporated during the synthesis reaction. The reaction occurs on immobilized single-stranded template DNA, where the four deoxyribonucleotides (dNTPs) are added sequentially and the unincorporated dNTPs are enzymatically degraded before addition of the next dNTP to the synthesis reaction. Detection of the specific base incorporated into the template is monitored by generation of chemiluminescent signals. The order of dNTPs that produce the chemiluminescent signals determines the DNA sequence of the template. The real-time sequencing capability of pyrosequencing technology enables rapid microbial identification in a single assay. In addition, the pyrosequencing instrument can analyze the full genetic diversity of anti-microbial drug resistance, including typing of SNPs, point mutations, insertions, and deletions, as well as quantification of multiple gene copies that may occur in some anti-microbial resistance patterns.
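The sequencing-by-synthesis logic described above (cyclic dNTP dispensation, with light proportional to the number of bases incorporated) can be illustrated with a toy pyrogram decoder. The dispensation order and peak heights below are invented for the example, not data from a real run.

```python
# Toy reconstruction of how a pyrosequencing run turns light signals into a
# sequence: dNTPs are dispensed in a fixed cyclic order, and the measured
# chemiluminescent peak at each dispensation is proportional to how many
# bases of that type were incorporated (0 = none, n = homopolymer of n).

DISPENSATION_ORDER = "ACGT"

def decode_pyrogram(signals):
    """Rebuild the synthesized strand from per-dispensation peak heights."""
    sequence = []
    for i, peak in enumerate(signals):
        base = DISPENSATION_ORDER[i % len(DISPENSATION_ORDER)]
        sequence.append(base * round(peak))  # taller peak = longer homopolymer
    return "".join(sequence)

# Dispensing A, C, G, T, A, C, G, T with these example peak heights:
seq = decode_pyrogram([1, 0, 2, 1, 0, 1, 0, 0])
```

Real instruments additionally normalize peak heights and model signal decay; the sketch shows only the core decoding idea.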
Microbiology, Issue 78, Genetics, Molecular Biology, Basic Protocols, Genomics, Eukaryota, Bacteria, Viruses, Bacterial Infections and Mycoses, Virus Diseases, Diagnosis, Therapeutics, Equipment and Supplies, Technology, Industry, and Agriculture, Life Sciences (General), Pyrosequencing, DNA, Microbe, PCR, primers, Next-Generation, high-throughput, sequencing
Chromatin Immunoprecipitation (ChIP) using Drosophila tissue
Institutions: Johns Hopkins University.
Epigenetics remains a rapidly developing field that studies how the chromatin state contributes to differential gene expression in distinct cell types at different developmental stages. Epigenetic regulation contributes to a broad spectrum of biological processes, including cellular differentiation during embryonic development and homeostasis in adulthood. A critical strategy in epigenetic studies is to examine how various histone modifications and chromatin factors regulate gene expression. To address this, Chromatin Immunoprecipitation (ChIP) is used widely to obtain a snapshot of the association of particular factors with DNA in the cells of interest.
The ChIP technique commonly uses cultured cells as starting material, which can be obtained in abundance and homogeneity to generate reproducible data. However, there are several caveats: first, the environment for growing cells in a Petri dish differs from that in vivo, and thus may not reflect the endogenous chromatin state of cells in a living organism; second, not all cell types can be cultured ex vivo, and there are only a limited number of cell lines from which enough material for a ChIP assay can be obtained.
Here we describe a method for performing ChIP experiments using Drosophila tissues. The starting material is dissected tissue from a living animal, and thus accurately reflects the endogenous chromatin state. The adaptability of this method to many different types of tissue will allow researchers to address many more biologically relevant questions regarding epigenetic regulation in vivo1,2. Combining this method with high-throughput sequencing (ChIP-seq) will further allow researchers to obtain an epigenomic landscape.
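ChIP enrichment at a locus is commonly quantified by qPCR as percent of input. The sketch below shows the standard calculation; the Ct values and the assumed 1% input fraction are illustrative examples, not values from the article.

```python
import math

# Standard percent-input calculation for ChIP-qPCR: correct the input Ct for
# the fraction of chromatin saved as input, then express the IP signal as a
# percentage of that corrected input. Example numbers are invented.

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent of input chromatin recovered in the IP for one amplicon."""
    # An input that is only 1% of the chromatin amplifies log2(100) cycles
    # later than the full amount would; shift its Ct accordingly.
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# IP amplifies at Ct 24, the 1% input at Ct 26:
enrichment = percent_input(24.0, 26.0)
```

Comparing this value at a target locus versus a negative-control locus gives the fold enrichment typically reported.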
Genetics, Issue 61, ChIP, Drosophila, testes, q-PCR, high throughput sequencing, epi-genetics
Detecting Somatic Genetic Alterations in Tumor Specimens by Exon Capture and Massively Parallel Sequencing
Institutions: Memorial Sloan-Kettering Cancer Center.
Efforts to detect and investigate key oncogenic mutations have proven valuable to facilitate the appropriate treatment for cancer patients. The establishment of high-throughput, massively parallel "next-generation" sequencing has aided the discovery of many such mutations. To enhance the clinical and translational utility of this technology, platforms must be high-throughput, cost-effective, and compatible with formalin-fixed paraffin embedded (FFPE) tissue samples that may yield small amounts of degraded or damaged DNA. Here, we describe the preparation of barcoded and multiplexed DNA libraries followed by hybridization-based capture of targeted exons for the detection of cancer-associated mutations in fresh frozen and FFPE tumors by massively parallel sequencing. This method enables the identification of sequence mutations, copy number alterations, and select structural rearrangements involving all targeted genes. Targeted exon sequencing offers the benefits of high throughput, low cost, and deep sequence coverage, thus conferring high sensitivity for detecting low frequency mutations.
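Detecting low-frequency mutations at deep coverage ultimately comes down to distinguishing true variant reads from sequencing error. As an illustration of that reasoning (not the variant-calling pipeline used in the article), the sketch below computes an exact binomial tail probability for observing at least k alternate reads at a position, under an assumed per-base error rate; the error rate and counts are invented.

```python
from math import comb

# How surprising is it to see k alternate-allele reads out of n at one
# position if sequencing error alone (rate p per base) were responsible?
# A small tail probability supports a real low-frequency mutation.

def binomial_pvalue(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 8 alternate reads at 500x coverage with an assumed 0.5% error rate:
p_val = binomial_pvalue(8, 500, 0.005)
```

Production callers add strand-bias, base-quality, and mapping filters on top of this kind of statistic, but the core question is the same.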
Molecular Biology, Issue 80, Molecular Diagnostic Techniques, High-Throughput Nucleotide Sequencing, Genetics, Neoplasms, Diagnosis, Massively parallel sequencing, targeted exon sequencing, hybridization capture, cancer, FFPE, DNA mutations
RNA-Seq Analysis of Differential Gene Expression in Electroporated Chick Embryonic Spinal Cord
Institutions: Universidade de São Paulo.
In ovo electroporation of the chick neural tube is a fast and inexpensive method for identification of gene function during neural development. Genome-wide analysis of differentially expressed transcripts after such an experimental manipulation has the potential to uncover an almost complete picture of the downstream effects caused by the transfected construct. This work describes a simple method for comparing transcriptomes from samples of transfected embryonic spinal cords, comprising all steps between electroporation and identification of differentially expressed transcripts. The first stage consists of guidelines for electroporation, instructions for dissection of transfected spinal cord halves from HH23 embryos in a ribonuclease-free environment, and extraction of high-quality RNA samples suitable for transcriptome sequencing. The next stage is bioinformatic analysis, with general guidelines for filtering and comparison of RNA-Seq datasets on the Galaxy public server, which eliminates the need for a local computational infrastructure for small- to medium-scale experiments. The representative results show that the dissection methods generate high-quality RNA samples and that the transcriptomes obtained from two control samples are essentially the same, an important requirement for detection of differentially expressed genes in experimental samples. Furthermore, one example is provided in which experimental overexpression of a DNA construct can be visually verified after comparison with control samples. The application of this method may be a powerful tool to facilitate new discoveries on the function of neural factors involved in early spinal cord development.
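The comparison of transcriptomes between control and electroporated samples rests on two steps: normalizing for sequencing depth, then ranking genes by fold change. The sketch below shows those steps in miniature with invented gene counts; it stands in for, and is far simpler than, the RNA-Seq tools available on the Galaxy server.

```python
import math

# Minimal differential-expression sketch: counts-per-million normalization
# so libraries of different depth are comparable, then per-gene log2 fold
# change with a pseudocount to guard against zeros. Counts are invented.

def cpm(counts):
    """Counts-per-million normalization for one library."""
    total = sum(counts.values())
    return {g: c * 1e6 / total for g, c in counts.items()}

def log2_fold_changes(control, treated, pseudocount=0.5):
    """Per-gene log2(treated/control) on CPM values."""
    a, b = cpm(control), cpm(treated)
    return {g: math.log2((b[g] + pseudocount) / (a[g] + pseudocount))
            for g in control}

control = {"geneA": 100, "geneB": 900}
treated = {"geneA": 300, "geneB": 700}
fc = log2_fold_changes(control, treated)
```

Real analyses additionally model biological variance across replicates (e.g. with dedicated differential-expression packages); fold change alone is only a first-pass ranking.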
Developmental Biology, Issue 93, chicken embryo, in ovo electroporation, spinal cord, RNA-Seq, transcriptome profiling, Galaxy workflow
Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice
Institutions: University of North Carolina School of Medicine, Emory University School of Medicine.
Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro and in vivo, and may be useful in preclinical drug development for these devastating diseases.
Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Institutions: Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set-point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set-point viral load and longitudinal CD4+ T cell decline.
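Replication capacity from a spreading infection is often summarized as the slope of log-transformed viral antigen (e.g. p24) accumulation over time during the exponential phase. The least-squares sketch below illustrates that summary statistic with invented day/readout values; it is not the quantification method prescribed by this protocol.

```python
import math

# Ordinary least-squares slope of log10(readout) versus day, a common
# single-number summary of in vitro replication capacity. The day and
# readout values in the example are invented.

def log_growth_slope(days, readouts):
    """OLS slope of log10(readout) against day."""
    ys = [math.log10(r) for r in readouts]
    n = len(days)
    mx, my = sum(days) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(days, ys))
    den = sum((x - mx) ** 2 for x in days)
    return num / den

# A ten-fold increase per day gives a slope of 1.0 on the log10 scale:
slope = log_growth_slope([2, 4, 6], [1e2, 1e4, 1e6])
```

Slopes from chimeric viruses can then be normalized to the wild-type MJ4 backbone run in the same experiment to make values comparable across assays.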
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
A High Throughput Screen for Biomining Cellulase Activity from Metagenomic Libraries
Institutions: University of British Columbia - UBC.
Cellulose, the most abundant source of organic carbon on the planet, has wide-ranging industrial applications with increasing emphasis on biofuel production1. Chemical methods to modify or degrade cellulose typically require strong acids and high temperatures. As such, enzymatic methods have become prominent in the bioconversion process. While the identification of active cellulases from bacterial and fungal isolates has been somewhat effective, the vast majority of microbes in nature resist laboratory cultivation. Environmental genomic, also known as metagenomic, screening approaches have great promise in bridging the cultivation gap in the search for novel bioconversion enzymes. Metagenomic screening approaches have successfully recovered novel cellulases from environments as varied as soils2, buffalo rumen3, and the termite hindgut4, using carboxymethylcellulose (CMC) agar plates stained with Congo red dye (based on the method of Teather and Wood5). However, the CMC method is limited in throughput, is not quantitative, and manifests a low signal-to-noise ratio6. Other methods have been reported7,8, but each uses an agar plate-based assay, which is undesirable for high-throughput screening of large-insert genomic libraries. Here we present a solution-based screen for cellulase activity using a chromogenic dinitrophenyl (DNP)-cellobioside substrate9. Our library was cloned into the pCC1 copy-control fosmid to increase assay sensitivity through copy-number induction10. The method uses one-pot chemistry in 384-well microplates with the final readout provided as an absorbance measurement. This readout is quantitative, sensitive, and automated, with a throughput of up to one hundred 384-well plates per day using a liquid handler and a plate reader with an attached stacking system.
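Turning the plate-reader absorbances into hit calls might look like the following: z-score each well against the plate distribution and flag strong outliers whose DNP release sits well above background. The readings and the 3-sigma cutoff below are illustrative choices, not parameters from the published screen.

```python
import statistics

# Simple hit calling for a 384-well absorbance screen: compute per-plate
# mean and standard deviation, then flag wells whose z-score exceeds a
# cutoff. Absorbance values and cutoff are invented for the example.

def call_hits(absorbances, cutoff=3.0):
    """Return indices of wells whose z-score exceeds the cutoff."""
    mean = statistics.mean(absorbances)
    sd = statistics.stdev(absorbances)
    return [i for i, a in enumerate(absorbances)
            if (a - mean) / sd > cutoff]

# 95 background wells plus one strongly active clone in the last well:
plate = [0.05] * 95 + [1.2]
hits = call_hits(plate)
```

Real screens typically also correct for plate-position effects and use dedicated negative-control wells rather than the whole-plate distribution.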
Microbiology, Issue 48, Cellulase, cellulose, DNP-cellobioside, metagenomics, metagenome, environmental genomics, functional metagenomics
DNA Stable-Isotope Probing (DNA-SIP)
Institutions: University of Waterloo.
DNA stable-isotope probing (DNA-SIP) is a powerful technique for identifying active microorganisms that assimilate particular carbon substrates and nutrients into cellular biomass. As such, this cultivation-independent technique has been an important methodology for assigning metabolic function to the diverse communities inhabiting a wide range of terrestrial and aquatic environments. Following the incubation of an environmental sample with stable-isotope labelled compounds, extracted nucleic acid is subjected to density gradient ultracentrifugation and subsequent gradient fractionation to separate nucleic acids of differing densities. Purification of DNA from cesium chloride retrieves labelled and unlabelled DNA for subsequent molecular characterization (e.g. fingerprinting, microarrays, clone libraries, metagenomics). This JoVE video protocol provides visual step-by-step explanations of the protocol for density gradient ultracentrifugation, gradient fractionation and recovery of labelled DNA. The protocol also includes sample SIP data and highlights important tips and cautions that must be considered to ensure a successful DNA-SIP analysis.
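After gradient fractionation, each fraction is usually assigned a density from refractometer readings via a linear calibration. The sketch below uses one commonly cited set of CsCl coefficients; the coefficients depend on temperature and instrument, so in practice the line should be re-derived from your own density standards.

```python
# Convert refractive-index readings from gradient fractions into CsCl
# densities. The linear calibration rho = 10.8601 * nD - 13.4974 is a
# commonly cited one for CsCl near 20 degrees C; verify it against known
# density standards on your own refractometer before relying on it.

def cscl_density(refractive_index):
    """Approximate CsCl solution density (g/ml) from refractive index."""
    return 10.8601 * refractive_index - 13.4974

# Readings spanning a typical DNA-SIP gradient:
densities = [round(cscl_density(nd), 3) for nd in (1.3990, 1.4020, 1.4050)]
```

Plotting density against fraction number confirms that the gradient formed correctly before any DNA is recovered from the fractions.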
Microbiology, Issue 42, DNA stable-isotope probing, microbiology, microbial ecology, cultivation-independent, metagenomics, 16S rRNA gene community analysis, substrates, microbial ecology, enrichment
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work.
Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized.
Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types.
To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which in turn define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm had a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
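BCP itself is a Bayesian HMM with infinite hidden states; as a minimal illustration of the underlying idea (locating positions where the mean read density shifts), the sketch below finds the single least-squares change point in a toy coverage track. It is not the BCP algorithm, just the simplest version of the problem it solves.

```python
# Scan every possible split of a coverage track and keep the one that
# minimizes the within-segment squared error, i.e. the best single
# change point in mean read density. The toy track is invented.

def best_change_point(track):
    """Index k minimizing SSE of track[:k] plus SSE of track[k:]."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)
    return min(range(1, len(track)),
               key=lambda k: sse(track[:k]) + sse(track[k:]))

# Read density jumps from background (~2) to an enriched island (~10):
track = [2, 3, 2, 2, 3, 10, 11, 9, 10, 10]
k = best_change_point(track)
```

Extending this to many change points with priors over segment means is what makes the Bayesian formulation attractive: segment boundaries emerge from the model rather than from peak-shape heuristics.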
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Identification of Key Factors Regulating Self-renewal and Differentiation in EML Hematopoietic Precursor Cells by RNA-sequencing Analysis
Institutions: The University of Texas Graduate School of Biomedical Sciences at Houston.
Hematopoietic stem cells (HSCs) are used clinically for transplantation treatment to rebuild a patient's hematopoietic system in many diseases such as leukemia and lymphoma. Elucidating the mechanisms controlling HSC self-renewal and differentiation is important for application of HSCs for research and clinical uses. However, it is not possible to obtain large quantities of HSCs due to their inability to proliferate in vitro. To overcome this hurdle, we used a mouse bone marrow derived cell line, the EML (Erythroid, Myeloid, and Lymphocytic) cell line, as a model system for this study.
RNA-sequencing (RNA-Seq) has been increasingly used to replace microarray for gene expression studies. We report here a detailed method of using RNA-Seq technology to investigate the potential key factors in regulation of EML cell self-renewal and differentiation. The protocol provided in this paper is divided into three parts. The first part explains how to culture EML cells and separate Lin-CD34+ and Lin-CD34- cells. The second part of the protocol offers detailed procedures for total RNA preparation and the subsequent library construction for high-throughput sequencing. The last part describes the method for RNA-Seq data analysis and explains how to use the data to identify differentially expressed transcription factors between Lin-CD34+ and Lin-CD34- cells. The most significantly differentially expressed transcription factors were identified to be the potential key regulators controlling EML cell self-renewal and differentiation. In the discussion section of this paper, we highlight the key steps for successful performance of this experiment.
In summary, this paper offers a method of using RNA-Seq technology to identify potential regulators of self-renewal and differentiation in EML cells. The key factors identified are subjected to downstream functional analysis in vitro and in vivo.
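The last analysis step, selecting differentially expressed transcription factors between the Lin-CD34+ and Lin-CD34- populations, can be sketched as a filter-and-rank over normalized expression values. Gene names, values, and thresholds below are invented; the actual analysis uses a full RNA-Seq statistical pipeline.

```python
import math

# Candidate-regulator selection: given normalized per-gene expression
# (e.g. FPKM) in the two sorted populations and a list of annotated
# transcription factors, keep TFs passing an expression floor and rank
# them by absolute log2 fold change. All values here are invented.

def candidate_regulators(cd34_pos, cd34_neg, tf_list,
                         min_expr=1.0, min_abs_l2fc=1.0):
    """Rank transcription factors by |log2 FC| between populations."""
    hits = []
    for tf in tf_list:
        a, b = cd34_pos.get(tf, 0.0), cd34_neg.get(tf, 0.0)
        if max(a, b) < min_expr:
            continue                      # too lowly expressed to trust
        l2fc = math.log2((a + 0.5) / (b + 0.5))   # pseudocount vs zeros
        if abs(l2fc) >= min_abs_l2fc:
            hits.append((tf, round(l2fc, 2)))
    return sorted(hits, key=lambda t: -abs(t[1]))

pos = {"tfA": 40.0, "tfB": 5.0, "tfC": 0.2}
neg = {"tfA": 4.0, "tfB": 5.5, "tfC": 0.1}
ranked = candidate_regulators(pos, neg, ["tfA", "tfB", "tfC"])
```

The top-ranked factors from a list like this are the ones carried forward into the functional assays mentioned above.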
Genetics, Issue 93, EML Cells, Self-renewal, Differentiation, Hematopoietic precursor cell, RNA-Sequencing, Data analysis
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Institutions: Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis, suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remains prohibitive for many investigators.
To address this need, we have developed a pooled sequencing approach1,9 and a novel software package1 for highly accurate rare variant detection from the resulting data. The ability to pool genomes from entire populations of affected individuals and survey the degree of genetic variation at multiple targeted regions in a single sequencing library provides excellent cost and time savings over traditional single-sample sequencing methodology. With a mean sequencing coverage per allele of 25-fold, our custom algorithm, SPLINTER, uses an internal variant calling control strategy to call insertions, deletions, and substitutions up to four base pairs in length with high sensitivity and specificity from pools of up to 1 mutant allele in 500 individuals. Here we describe the method for preparing the pooled sequencing library, followed by step-by-step instructions on how to use the SPLINTER package for pooled sequencing analysis (http://www.ibridgenetwork.org/wustl/splinter). We show a comparison between pooled sequencing of 947 individuals, all of whom also underwent genome-wide array genotyping, at over 20 kb of sequencing per person. Concordance between genotyping and the tagged and novel variants called in the pooled sample was excellent. This method can be easily scaled up to any number of genomic loci and any number of individuals. By incorporating the internal positive and negative amplicon controls at ratios that mimic the population under study, the algorithm can be calibrated for optimal performance. This strategy can also be modified for use with hybridization capture or individual-specific barcodes and can be applied to the sequencing of naturally heterogeneous samples, such as tumor DNA.
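The sensitivity claim (1 mutant allele in 500 individuals at 25-fold per-allele coverage) follows from simple pool arithmetic, sketched here with the pool sizes stated in the text.

```python
# Pool arithmetic behind rare-variant detection sensitivity: one variant
# allele in a diploid pool of N individuals has frequency 1/(2N), and the
# expected number of variant-carrying reads at a site is depth * frequency.

def pool_allele_frequency(mutant_alleles, n_individuals):
    """Frequency of the variant among all alleles in a diploid pool."""
    return mutant_alleles / (2 * n_individuals)

def expected_mutant_reads(depth_at_site, freq):
    """Expected reads carrying the variant at one position."""
    return depth_at_site * freq

freq = pool_allele_frequency(1, 500)             # 1 allele among 1,000
depth = 25 * 2 * 500                             # 25-fold per allele
reads = expected_mutant_reads(depth, freq)
```

At 25-fold per-allele coverage a single mutant allele is thus expected to contribute about 25 reads, which is why an error-aware caller like SPLINTER can separate it from sequencing noise.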
Genetics, Issue 64, Genomics, Cancer Biology, Bioinformatics, Pooled DNA sequencing, SPLINTER, rare genetic variants, genetic screening, phenotype, high throughput, computational analysis, DNA, PCR, primers
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising already established protocols and some of the most recently developed techniques. In particular, the metagenomic sequencing techniques used for viral and bacterial community characterization have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample-processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collect and process environmental samples. The parameters addressed with these protocols yield the minimum of information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. It gives easy-to-follow guidelines to conduct comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Depletion of Ribosomal RNA for Mosquito Gut Metagenomic RNA-seq
Institutions: New Mexico State University.
The mosquito gut accommodates dynamic microbial communities across different stages of the insect's life cycle. Characterization of the genetic capacity and functionality of the gut community will provide insight into the effects of gut microbiota on mosquito life traits. Metagenomic RNA-Seq has become an important tool for analyzing transcriptomes from the various microbes present in a microbial community. Messenger RNA usually comprises only 1-3% of total RNA, while rRNA constitutes approximately 90%. It is challenging to enrich messenger RNA from a metagenomic microbial RNA sample because most prokaryotic mRNA species lack stable poly(A) tails, which prevents oligo d(T)-mediated mRNA isolation. Here, we describe a protocol that employs sample-derived rRNA capture probes to remove rRNA from a metagenomic total RNA sample. To begin, both mosquito and microbial small- and large-subunit rRNA fragments are amplified from a metagenomic community DNA sample. Then, community-specific biotinylated antisense ribosomal RNA probes are synthesized in vitro using T7 RNA polymerase. The biotinylated rRNA probes are hybridized to the total RNA. The hybrids are captured by streptavidin-coated beads and removed from the total RNA. This subtraction-based protocol efficiently removes both mosquito and microbial rRNA from the total RNA sample. The mRNA-enriched sample is further processed for RNA amplification and RNA-Seq.
Genetics, Issue 74, Infection, Infectious Diseases, Molecular Biology, Cellular Biology, Microbiology, Genomics, biology (general), genetics (animal and plant), life sciences, Eukaryota, Bacteria, metagenomics, metatranscriptome, RNA-seq, rRNA depletion, mRNA enrichment, mosquito gut microbiome, RNA, DNA, sequencing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
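To make the DoE vocabulary concrete, the simplest possible design, a two-level full factorial, can be enumerated as below. The factor names and levels are invented for illustration; the study itself used software-guided optimal designs over its actual expression parameters, not this toy layout.

```python
from itertools import product

# Illustrative two-level full-factorial design for three hypothetical
# factors affecting transient expression. These factors and levels are
# placeholders, not the study's actual parameters.
factors = {
    "promoter":        ["35S", "double 35S"],
    "plant_age_days":  [35, 42],
    "incubation_temp": [22, 25],
}

def full_factorial(factors):
    """Enumerate every combination of factor levels as a list of dicts."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*factors.values())]

runs = full_factorial(factors)
print(len(runs))  # 2**3 = 8 experimental runs
```

A full factorial grows exponentially with the number of factors, which is exactly why DoE software prunes it to an optimal subset and augments the design step-wise, as described above.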
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
RNA-seq Analysis of Transcriptomes in Thrombin-treated and Control Human Pulmonary Microvascular Endothelial Cells
Institutions: Children's Mercy Hospital and Clinics, School of Medicine, University of Missouri-Kansas City.
The characterization of gene expression in cells via measurement of mRNA levels is a useful tool in determining how the transcriptional machinery of the cell is affected by external signals (e.g. drug treatment), or how cells differ between a healthy state and a diseased state. With the advent and continuous refinement of next-generation DNA sequencing technology, RNA-sequencing (RNA-seq) has become an increasingly popular method of transcriptome analysis to catalog all species of transcripts, to determine the transcriptional structure of all expressed genes, and to quantify the changing expression levels of the total set of transcripts in a given cell, tissue or organism1,2. RNA-seq is gradually replacing DNA microarrays as the preferred method for transcriptome analysis because it has the advantages of profiling a complete transcriptome, providing digital-type data (the copy number of any transcript), and not relying on any known genomic sequence3.
Here, we present a complete and detailed protocol to apply RNA-seq to profile transcriptomes in human pulmonary microvascular endothelial cells with or without thrombin treatment. This protocol is based on our recent published study entitled "RNA-seq Reveals Novel Transcriptome of Genes and Their Isoforms in Human Pulmonary Microvascular Endothelial Cells Treated with Thrombin,"4
in which we successfully performed the first complete transcriptome analysis of human pulmonary microvascular endothelial cells treated with thrombin using RNA-seq. The study yielded unprecedented resources for further experimentation to gain insights into molecular mechanisms underlying thrombin-mediated endothelial dysfunction in the pathogenesis of inflammatory conditions, cancer, diabetes, and coronary heart disease, and provides potential new leads for therapeutic targets for these diseases.
The descriptive text of this protocol is divided into four parts. The first part describes the treatment of human pulmonary microvascular endothelial cells with thrombin and RNA isolation, quality analysis and quantification. The second part describes library construction and sequencing. The third part describes the data analysis. The fourth part describes an RT-PCR validation assay. Representative results of several key steps are displayed. Useful tips or precautions to boost success in key steps are provided in the Discussion section. Although this protocol uses human pulmonary microvascular endothelial cells treated with thrombin, it can be generalized to profile transcriptomes in both mammalian and non-mammalian cells and in tissues treated with different stimuli or inhibitors, or to compare transcriptomes in cells or tissues between a healthy state and a disease state.
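As a minimal illustration of the quantitative comparison behind the RT-PCR validation step, relative expression is conventionally computed with the 2^-ddCt method, normalizing the target gene to a reference gene in both the treated and control samples. The gene identities and Ct values below are invented for illustration.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs. control) via the
    standard 2^-ddCt method, normalized to a reference gene's Ct values."""
    dct_treated = ct_target_treated - ct_ref_treated    # normalize treated
    dct_control = ct_target_control - ct_ref_control    # normalize control
    ddct = dct_treated - dct_control                    # treated vs. control
    return 2 ** (-ddct)

# Invented example: the target crosses threshold 2 cycles earlier after
# treatment, relative to the reference gene -> ~4-fold up-regulation.
print(fold_change_ddct(22.0, 18.0, 24.0, 18.0))  # 4.0
```

Fold changes computed this way from RT-PCR can then be compared against the expression changes estimated from the RNA-seq data to validate the sequencing results.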
Genetics, Issue 72, Molecular Biology, Immunology, Medicine, Genomics, Proteins, RNA-seq, Next Generation DNA Sequencing, Transcriptome, Transcription, Thrombin, Endothelial cells, high-throughput, DNA, genomic DNA, RT-PCR, PCR
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
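The staged filter-and-rank pattern described above (sequence selection, then fold specificity, then binding affinity, with a rank-ordered output) can be sketched generically. The scoring functions below are toy stand-ins; Protein WISDOM uses physics-based energy models, not these scores.

```python
# Generic sketch of ranking candidate sequences through a series of
# scoring stages, mimicking a design pipeline that returns the best
# (lowest-energy) candidates first. The stages here are toy functions.

def rank_candidates(sequences, stages):
    """Score each candidate with every stage and return (sequence, score)
    pairs sorted so the lowest total score (best candidate) comes first."""
    scored = [(seq, sum(stage(seq) for stage in stages))
              for seq in sequences]
    return sorted(scored, key=lambda pair: pair[1])

# Toy stages: penalize deviation from a target length of 5, and count
# of glycines (purely illustrative scoring criteria).
stages = [lambda s: abs(len(s) - 5), lambda s: s.count("G")]
ranked = rank_candidates(["ACDEF", "GGGGG", "ACD"], stages)
print(ranked[0][0])  # "ACDEF"
```

The design choice this illustrates is simply that each stage contributes to an overall score, and users receive the full rank-ordered list rather than a single winner, enabling a quantitative assessment of the whole design.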
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs.
LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors.
While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Measuring Neural and Behavioral Activity During Ongoing Computerized Social Interactions: An Examination of Event-Related Brain Potentials
Institutions: Illinois Wesleyan University.
Social exclusion is a complex social phenomenon with powerful negative consequences. Given the impact of social exclusion on mental and emotional health, an understanding of how perceptions of social exclusion develop over the course of a social interaction is important for advancing treatments aimed at lessening the harmful costs of being excluded. To date, most scientific examinations of social exclusion have looked at exclusion after a social interaction has been completed. While this has been very helpful in developing an understanding of what happens to a person following exclusion, it has not helped to clarify the moment-to-moment dynamics of the process of social exclusion. Accordingly, the current protocol was developed to obtain an improved understanding of social exclusion by examining the patterns of event-related brain activation that are present during social interactions. This protocol allows greater precision and sensitivity in detailing the social processes that lead people to feel as though they have been excluded from a social interaction. Importantly, the current protocol can be adapted to include research projects that vary the nature of exclusionary social interactions by altering how frequently participants are included, how long the periods of exclusion will last in each interaction, and when exclusion will take place during the social interactions. Further, the current protocol can be used to examine variables and constructs beyond those related to social exclusion. This capability to address a variety of applications across psychology by obtaining both neural and behavioral data during ongoing social interactions suggests the present protocol could be at the core of a developing area of scientific inquiry related to social interactions.
Behavior, Issue 93, Event-related brain potentials (ERPs), Social Exclusion, Neuroscience, N2, P3, Cognitive Control
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif-finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT); PRISM10, which finds degenerate motifs (ASCGWT); and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well.
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor.
Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also, optionally, by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
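The over-representation principle underlying SCOPE's scoring can be illustrated with a basic hypergeometric calculation: how surprising is it that so many genes in a coregulated set contain a motif, given its genome-wide frequency? This is a generic sketch of that statistical idea, not SCOPE's actual scoring function, and the counts used are invented.

```python
from math import comb

def overrepresentation_pvalue(genes_with_motif_in_set, set_size,
                              genes_with_motif_in_genome, genome_size):
    """P(X >= k) under a hypergeometric null: the probability that a
    random gene set of this size contains at least this many
    motif-bearing genes, given the genome-wide motif frequency."""
    k, n = genes_with_motif_in_set, set_size
    K, N = genes_with_motif_in_genome, genome_size
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Invented counts: 8 of 10 coregulated genes contain the motif, while
# only 50 of 6,000 genes genome-wide do -> vanishingly small p-value.
p = overrepresentation_pvalue(8, 10, 50, 6000)
print(p < 1e-10)  # True
```

A motif with a small p-value of this kind is a candidate regulatory element, which is the signal SCOPE combines with motif position preference when ranking its results.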
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif