Metagenomic libraries archive large fragments of contiguous genomic sequences from microorganisms without requiring prior cultivation. Generating a streamlined procedure for creating and screening metagenomic libraries is therefore useful for efficient high-throughput investigations into the genetic and metabolic properties of uncultured microbial assemblages. Here, key protocols are presented on video, which we propose is the most useful format for accurately describing a long process that alternately depends on robotic instrumentation and (human) manual interventions. First, we employed robotics to spot library clones onto high-density macroarray membranes, each of which can contain duplicate colonies from twenty-four 384-well library plates. Automation is essential for this procedure not only for accuracy and speed, but also due to the miniaturization of scale required to fit the large number of library clones into highly dense spatial arrangements. Once generated, we next demonstrated how the macroarray membranes can be screened for genes of interest using modified versions of standard protocols for probe labeling, membrane hybridization, and signal detection. We complemented the visual demonstration of these procedures with detailed written descriptions of the steps involved and the materials required, all of which are available online alongside the video.
Split-and-pool Synthesis and Characterization of Peptide Tertiary Amide Library
Institutions: The Scripps Research Institute.
Peptidomimetics are great sources of protein ligands. The oligomeric nature of these compounds enables us to access large synthetic libraries on solid phase by using combinatorial chemistry. One of the best-studied classes of peptidomimetics is peptoids. Peptoids are easy to synthesize and have been shown to be proteolysis-resistant and cell-permeable. Over the past decade, many useful protein ligands have been identified through screening of peptoid libraries. However, most of the ligands identified from peptoid libraries do not display high affinity, with rare exceptions. This may be due, in part, to the lack of chiral centers and conformational constraints in peptoid molecules. Recently, we described a new synthetic route to access peptide tertiary amides (PTAs). PTAs are a superfamily of peptidomimetics that includes but is not limited to peptides, peptoids and N-methylated peptides. With side chains on both the α-carbon and the main-chain nitrogen atoms, the conformations of these molecules are greatly constrained by steric hindrance and allylic 1,3-strain (Figure 1). Our study suggests that these PTA molecules are highly structured in solution and can be used to identify protein ligands. We believe that these molecules can be a future source of high-affinity protein ligands. Here we describe a synthetic method combining the power of both split-and-pool and sub-monomer strategies to synthesize a sample one-bead one-compound (OBOC) library of PTAs.
Chemistry, Issue 88, Split-and-pool synthesis, peptide tertiary amide, PTA, peptoid, high-throughput screening, combinatorial library, solid phase, triphosgene (BTC), one-bead one-compound, OBOC
Scalable High Throughput Selection From Phage-displayed Synthetic Antibody Libraries
Institutions: The Recombinant Antibody Network, University of Toronto, University of California, San Francisco at Mission Bay, The University of Chicago.
The demand for antibodies that fulfill the needs of both basic and clinical research applications is high and will increase dramatically in the future. However, it is apparent that traditional monoclonal technologies alone are not up to this task. This has led to the development of alternative methods to satisfy the demand for high-quality, renewable affinity reagents against all accessible elements of the proteome. Toward this end, high-throughput methods for conducting selections from phage-displayed synthetic antibody libraries have been devised for applications involving diverse antigens and optimized for rapid throughput and success. Herein, a protocol is described in detail that illustrates, with video demonstration, the parallel selection of Fab-phage clones from high-diversity libraries against hundreds of targets using either a manual 96-channel liquid handler or an automated robotics system. Using this protocol, a single user can generate hundreds of antigens, select antibodies against them in parallel and validate antibody binding within 6-8 weeks. Highlighted are: i) a viable antigen format, ii) pre-selection antigen characterization, iii) critical steps that influence the selection of specific and high-affinity clones, and iv) ways of monitoring selection effectiveness and early-stage antibody clone characterization. With this approach, we have obtained synthetic antibody fragments (Fabs) to many target classes including single-pass membrane receptors, secreted protein hormones, and multi-domain intracellular proteins. These fragments are readily converted to full-length antibodies and have been validated to exhibit high affinity and specificity. Further, they have been demonstrated to be functional in a variety of standard immunoassays including Western blotting, ELISA, cellular immunofluorescence, immunoprecipitation and related assays.
This methodology will accelerate antibody discovery and ultimately bring us closer to realizing the goal of generating renewable, high quality antibodies to the proteome.
Immunology, Issue 95, Bacteria, Viruses, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Life Sciences (General), phage display, synthetic antibodies, high throughput, antibody selection, scalable methodology
Massively Parallel Reporter Assays in Cultured Mammalian Cells
Institutions: Broad Institute.
The genetic reporter assay is a well-established and powerful tool for dissecting the relationship between DNA sequences and their gene regulatory activities. The potential throughput of this assay has, however, been limited by the need to individually clone and assay the activity of each sequence of interest, using protein fluorescence or enzymatic activity as a proxy for regulatory activity. Advances in high-throughput DNA synthesis and sequencing technologies have recently made it possible to overcome these limitations by multiplexing the construction and interrogation of large libraries of reporter constructs. This protocol describes the implementation of a Massively Parallel Reporter Assay (MPRA) that allows direct comparison of hundreds of thousands of putative regulatory sequences in a single cell culture dish.
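The MPRA readout described above boils down to comparing the sequencing counts of each element's tags in the mRNA pool against their counts in the plasmid (DNA) pool. The following minimal sketch illustrates that calculation; the tag sequences, tag-to-element map, and counts are hypothetical, and a real analysis would work from demultiplexed tag-sequencing reads.

```python
from collections import Counter

def mpra_activity(rna_tags, dna_tags, tag_to_element, pseudocount=1.0):
    """Estimate regulatory activity per element as the ratio of
    normalized mRNA tag counts to normalized plasmid (DNA) tag counts,
    averaged over the multiple tags coupled to each element."""
    rna, dna = Counter(rna_tags), Counter(dna_tags)
    rna_total, dna_total = sum(rna.values()), sum(dna.values())
    ratios = {}
    for tag, element in tag_to_element.items():
        r = (rna[tag] + pseudocount) / rna_total
        d = (dna[tag] + pseudocount) / dna_total
        ratios.setdefault(element, []).append(r / d)
    return {e: sum(v) / len(v) for e, v in ratios.items()}

# hypothetical example: element enhA drives more transcription than enhB
tag_map = {"AAAA": "enhA", "CCCC": "enhA", "GGGG": "enhB"}
rna = ["AAAA"] * 40 + ["CCCC"] * 40 + ["GGGG"] * 10
dna = ["AAAA"] * 30 + ["CCCC"] * 30 + ["GGGG"] * 30
scores = mpra_activity(rna, dna, tag_map)
print(scores["enhA"] > scores["enhB"])  # True
```

Averaging over several tags per element, as sketched here, buffers against tag-specific effects on transcript stability.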
Genetics, Issue 90, gene regulation, transcriptional regulation, sequence-activity mapping, reporter assay, library cloning, transfection, tag sequencing, mammalian cells
Purifying the Impure: Sequencing Metagenomes and Metatranscriptomes from Complex Animal-associated Samples
Institutions: San Diego State University, DOE Joint Genome Institute, University of Colorado, University of Colorado.
The accessibility of high-throughput sequencing has revolutionized many fields of biology. In order to better understand host-associated viral and microbial communities, a comprehensive workflow for DNA and RNA extraction was developed. The workflow concurrently generates viral and microbial metagenomes, as well as metatranscriptomes, from a single sample for next-generation sequencing. The coupling of these approaches provides an overview of both the taxonomical characteristics and the community-encoded functions. The presented methods use Cystic Fibrosis (CF) sputum, a problematic sample type, because it is exceptionally viscous and contains high amounts of mucins, free neutrophil DNA, and other unknown contaminants. The protocols described here target these problems and successfully recover viral and microbial DNA with minimal human DNA contamination. To complement the metagenomics studies, a metatranscriptomics protocol was optimized to recover both microbial and host mRNA containing relatively few ribosomal RNA (rRNA) sequences. An overview of the data characteristics is presented to serve as a reference for assessing the success of the methods. Additional CF sputum samples were also collected to (i) evaluate the consistency of the microbiome profiles across seven consecutive days within a single patient, and (ii) compare the consistency of the metagenomic approach to 16S ribosomal RNA gene-based sequencing. The results showed that daily fluctuation of microbial profiles without antibiotic perturbation was minimal, and the taxonomy profiles of the common CF-associated bacteria were highly similar between the 16S rDNA libraries and the metagenomes generated from the hypotonic lysis (HL)-derived DNA.
However, the differences between 16S rDNA taxonomical profiles generated from total DNA and from HL-derived DNA suggest that the hypotonic lysis and washing steps help remove not only human-derived DNA but also microbial extracellular DNA that may misrepresent the actual microbial profiles.
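Assessing the day-to-day consistency of microbiome profiles, as described above, amounts to comparing relative-abundance profiles between samples. A common summary is Bray-Curtis similarity; the sketch below uses hypothetical taxonomy profiles, not data from the study.

```python
def bray_curtis_similarity(p, q):
    """Bray-Curtis similarity between two relative-abundance profiles
    (dicts mapping taxon -> fraction of community); 1.0 = identical."""
    taxa = set(p) | set(q)
    shared = sum(min(p.get(t, 0.0), q.get(t, 0.0)) for t in taxa)
    total = sum(p.values()) + sum(q.values())
    return 2.0 * shared / total

# hypothetical profiles from two consecutive days in one patient
day1 = {"Pseudomonas": 0.70, "Streptococcus": 0.20, "Prevotella": 0.10}
day2 = {"Pseudomonas": 0.65, "Streptococcus": 0.25, "Prevotella": 0.10}
print(round(bray_curtis_similarity(day1, day2), 2))  # 0.95
```

A value near 1.0, as in this toy example, would be consistent with the minimal daily fluctuation reported above.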
Molecular Biology, Issue 94, virome, microbiome, metagenomics, metatranscriptomics, cystic fibrosis, mucosal-surface
Automating ChIP-seq Experiments to Generate Epigenetic Profiles on 10,000 HeLa Cells
Institutions: Diagenode S.A., Diagenode Inc.
Chromatin immunoprecipitation followed by next generation sequencing (ChIP-seq) is a technique of choice for studying protein-DNA interactions. ChIP-seq has been used for mapping protein-DNA interactions and histone modifications. The procedure is tedious and time consuming, and one of the major limitations is the requirement for large amounts of starting material, usually millions of cells. Automation of chromatin immunoprecipitation assays is possible when the procedure is based on the use of magnetic beads. Successful automated protocols for chromatin immunoprecipitation and library preparation have been specifically designed on a commercially available robotic liquid handling system dedicated mainly to automating epigenetic assays. First, automated ChIP-seq assays were validated using antibodies directed against various histone modifications, followed by optimization of the automated protocols to perform chromatin immunoprecipitation and library preparation starting with low cell numbers. The goal of these experiments is to provide a valuable tool for future epigenetic analysis of specific cell types, sub-populations, and biopsy samples.
Molecular Biology, Issue 94, Automation, chromatin immunoprecipitation, low DNA amounts, histone antibodies, sequencing, library preparation
Specificity Analysis of Protein Lysine Methyltransferases Using SPOT Peptide Arrays
Institutions: Stuttgart University.
Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins; however, to date only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools to characterize the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membrane using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, by employing peptide arrays we showed that NSD1 methylates K44 of H4 instead of the reported H4K20, and in addition that H1.5K168 is the highly preferred substrate over the previously known H3K36. Hence, peptide arrays are powerful tools for biochemically characterizing PKMTs.
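A specificity profile of the kind described above can be derived by weighting the residues at each peptide position by the methylation signal of the corresponding array spot. The sketch below is not part of the published protocol; the peptide sequences and spot intensities are hypothetical.

```python
def specificity_profile(peptides, signals):
    """Given equal-length peptide sequences and their methylation
    signals from a SPOT array, return, for each position, the fraction
    of total signal carried by each residue (a crude position profile)."""
    length = len(peptides[0])
    profile = [dict() for _ in range(length)]
    for pep, sig in zip(peptides, signals):
        for i, aa in enumerate(pep):
            profile[i][aa] = profile[i].get(aa, 0.0) + sig
    for pos in profile:  # normalize each position to fractions
        total = sum(pos.values())
        for aa in pos:
            pos[aa] /= total
    return profile

# hypothetical spots: peptides with G at position 2 methylate best
peps = ["AGKSA", "AAKSA", "AGRSA", "TGKSA"]
sigs = [0.90, 0.30, 0.05, 0.80]
prof = specificity_profile(peps, sigs)
best_pos1 = max(prof[1], key=prof[1].get)
print(best_pos1)  # prints: G
```

Scanning such a profile position by position reveals which flanking residues the enzyme prefers around the target lysine.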
Biochemistry, Issue 93, Peptide arrays, solid phase peptide synthesis, SPOT synthesis, protein lysine methyltransferases, substrate specificity profile analysis, lysine methylation
Enhanced Reduced Representation Bisulfite Sequencing for Assessment of DNA Methylation at Base Pair Resolution
Institutions: Weill Cornell Medical College, Weill Cornell Medical College, Weill Cornell Medical College, University of Michigan.
DNA methylation pattern mapping is heavily studied in normal and diseased tissues. A variety of methods have been established to interrogate the cytosine methylation patterns in cells. Reduced representation bisulfite sequencing was developed to detect quantitative, base-pair-resolution cytosine methylation patterns at GC-rich genomic loci from a reduced representation of the whole genome. This is accomplished by combining restriction enzyme digestion with bisulfite conversion. Enhanced Reduced Representation Bisulfite Sequencing (ERRBS) increases the biologically relevant genomic loci covered and has been used to profile cytosine methylation in DNA from human, mouse and other organisms. ERRBS initiates with restriction enzyme digestion of DNA to generate low molecular weight fragments for use in library preparation. These fragments are subjected to standard library construction for next generation sequencing. Bisulfite conversion of unmethylated cytosines prior to the final amplification step allows for quantitative base resolution of cytosine methylation levels in covered genomic loci. The protocol can be completed within four days. Despite low complexity in the first three bases sequenced, ERRBS libraries yield high quality data when using a designated sequencing control lane. Mapping and bioinformatics analysis are then performed and yield data that can be easily integrated with a variety of genome-wide platforms. ERRBS can utilize small input material quantities, making it feasible to process human clinical samples and applicable to a range of research applications. The video produced demonstrates critical steps of the ERRBS protocol.
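The quantitative readout of bisulfite sequencing rests on a simple rule: after conversion, an unmethylated cytosine is read as T, while a methylated cytosine remains C. The methylation level at a covered position is therefore the fraction of C calls among C+T calls in the read pileup. A minimal illustration (with a hypothetical pileup) follows; production pipelines use dedicated bisulfite aligners rather than this toy calculation.

```python
def methylation_level(bases):
    """Fraction methylation at one cytosine position from bisulfite reads:
    after conversion, unmethylated C is read as T; methylated C stays C."""
    c = bases.count("C")
    t = bases.count("T")
    if c + t == 0:
        return None  # no informative coverage at this position
    return c / (c + t)

# hypothetical pileup at a CpG covered by 10 reads
reads_at_site = "CCCCCCTTTT"
print(methylation_level(reads_at_site))  # 0.6
```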
Genetics, Issue 96, Epigenetics, bisulfite sequencing, DNA methylation, genomic DNA, 5-methylcytosine, high-throughput
Nucleocapsid Annealing-Mediated Electrophoresis (NAME) Assay Allows the Rapid Identification of HIV-1 Nucleocapsid Inhibitors
Institutions: University of Padova, SUNY Albany.
RNA or DNA molecules folded into stable three-dimensional structures are interesting targets in the development of antitumor or antiviral drugs. In the case of HIV-1, several nucleic acids are recognized by viral proteins involved in the regulation of virus activity. The nucleocapsid protein NCp7 (NC) is a key protein regulating several processes during virus replication. NC is in fact a chaperone that destabilizes the secondary structures of RNA and DNA and facilitates their annealing. The inactivation of NC is a new approach and an interesting target for anti-HIV therapy. The Nucleocapsid Annealing-Mediated Electrophoresis (NAME) assay was developed to identify molecules able to inhibit the melting and annealing, by the nucleocapsid protein of HIV-1, of RNA and DNA folded into thermodynamically stable three-dimensional conformations, such as the hairpin structures of the TAR and cTAR elements of HIV. The new assay employs either the recombinant or the synthetic protein, and oligonucleotides, without the need for prior labeling. The analysis of the results is achieved by standard polyacrylamide gel electrophoresis (PAGE) followed by conventional nucleic acid staining. The protocol reported in this work describes how to perform the NAME assay with the full-length protein or its truncated version lacking the basic N-terminal domain, both competent as nucleic acid chaperones, and how to assess the inhibition of NC chaperone activity by a threading intercalator. Moreover, NAME can be performed in two different modes, useful for obtaining indications of the putative mechanism of action of the identified NC inhibitors.
Immunology, Issue 95, HIV-1, Nucleocapsid protein, NCp7, TAR-RNA, DNA, oligonucleotides, annealing, Gel electrophoresis, NAME
Genome-wide Snapshot of Chromatin Regulators and States in Xenopus Embryos by ChIP-Seq
Institutions: MRC National Institute for Medical Research.
The recruitment of chromatin regulators and the assignment of chromatin states to specific genomic loci are pivotal to cell fate decisions and tissue and organ formation during development. Determining the locations and levels of such chromatin features in vivo will provide valuable information about the spatio-temporal regulation of genomic elements, and will support aspirations to mimic embryonic tissue development in vitro. The most commonly used method for genome-wide and high-resolution profiling is chromatin immunoprecipitation followed by next-generation sequencing (ChIP-Seq). This protocol outlines how yolk-rich embryos such as those of the frog Xenopus can be processed for ChIP-Seq experiments, and it offers simple command lines for post-sequencing analysis. Because of the high efficiency with which the protocol extracts nuclei from formaldehyde-fixed tissue, the method allows easy upscaling to obtain enough ChIP material for genome-wide profiling. Our protocol has been used successfully to map various DNA-binding proteins such as transcription factors, signaling mediators, components of the transcription machinery, chromatin modifiers and post-translational histone modifications, and for this to be done at various stages of embryogenesis. Lastly, this protocol should be widely applicable to other model and non-model organisms as more and more genome assemblies become available.
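One common post-sequencing summary mentioned in the keywords above is the metagene: average ChIP-Seq coverage in a window centered on a set of anchor points such as transcription start sites. The sketch below illustrates the idea on a tiny hypothetical coverage track; it is not the command-line pipeline from the protocol, which operates on mapped reads at genome scale.

```python
def metagene_profile(coverage, anchors, flank=3):
    """Average per-base coverage in a window of +/- flank bases around
    each anchor position (e.g., a TSS); coverage is a per-base list."""
    width = 2 * flank + 1
    profile = [0.0] * width
    n = 0
    for a in anchors:
        if a - flank < 0 or a + flank >= len(coverage):
            continue  # skip windows running off the end of the track
        for i in range(width):
            profile[i] += coverage[a - flank + i]
        n += 1
    return [v / n for v in profile]

# toy coverage track with enrichment at two anchor sites (positions 5 and 15)
cov = [0, 0, 1, 2, 5, 9, 5, 2, 1, 0, 0, 0, 1, 3, 6, 10, 6, 3, 1, 0]
prof = metagene_profile(cov, [5, 15])
print(prof[3] == max(prof))  # the averaged peak is centered on the anchor: True
```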
Developmental Biology, Issue 96, Chromatin immunoprecipitation, next-generation sequencing, ChIP-Seq, developmental biology, Xenopus embryos, cross-linking, transcription factor, post-sequencing analysis, DNA occupancy, metagene, binding motif, GO term
Novel Atomic Force Microscopy Based Biopanning for Isolation of Morphology Specific Reagents against TDP-43 Variants in Amyotrophic Lateral Sclerosis
Institutions: Arizona State University, Georgetown University Medical Center, Georgetown University Medical Center.
Protein variants play critical roles in many diseases, including TDP-43 in Amyotrophic Lateral Sclerosis (ALS), alpha-synuclein in Parkinson's disease, and beta-amyloid and tau in Alzheimer's disease. It is therefore important to develop morphology-specific reagents that can selectively target these disease-specific protein variants, both to study their role in disease pathology and for potential diagnostic and therapeutic applications. We have developed novel atomic force microscopy (AFM) based biopanning techniques that enable isolation of reagents that selectively recognize disease-specific protein variants. There are two key phases involved in the process: the negative and positive panning phases. During the negative panning phase, phages that are reactive to off-target antigens are eliminated through multiple rounds of subtractive panning utilizing a series of carefully selected off-target antigens. A key feature of the negative panning phase is the use of AFM imaging to monitor the process and confirm that all undesired phage particles are removed. For the positive panning phase, the target antigen of interest is fixed on a mica surface, and bound phages are eluted and screened to identify those that selectively bind the target antigen. The target protein variant does not need to be purified, provided that the appropriate negative panning controls have been used. Even target protein variants that are present at only very low concentrations in complex biological material can be utilized in the positive panning step. Through application of this technology, we acquired antibodies to protein variants of TDP-43 that are selectively found in human ALS brain tissue. We expect that this protocol should be applicable to generating reagents that selectively bind protein variants present in a wide variety of different biological processes and diseases.
Bioengineering, Issue 96, Amyotrophic Lateral Sclerosis, TDP-43, Biopanning, Atomic Force Microscopy, scFv, Neurodegenerative diseases
A High-throughput-compatible FRET-based Platform for Identification and Characterization of Botulinum Neurotoxin Light Chain Modulators
Institutions: The Scripps Research Institute, The Scripps Research Institute.
Botulinum neurotoxin (BoNT) is a potent and potentially lethal bacterial toxin that binds to host motor neurons, is internalized into the cell, and cleaves intracellular proteins that are essential for neurotransmitter release. BoNT is comprised of a heavy chain (HC), which mediates host cell binding and internalization, and a light chain (LC), which cleaves intracellular host proteins essential for acetylcholine release. While therapies that inhibit toxin binding/internalization have a small time window of administration, compounds that target intracellular LC activity have a much larger time window of administration, which is particularly relevant given the extremely long half-life of the toxin. In recent years, small molecules have been heavily analyzed as potential LC inhibitors based on their increased cellular permeability relative to larger therapeutics (peptides, aptamers, etc.). Lead identification often involves high-throughput screening (HTS), in which large libraries of small molecules are screened based on their ability to modulate therapeutic target function. Here we describe a FRET-based assay with a commercial BoNT/A LC substrate and recombinant LC that can be automated for HTS of potential BoNT inhibitors. Moreover, we describe a manual technique that can be used for follow-up secondary screening, or for comparing the potency of several candidate compounds.
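In a screen like this, each well's fluorescence is typically converted to percent inhibition relative to an uninhibited enzyme control and a substrate-only background. The sketch below shows that normalization for a FRET substrate whose cleavage de-quenches the donor (so more cleavage means more signal); the plate-reader values are hypothetical and the formula is a generic HTS normalization, not a prescription from the protocol.

```python
def percent_inhibition(f_compound, f_enzyme_only, f_no_enzyme):
    """Percent inhibition of BoNT/A LC from endpoint fluorescence.
    Cleavage of the FRET substrate de-quenches the donor, so a lower
    signal relative to the uninhibited control means more inhibition.
    f_no_enzyme is the substrate-only background."""
    window = f_enzyme_only - f_no_enzyme  # assay window (max cleavage signal)
    return 100.0 * (1.0 - (f_compound - f_no_enzyme) / window)

# hypothetical plate-reader values (arbitrary fluorescence units)
print(percent_inhibition(f_compound=4000, f_enzyme_only=7000, f_no_enzyme=1000))
# 50.0
```

The same normalization applied across a compound dilution series yields the dose-response curve used to rank candidate inhibitors.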
Chemistry, Issue 82, BoNT/A, botulinum neurotoxin, high-throughput screening, FRET, inhibitor, FRET peptide substrate, activator
Microwave-assisted Functionalization of Poly(ethylene glycol) and On-resin Peptides for Use in Chain Polymerizations and Hydrogel Formation
Institutions: University of Rochester, University of Rochester, University of Rochester Medical Center.
One of the main benefits of using poly(ethylene glycol) (PEG) macromers in hydrogel formation is synthetic versatility. The ability to draw from a large variety of PEG molecular weights and configurations (arm number, arm length, and branching pattern) affords researchers tight control over resulting hydrogel structures and properties, including Young's modulus and mesh size. This video will illustrate a rapid, efficient, solvent-free, microwave-assisted method to methacrylate PEG precursors into poly(ethylene glycol) dimethacrylate (PEGDM). This synthetic method provides much-needed starting materials for applications in drug delivery and regenerative medicine. The demonstrated method is superior to traditional methacrylation methods as it is significantly faster and simpler, as well as more economical and environmentally friendly, using smaller amounts of reagents and solvents. We will also demonstrate an adaptation of this technique for on-resin methacrylamide functionalization of peptides. This on-resin method allows the N-terminus of peptides to be functionalized with methacrylamide groups prior to deprotection and cleavage from resin. This allows for selective addition of methacrylamide groups to the N-termini of the peptides while amino acids with reactive side groups (e.g., the primary amine of lysine, the primary alcohol of serine, the secondary alcohol of threonine, and the phenol of tyrosine) remain protected, preventing functionalization at multiple sites. This article will detail common analytical methods (proton nuclear magnetic resonance spectroscopy (1H-NMR) and matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-ToF)) to assess the efficiency of the functionalizations. Common pitfalls and suggested troubleshooting methods will be addressed, as will modifications of the technique which can be used to further tune macromer functionality and the resulting hydrogel physical and chemical properties. Use of the synthesized products for the formation of hydrogels for drug delivery and cell-material interaction studies will be demonstrated, with particular attention paid to modifying hydrogel composition to affect mesh size, control hydrogel stiffness and tune drug release.
Chemistry, Issue 80, Poly(ethylene glycol), peptides, polymerization, polymers, methacrylation, peptide functionalization, 1H-NMR, MALDI-ToF, hydrogels, macromer synthesis
Molecular Evolution of the Tre Recombinase
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed, molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells.
We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. Because the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. These subsets, acting as intermediates, did show recombination activity. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase -- Tre -- had 19 amino acid changes compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1-infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution
High Throughput Screening of Fungal Endoglucanase Activity in Escherichia coli
Institutions: California Institute of Technology, California Institute of Technology.
Cellulase enzymes (endoglucanases, cellobiohydrolases, and β-glucosidases) hydrolyze cellulose into component sugars, which in turn can be converted into fuel alcohols [1]. The potential for enzymatic hydrolysis of cellulosic biomass to provide renewable energy has intensified efforts to engineer cellulases for economical fuel production [2]. Of particular interest are fungal cellulases [3-8], which are already being used industrially for food and textile processing.
Identifying active variants among a library of mutant cellulases is critical to the engineering process; active mutants can be further tested for improved properties and/or subjected to additional mutagenesis. Efficient engineering of fungal cellulases has been hampered by a lack of genetic tools for the native organisms and by difficulties in expressing the enzymes in heterologous hosts. Recently, Morikawa and coworkers developed a method for expressing in E. coli the catalytic domains of endoglucanases from H. jecorina [3,9], an important industrial fungus with the capacity to secrete cellulases in large quantities. Functional E. coli expression has also been reported for cellulases from other fungi, including Macrophomina phaseolina [10] and Phanerochaete chrysosporium [11-12].
We present a method for high throughput screening of fungal endoglucanase activity in E. coli (Fig. 1). This method uses the common microbial dye Congo Red (CR) to visualize enzymatic degradation of carboxymethyl cellulose (CMC) by cells growing on solid medium. The activity assay requires inexpensive reagents, minimal manipulation, and gives unambiguous results as zones of degradation ("halos") at the colony site. Although a quantitative measure of enzymatic activity cannot be determined by this method, we have found that halo size correlates with total enzymatic activity in the cell. Further characterization of individual positive clones will determine relative protein fitness.
Traditional bacterial whole-cell CMC/CR activity assays [13] involve pouring agar containing CMC onto colonies, which is subject to cross-contamination, or incubating cultures in CMC agar wells, which is less amenable to large-scale experimentation. Here we report an improved protocol that modifies existing wash methods [14] for cellulase activity: cells grown on CMC agar plates are removed prior to CR staining. Our protocol significantly reduces cross-contamination and is highly scalable, allowing the rapid screening of thousands of clones. In addition to H. jecorina enzymes, we have expressed and screened endoglucanase variants from Thermoascus aurantiacus and Penicillium decumbens (shown in Figure 2), suggesting that this protocol is applicable to enzymes from a range of organisms.
Molecular Biology, Issue 54, cellulase, endoglucanase, CMC, Congo Red
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Institutions: University of Illinois at Chicago and Rehabilitation Institute of Chicago, Rehabilitation Institute of Chicago.
Recent research testing interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. This can enable researchers across the globe to perform similar experiments using shared code, so that modular "switching out" of one robot for another does not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, and knowledge of their structure and function is needed to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. This structural information can be collected by users from experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Single Read and Paired End mRNA-Seq Illumina Libraries from 10 Nanograms Total RNA
Institutions: Morgridge Institute for Research, University of Wisconsin, University of California.
Whole transcriptome sequencing by mRNA-Seq is now used extensively to perform global gene expression, mutation, allele-specific expression and other genome-wide analyses. mRNA-Seq even opens the gate for gene expression analysis of non-sequenced genomes. mRNA-Seq offers high sensitivity, a large dynamic range and allows measurement of transcript copy numbers in a sample. Illumina's Genome Analyzer performs sequencing of a large number (>10^7) of relatively short sequence reads (<150 bp). The "paired end" approach, wherein a single long fragment is sequenced at both its ends, allows for tracking alternate splice junctions, insertions and deletions, and is useful for de novo assembly.
One of the major challenges faced by researchers is a limited amount of starting material. For example, in experiments where cells are harvested by laser micro-dissection, the available starting total RNA may measure in nanograms. Preparation of mRNA-Seq libraries from such samples has been described [1,2], but involves significant PCR amplification that may introduce bias. Other RNA-Seq library construction procedures with minimal PCR amplification have been published [3,4], but require microgram amounts of starting total RNA.
Here we describe a protocol for mRNA-Seq library preparation on the Illumina Genome Analyzer II platform that avoids significant PCR amplification and requires only 10 nanograms of total RNA. While this protocol has been described previously and validated for single-end sequencing [5], where it was shown to produce directional libraries without introducing significant amplification bias, here we validate it further for use as a paired-end protocol. We selectively amplify polyadenylated messenger RNAs from starting total RNA using the T7-based Eberwine linear amplification method, coined "T7LA" (T7 linear amplification). The amplified poly-A mRNAs are fragmented, reverse transcribed and adapter-ligated to produce the final sequencing library. For both single-read and paired-end runs, sequences are mapped to the human transcriptome [6] and normalized so that data from multiple runs can be compared. We report gene expression measurements in units of transcripts per million (TPM), which is a superior measure to RPKM when comparing samples [7].
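The TPM unit divides each transcript's read count by its length and then rescales so that the values for a sample sum to one million, which is what makes samples directly comparable. A minimal sketch (the function name and plain-list interface are my own, not from the protocol):

```python
def tpm(counts, lengths_kb):
    """Transcripts per million: normalize each transcript's read count by
    its length (here in kilobases), then rescale the resulting rates so
    they sum to one million."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]
    total = sum(rates)
    return [r / total * 1e6 for r in rates]

# A 2 kb transcript with twice the reads of a 1 kb transcript has the
# same length-normalized expression.
values = tpm([100, 200, 300], [1.0, 2.0, 1.0])  # sums to 1,000,000
```

Because every sample is rescaled to the same total, TPM values can be compared across runs, whereas RPKM totals vary from sample to sample.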
Molecular Biology, Issue 56, Genetics, mRNA-Seq, Illumina-Seq, gene expression profiling, high throughput sequencing
Engineering and Evolution of Synthetic Adeno-Associated Virus (AAV) Gene Therapy Vectors via DNA Family Shuffling
Institutions: Heidelberg University, Heidelberg University.
Adeno-associated viral (AAV) vectors represent some of the most potent and promising vehicles for therapeutic human gene transfer due to a unique combination of beneficial properties [1]. These include the apathogenicity of the underlying wildtype viruses and the highly advanced methodologies for production of high-titer, high-purity and clinical-grade recombinant vectors [2]. A further particular advantage of the AAV system over other viruses is the availability of a wealth of naturally occurring serotypes which differ in essential properties yet can all be easily engineered as vectors using a common protocol [1,2]. Moreover, a number of groups including our own have recently devised strategies to use these natural viruses as templates for the creation of synthetic vectors which either combine the assets of multiple input serotypes, or which enhance the properties of a single isolate. The respective technologies to achieve these goals are DNA family shuffling [3] (fragmentation of various AAV capsid genes followed by their re-assembly based on partial homologies, typically >80% for most AAV serotypes) and peptide display [4,5] (insertion of usually seven amino acids into an exposed loop of the viral capsid, where the peptide ideally mediates re-targeting to a desired cell type). For maximum success, both methods are applied in a high-throughput fashion whereby the protocols are up-scaled to yield libraries of around one million distinct capsid variants. Each clone then comprises a unique combination of numerous parental viruses (DNA shuffling approach) or contains a distinctive peptide within the same viral backbone (peptide display approach). The final step is iterative selection of such a library on target cells in order to enrich for individual capsids fulfilling most or ideally all requirements of the selection process. The latter preferably combines positive pressure, such as growth on a certain cell type of interest, with negative selection, for instance elimination of all capsids reacting with anti-AAV antibodies. This combination increases the chance that synthetic capsids surviving the selection match the needs of the given application in a manner that would probably not have been found in any naturally occurring AAV isolate. Here, we focus on the DNA family shuffling method as the theoretically and experimentally more challenging of the two technologies. We describe and demonstrate all essential steps for the generation and selection of shuffled AAV libraries (Fig. 1), and then discuss the pitfalls and critical aspects of the protocols that one needs to be aware of in order to succeed with molecular AAV evolution.
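DNA family shuffling reassembles capsid-gene fragments at regions of partial homology, so every library clone ends up a mosaic of its parents. As a loose illustration of that outcome only, the toy model below builds a chimera from pre-aligned parental sequences by switching parents at random crossover points; it is a gross simplification of the actual annealing-based reassembly, and all names are my own:

```python
import random

def shuffle_chimera(parents, n_crossovers=3, rng=None):
    """Toy model of DNA family shuffling: build one chimeric sequence by
    switching between equal-length, pre-aligned parental sequences at
    random crossover points. Real shuffling instead relies on fragment
    re-annealing at regions of partial (>~80%) homology."""
    rng = rng or random.Random(0)
    length = len(parents[0])
    points = sorted(rng.sample(range(1, length), n_crossovers))
    segments, start = [], 0
    parent = rng.randrange(len(parents))
    for p in points + [length]:
        segments.append(parents[parent][start:p])  # copy one parental block
        start, parent = p, rng.randrange(len(parents))
    return "".join(segments)

parents = ["ATGCATGCATGCATGCATGC", "GTCAGTCAGTCAGTCAGTCA"]
chimera = shuffle_chimera(parents)  # same length, blocks from both parents
```

Repeating this for ~10^6 draws mirrors, very roughly, how a shuffled library samples distinct parental combinations.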
Immunology, Issue 62, Adeno-associated virus, AAV, gene therapy, synthetic biology, viral vector, molecular evolution, DNA shuffling
Combinatorial Synthesis of and High-throughput Protein Release from Polymer Film and Nanoparticle Libraries
Institutions: Iowa State University.
Polyanhydrides are a class of biomaterials with excellent biocompatibility and drug delivery capabilities. While they have been studied extensively with conventional one-sample-at-a-time synthesis techniques, a more recent high-throughput approach has been developed, enabling the synthesis and testing of large libraries of polyanhydrides [1]. This will facilitate a more efficient optimization and design process for these biomaterials in drug and vaccine delivery applications. The method in this work describes the combinatorial synthesis of biodegradable polyanhydride film and nanoparticle libraries and the high-throughput detection of protein release from these libraries. In this robotically operated method (Figure 1), linear actuators and syringe pumps are controlled by LabVIEW, which enables a hands-free automated protocol, eliminating user error. Furthermore, this method enables the rapid fabrication of micro-scale polymer libraries, reducing the batch size while resulting in the creation of multivariant polymer systems. This combinatorial approach facilitates the synthesis of up to 15 different polymers in the amount of time it would take to synthesize one polymer conventionally. In addition, the combinatorial polymer library can be fabricated into blank or protein-loaded geometries, including films or nanoparticles, upon dissolution of the polymer library in a solvent and precipitation into a non-solvent (for nanoparticles) or by vacuum drying (for films). Upon loading a fluorochrome-conjugated protein into the polymer libraries, protein release kinetics can be assessed at high throughput using a fluorescence-based detection method (Figure 2), as described previously [1]. This combinatorial platform has been validated against conventional methods [2], and the polyanhydride film and nanoparticle libraries have been characterized with ¹H NMR and FTIR. The libraries have been screened for protein release kinetics, stability and antigenicity; in vitro cellular toxicity, cytokine production, surface marker expression, adhesion, proliferation and differentiation; and in vivo biodistribution and mucoadhesion [1-11]. The combinatorial method developed herein enables high-throughput polymer synthesis and fabrication of protein-loaded nanoparticle and film libraries, which can, in turn, be screened in vitro and in vivo for optimization of biomaterial performance.
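In fluorescence-based release assays, raw plate-reader signal is typically converted to a cumulative release fraction against a blank and a 100%-release control. A minimal sketch of that conversion, assuming hypothetical readings and control values rather than the study's actual detection pipeline:

```python
def fraction_released(readings, blank, total_release):
    """Convert raw fluorescence readings into the cumulative fraction of
    protein released, using a blank well for background and a
    100%-release control (e.g. fully dissolved particles) for scale."""
    span = total_release - blank
    return [(r - blank) / span for r in readings]

# Hypothetical time course: background signal 10, full-release signal 100.
released = fraction_released([10, 55, 100], blank=10, total_release=100)
```

Running the same conversion over every well of a library plate is what makes the fluorescence readout a high-throughput comparison of release kinetics across polymer chemistries.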
Bioengineering, Issue 67, combinatorial, high-throughput, polymer synthesis, polyanhydrides, nanoparticle fabrication, release kinetics, protein delivery
RNA-seq Analysis of Transcriptomes in Thrombin-treated and Control Human Pulmonary Microvascular Endothelial Cells
Institutions: Children's Mercy Hospital and Clinics, School of Medicine, University of Missouri-Kansas City.
The characterization of gene expression in cells via measurement of mRNA levels is a useful tool in determining how the transcriptional machinery of the cell is affected by external signals (e.g., drug treatment), or how cells differ between a healthy state and a diseased state. With the advent and continuous refinement of next-generation DNA sequencing technology, RNA-sequencing (RNA-seq) has become an increasingly popular method of transcriptome analysis: it can catalog all species of transcripts, determine the transcriptional structure of all expressed genes and quantify the changing expression levels of the total set of transcripts in a given cell, tissue or organism [1,2]. RNA-seq is gradually replacing DNA microarrays as the preferred method for transcriptome analysis because it profiles the complete transcriptome, provides digital data (the copy number of any transcript) and does not rely on any known genomic sequence [3].
Here, we present a complete and detailed protocol for applying RNA-seq to profile transcriptomes in human pulmonary microvascular endothelial cells with or without thrombin treatment. This protocol is based on our recently published study, "RNA-seq Reveals Novel Transcriptome of Genes and Their Isoforms in Human Pulmonary Microvascular Endothelial Cells Treated with Thrombin" [4], in which we performed the first complete transcriptome analysis of human pulmonary microvascular endothelial cells treated with thrombin using RNA-seq. That study yielded unprecedented resources for further experimentation to gain insights into the molecular mechanisms underlying thrombin-mediated endothelial dysfunction in the pathogenesis of inflammatory conditions, cancer, diabetes and coronary heart disease, and it provides potential new leads for therapeutic targets in those diseases.
The descriptive text of this protocol is divided into four parts. The first part describes the treatment of human pulmonary microvascular endothelial cells with thrombin and RNA isolation, quality analysis and quantification. The second part describes library construction and sequencing. The third part describes the data analysis. The fourth part describes an RT-PCR validation assay. Representative results of several key steps are displayed. Useful tips or precautions to boost success in key steps are provided in the Discussion section. Although this protocol uses human pulmonary microvascular endothelial cells treated with thrombin, it can be generalized to profile transcriptomes in both mammalian and non-mammalian cells and in tissues treated with different stimuli or inhibitors, or to compare transcriptomes in cells or tissues between a healthy state and a disease state.
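Comparing a treated transcriptome against a control one ultimately reduces to a per-gene fold-change calculation on normalized expression values. A minimal sketch, with a hypothetical expression value for the thrombin receptor gene F2R standing in for real data (the function, pseudocount choice and numbers are mine, not the study's analysis pipeline):

```python
import math

def log2_fold_changes(treated, control, pseudocount=1.0):
    """Per-gene log2 fold change (treated vs. control) over dicts of
    normalized expression values; the pseudocount keeps genes with zero
    expression from producing infinite ratios."""
    return {gene: math.log2((treated[gene] + pseudocount) /
                            (control[gene] + pseudocount))
            for gene in treated}

# Hypothetical normalized expression (e.g. in TPM) for one gene.
fc = log2_fold_changes({"F2R": 31.0}, {"F2R": 7.0})  # log2(32/8) = 2.0
```

Genes passing a fold-change (and statistical) threshold would then be carried into the RT-PCR validation step described in the fourth part of the protocol.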
Genetics, Issue 72, Molecular Biology, Immunology, Medicine, Genomics, Proteins, RNA-seq, Next Generation DNA Sequencing, Transcriptome, Transcription, Thrombin, Endothelial cells, high-throughput, DNA, genomic DNA, RT-PCR, PCR
Reconstitution of a Kv Channel into Lipid Membranes for Structural and Functional Studies
Institutions: University of Texas Southwestern Medical Center at Dallas.
To study the lipid-protein interaction in a reductionistic fashion, it is necessary to incorporate the membrane proteins into membranes of well-defined lipid composition. We are studying the lipid-dependent gating effects in a prototype voltage-gated potassium (Kv) channel, and have worked out detailed procedures to reconstitute the channels into different membrane systems. Our reconstitution procedures take into consideration both the detergent-induced fusion of vesicles and the fusion of protein/detergent micelles with the lipid/detergent mixed micelles, as well as the importance of reaching an equilibrium distribution of lipids among the protein/detergent/lipid and the detergent/lipid mixed micelles. Our data suggest that the channels insert into the lipid vesicles in relatively random orientations, and that the reconstitution efficiency is so high that no detectable protein aggregates were seen in fractionation experiments. We have utilized the reconstituted channels to determine the conformational states of the channels in different lipids, record electrical activities of a small number of channels incorporated in planar lipid bilayers, screen for conformation-specific ligands from a phage-displayed peptide library, and support the growth of 2D crystals of the channels in membranes. The reconstitution procedures described here may be adapted for studying other membrane proteins in lipid bilayers, especially for the investigation of lipid effects on eukaryotic voltage-gated ion channels.
Molecular Biology, Issue 77, Biochemistry, Genetics, Cellular Biology, Structural Biology, Biophysics, Membrane Lipids, Phospholipids, Carrier Proteins, Membrane Proteins, Micelles, Molecular Motor Proteins, life sciences, biochemistry, Amino Acids, Peptides, and Proteins, lipid-protein interaction, channel reconstitution, lipid-dependent gating, voltage-gated ion channel, conformation-specific ligands, lipids
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design, but they are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
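The sequence-selection stage searches sequence space for low-energy sequences. Protein WISDOM formulates this as a global optimization; the idea can be illustrated with a deliberately simple greedy local search over a toy, user-supplied energy function (every name here is my own, and a greedy search only reaches a local minimum):

```python
def greedy_design(seq, alphabet, energy):
    """Toy sequence selection: repeatedly accept any single-residue
    substitution that lowers the energy, until none improves it."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq)):
            for aa in alphabet:
                candidate = seq[:i] + [aa] + seq[i + 1:]
                if energy(candidate) < energy(seq):
                    seq, improved = candidate, True
    return "".join(seq)

# Toy energy: penalize every residue that is not alanine.
toy_energy = lambda s: sum(1 for aa in s if aa != "A")
designed = greedy_design("GGG", "AG", toy_energy)
```

Fold-specificity and binding-affinity stages would then re-rank the surviving sequences, since the lowest-energy sequence is not necessarily the best folder or binder.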
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples
Institutions: The University of British Columbia, Coastal Genomics, British Columbia Public Health Microbiology and Reference Laboratory.
Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments.
In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can also be applied to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements in read length and quality of the sequencing reads.
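The gel step recovers only fragments inside a defined window (500 to 800 bp in this protocol). Its selection logic can be mimicked in silico with a simple filter on fragment lengths; this is a toy sketch, not part of the Ranger workstation software:

```python
def size_select(fragment_lengths, low=500, high=800):
    """Keep only fragments whose length (in bp) falls inside the target
    window, mirroring what the automated gel size-selection step does
    physically (500-800 bp in this protocol)."""
    return [n for n in fragment_lengths if low <= n <= high]

kept = size_select([120, 510, 650, 799, 1200])  # [510, 650, 799]
```

Removing very short and very long fragments before sequencing is what drives the observed gains in read length and quality.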
Environmental Sciences, Issue 98, environmental samples, next-generation sequencing, DNA libraries, Ranger Technology, gel size selection, metagenomics, water samples