Genomes are organized into three-dimensional structures, adopting higher-order conformations inside the micron-sized nuclear spaces 7, 2, 12. Such architectures are not random and involve interactions between gene promoters and regulatory elements 13. The binding of transcription factors to specific regulatory sequences brings about a network of transcription regulation and coordination 1, 14.
Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET) was developed to identify these higher-order chromatin structures 5,6. Cells are fixed and interacting loci are captured by covalent DNA-protein cross-links. To minimize non-specific noise and reduce complexity, as well as to increase the specificity of the chromatin interaction analysis, chromatin immunoprecipitation (ChIP) is used against specific protein factors to enrich chromatin fragments of interest before proximity ligation. Ligation involving half-linkers subsequently forms covalent links between pairs of DNA fragments tethered together within individual chromatin complexes. The flanking MmeI restriction enzyme sites in the half-linkers allow extraction of paired end tag-linker-tag constructs (PETs) upon MmeI digestion. As the half-linkers are biotinylated, these PET constructs are purified using streptavidin-magnetic beads. The purified PETs are ligated with next-generation sequencing adaptors and a catalog of interacting fragments is generated via next-generation sequencers such as the Illumina Genome Analyzer. Mapping and bioinformatics analysis is then performed to identify ChIP-enriched binding sites and ChIP-enriched chromatin interactions 8.
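The final mapping step lends itself to a simple illustration. As a minimal sketch (not part of the published ChIA-PET pipeline; the function name, bin size, and coordinates are hypothetical, in Python for concreteness), mapped PETs can be binned and counted to tally the ligation evidence supporting each pairwise chromatin contact:

```python
from collections import Counter

def bin_pets(pets, bin_size=5000):
    """Count chromatin contacts between genomic bins from mapped PETs.

    Each PET is ((chrom1, pos1), (chrom2, pos2)): the mapped positions
    of the two tags flanking a ligation junction."""
    contacts = Counter()
    for (c1, p1), (c2, p2) in pets:
        b1 = (c1, p1 // bin_size)
        b2 = (c2, p2 // bin_size)
        # Order the pair so (A, B) and (B, A) count as the same contact
        contacts[tuple(sorted((b1, b2)))] += 1
    return contacts

pets = [(("chr1", 1000), ("chr1", 52000)),
        (("chr1", 2000), ("chr1", 51000)),
        (("chr1", 1500), ("chr2", 9000))]
counts = bin_pets(pets)
print(counts[(("chr1", 0), ("chr1", 10))])  # 2 PETs support this contact
```

Real analyses cluster PETs around ChIP-enriched anchors rather than uniform bins, but the counting logic is the same.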
We have produced a video to demonstrate critical aspects of the ChIA-PET protocol, particularly the preparation of the ChIP, as ChIP quality plays a major role in the outcome of a ChIA-PET library. Because the full protocol is very long, only the critical steps are shown in the video.
Measuring Attentional Biases for Threat in Children and Adults
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
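The latency comparison at the heart of the paradigm can be sketched as follows (a hypothetical Python example with made-up latencies; the statistical analyses in the actual studies may differ):

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for an unequal-variance comparison of two
    latency samples (e.g., threat vs. neutral targets)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / sqrt(stdev(a)**2 / na + stdev(b)**2 / nb)

# Hypothetical touch latencies in milliseconds
threat = [820, 790, 845, 805, 830]
neutral = [910, 955, 900, 940, 925]
t = welch_t(threat, neutral)
print(round(t, 2))  # negative: threat targets were found faster
```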
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
Infinium Assay for Large-scale SNP Genotyping Applications
Institutions: Oklahoma Medical Research Foundation.
Genotyping variants in the human genome has proven to be an efficient method to identify genetic associations with phenotypes. The distribution of variants within families or populations can facilitate identification of the genetic factors of disease. Illumina's panel of genotyping BeadChips allows investigators to genotype thousands or millions of single nucleotide polymorphisms (SNPs) or to analyze other genomic variants, such as copy number, across a large number of DNA samples. These SNPs can be spread throughout the genome or targeted in specific regions in order to maximize potential discovery. The Infinium assay has been optimized to yield high-quality, accurate results quickly. With proper setup, a single technician can process from a few hundred to over a thousand DNA samples per week, depending on the type of array. This protocol guides users through every step, starting with genomic DNA and ending with the scanning of the array. Using proprietary reagents, samples are amplified, fragmented, precipitated, resuspended, hybridized to the chip, extended by a single base, stained, and scanned on either an iScan or HiScan high-resolution optical imaging system. One overnight step is required to amplify the DNA. The DNA is denatured and isothermally amplified by whole-genome amplification; therefore, no PCR is required. Samples are hybridized to the arrays during a second overnight step. By the third day, the samples are ready to be scanned and analyzed. Amplified DNA may be stockpiled in large quantities, allowing bead arrays to be processed every day of the week, thereby maximizing throughput.
Basic Protocol, Issue 81, genomics, SNP, Genotyping, Infinium, iScan, HiScan, Illumina
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for the buffer's particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions; thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample).
Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
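The conversion from measured fluorescence to a potential activity rate can be sketched as follows (a Python illustration following the general form of common fluorometric soil-enzyme calculations; the function name, correction terms, and values are hypothetical and vary between protocols):

```python
def potential_activity(net_fluorescence, emission_coef, buffer_ml,
                       assay_ml, hours, dry_soil_g):
    """Potential enzyme activity in nmol h^-1 g^-1 dry soil.

    net_fluorescence: sample fluorescence minus substrate and soil controls
    emission_coef:    fluorescence units per nmol of free dye (e.g., MUB),
                      taken from a standard curve"""
    nmol_cleaved = net_fluorescence / emission_coef
    # Scale the amount cleaved in one assay well up to the whole slurry
    total_nmol = nmol_cleaved * (buffer_ml / assay_ml)
    return total_nmol / (hours * dry_soil_g)

rate = potential_activity(net_fluorescence=1500, emission_coef=50.0,
                          buffer_ml=91.0, assay_ml=0.2, hours=3.0,
                          dry_soil_g=1.0)
print(round(rate, 1))  # nmol per hour per gram dry soil
```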
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
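The combinatorial core of a DoE screen is the enumeration of candidate runs, which the software then prunes to an optimal subset. A minimal Python sketch (the factors and levels below are hypothetical, chosen only to echo the kinds of parameters discussed above):

```python
from itertools import product

# Hypothetical factors for a transient-expression DoE screen
factors = {
    "promoter": ["35S", "nos"],
    "incubation_temp_C": [22, 25],
    "plant_age_d": [35, 42],
}

# Full factorial design: every combination of factor levels is one run
full = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(full))  # 2 * 2 * 2 = 8 runs
```

DoE software typically starts from such an enumeration and selects a fraction of the runs that still allows main effects and chosen interactions to be estimated.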
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
BEST: Barcode Enabled Sequencing of Tetrads
Institutions: Pacific Northwest Diabetes Research Institute.
Tetrad analysis is a valuable tool for yeast genetics, but the laborious manual nature of the process has hindered its application on large scales. Barcode Enabled Sequencing of Tetrads (BEST)1 replaces the manual processes of isolating, disrupting and spacing tetrads. BEST isolates tetrads by virtue of a sporulation-specific GFP fusion protein that permits fluorescence-activated cell sorting of tetrads directly onto agar plates, where the ascus is enzymatically digested and the spores are disrupted and randomly arrayed by glass bead plating. The haploid colonies are then assigned sister spore relationships, i.e. information about which spores originated from the same tetrad, using molecular barcodes read during genotyping. By removing the bottleneck of manual dissection, hundreds or even thousands of tetrads can be isolated in minutes. Here we present a detailed description of the experimental procedures required to perform BEST in the yeast Saccharomyces cerevisiae, starting with a heterozygous diploid strain through the isolation of colonies derived from the haploid meiotic progeny.
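The sister-spore assignment step is, computationally, a grouping of colonies by shared barcode. A minimal Python sketch (hypothetical colony IDs and barcodes; the real genotyping pipeline is more involved):

```python
from collections import defaultdict

def group_spores(colonies):
    """Group haploid colonies into sister-spore sets by shared tetrad barcode.

    colonies: list of (colony_id, barcode) pairs read during genotyping."""
    tetrads = defaultdict(list)
    for colony_id, barcode in colonies:
        tetrads[barcode].append(colony_id)
    # A complete tetrad yields exactly four sister spores
    return {bc: ids for bc, ids in tetrads.items() if len(ids) == 4}

colonies = [("c1", "ACGT"), ("c2", "ACGT"), ("c3", "TTAG"),
            ("c4", "ACGT"), ("c5", "ACGT"), ("c6", "TTAG")]
complete = group_spores(colonies)
print(sorted(complete))  # only the ACGT tetrad has all four spores
```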
Genetics, Issue 87, Yeast, Tetrad, Genetics, DNA sequencing
Nanomanipulation of Single RNA Molecules by Optical Tweezers
Institutions: University at Albany, State University of New York.
A large portion of the human genome is transcribed but not translated. In this post-genomic era, regulatory functions of RNA have been shown to be increasingly important. As RNA function often depends on its ability to adopt alternative structures, it is difficult to predict RNA three-dimensional structures directly from sequence. Single-molecule approaches show potential to solve the problem of RNA structural polymorphism by monitoring molecular structures one molecule at a time. This work presents a method to precisely manipulate the folding and structure of single RNA molecules using optical tweezers. First, methods to synthesize molecules suitable for single-molecule mechanical work are described. Next, various calibration procedures to ensure the proper operation of the optical tweezers are discussed. Then, the experiments themselves are explained. To demonstrate the utility of the technique, results of mechanically unfolding RNA hairpins and a single RNA kissing complex are presented. In these examples, the nanomanipulation technique was used to study folding of each structural domain, including secondary and tertiary, independently. Lastly, the limitations and future applications of the method are discussed.
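Force-extension data from such pulling experiments are commonly fit with a worm-like chain model. As an illustration only (hypothetical contour and persistence lengths; the articles' own fitting procedures may differ), the Marko-Siggia interpolation formula in Python:

```python
from math import inf

def wlc_force(x, contour_nm, persistence_nm, kT_pN_nm=4.11):
    """Marko-Siggia worm-like chain interpolation: force in pN at
    end-to-end extension x (nm) for a polymer of given contour and
    persistence length, at kT = 4.11 pN nm (room temperature)."""
    r = x / contour_nm
    if r >= 1:
        return inf  # cannot stretch beyond the contour length
    return (kT_pN_nm / persistence_nm) * (0.25 / (1 - r)**2 - 0.25 + r)

# Hypothetical single-stranded RNA parameters
f = wlc_force(x=280.0, contour_nm=340.0, persistence_nm=1.0)
print(round(f, 1))  # piconewtons at this extension
```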
Bioengineering, Issue 90, RNA folding, single-molecule, optical tweezers, nanomanipulation, RNA secondary structure, RNA tertiary structure
Combined DNA-RNA Fluorescent In situ Hybridization (FISH) to Study X Chromosome Inactivation in Differentiated Female Mouse Embryonic Stem Cells
Institutions: Erasmus MC - University Medical Center.
Fluorescent in situ hybridization (FISH) is a molecular technique which enables the detection of nucleic acids in cells. DNA FISH is often used in cytogenetics and cancer diagnostics, and can detect aberrations of the genome, which often has important clinical implications. RNA FISH can be used to detect RNA molecules in cells and has provided important insights in regulation of gene expression. Combining DNA and RNA FISH within the same cell is technically challenging, as conditions suitable for DNA FISH might be too harsh for fragile, single stranded RNA molecules. We here present an easily applicable protocol which enables the combined, simultaneous detection of Xist RNA and DNA encoded by the X chromosomes. This combined DNA-RNA FISH protocol can likely be applied to other systems where both RNA and DNA need to be detected.
Biochemistry, Issue 88, Fluorescent in situ hybridization (FISH), combined DNA-RNA FISH, ES cell, cytogenetics, single cell analysis, X chromosome inactivation (XCI), Xist, Bacterial artificial chromosome (BAC), DNA-probe, Rnf12
Next-generation Sequencing of 16S Ribosomal RNA Gene Amplicons
Institutions: National Research Council Canada.
One of the major questions in microbial ecology is “who is there?” This question can be answered using various tools, but one of the long-standing gold standards is to sequence 16S ribosomal RNA (rRNA) gene amplicons generated by domain-level PCR reactions amplifying from genomic DNA. Traditionally, this was performed by cloning and Sanger (capillary electrophoresis) sequencing of PCR amplicons. The advent of next-generation sequencing has tremendously simplified and increased the sequencing depth for 16S rRNA gene sequencing. The introduction of benchtop sequencers now allows small labs to perform their 16S rRNA sequencing in-house in a matter of days. Here, an approach for 16S rRNA gene amplicon sequencing using a benchtop next-generation sequencer is detailed. The environmental DNA is first amplified by PCR using primers that contain sequencing adapters and barcodes. The amplicons are then coupled to spherical particles via emulsion PCR. The particles are loaded on a disposable chip and the chip is inserted in the sequencing machine, after which the sequencing is performed. The sequences are retrieved in fastq format, filtered, and the barcodes are used to establish the sample membership of the reads. The filtered and binned reads are then further analyzed using publicly available tools. An example analysis is given in which the reads are classified with a taxonomy-finding algorithm within the software package Mothur. The method outlined here is simple, inexpensive and straightforward, and should help smaller labs to take advantage of the ongoing genomic revolution.
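The demultiplexing step, assigning reads to samples by their barcode, can be sketched as follows (a minimal Python illustration with hypothetical barcodes and reads; production tools also handle barcode mismatches and quality filtering):

```python
def demultiplex(reads, barcodes):
    """Assign reads to samples by exact-match 5' barcode, then trim it.

    reads:    list of sequence strings
    barcodes: dict mapping barcode sequence -> sample name"""
    binned = {sample: [] for sample in barcodes.values()}
    unassigned = []
    for seq in reads:
        for bc, sample in barcodes.items():
            if seq.startswith(bc):
                binned[sample].append(seq[len(bc):])  # strip the barcode
                break
        else:
            unassigned.append(seq)
    return binned, unassigned

barcodes = {"ACGT": "soil", "TGCA": "water"}
reads = ["ACGTGGGAA", "TGCATTTCC", "ACGTCCCGG", "NNNNAAAA"]
binned, unassigned = demultiplex(reads, barcodes)
print(len(binned["soil"]), len(unassigned))  # 2 1
```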
Molecular Biology, Issue 90, Metagenomics, Bacteria, 16S ribosomal RNA gene, Amplicon sequencing, Next-generation sequencing, benchtop sequencers
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, due to technical constraints, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
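The fluctuation-test arithmetic can be illustrated with the classic Luria-Delbrück p0 estimator (a simplified Python sketch with hypothetical counts; published fluctuation analyses often use maximum-likelihood estimators instead):

```python
from math import log

def mutation_rate_p0(cultures_without_mutants, total_cultures,
                     cells_per_culture):
    """Estimate mutation rate per cell division via the Luria-Delbrück
    p0 method: m = -ln(p0), where p0 is the fraction of parallel
    cultures with zero mutants; rate = m / N cells per culture."""
    p0 = cultures_without_mutants / total_cultures
    m = -log(p0)  # expected number of mutation events per culture
    return m / cells_per_culture

rate = mutation_rate_p0(37, 100, 2e8)
print(f"{rate:.2e}")  # mutations per cell division
```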
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
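The ~10-30 nm precision quoted above follows from photon statistics; a common back-of-the-envelope estimate is the Thompson et al. (2002) localization-precision formula, sketched here in Python with hypothetical imaging parameters:

```python
from math import sqrt, pi

def localization_precision(s_nm, n_photons, pixel_nm, background):
    """Thompson et al. (2002) precision estimate for a single emitter.

    s_nm:       PSF standard deviation (nm)
    n_photons:  detected photons from the molecule
    pixel_nm:   camera pixel size (nm)
    background: background noise (photons per pixel)"""
    a2 = pixel_nm**2
    var = ((s_nm**2 + a2 / 12) / n_photons
           + 8 * pi * s_nm**4 * background**2 / (a2 * n_photons**2))
    return sqrt(var)

# A reasonably bright photoactivatable fluorescent protein localization
p = localization_precision(s_nm=130, n_photons=500, pixel_nm=100, background=2)
print(round(p, 1))  # roughly 7 nm for these hypothetical values
```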
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
CometChip: A High-throughput 96-Well Platform for Measuring DNA Damage in Microarrayed Human Cells
Institutions: Massachusetts Institute of Technology, Chulabhorn Graduate Institute, University of Minnesota.
DNA damaging agents can promote aging, disease and cancer, and they are ubiquitous in the environment and produced within human cells as normal cellular metabolites. Ironically, at high doses DNA damaging agents are also used to treat cancer. The ability to quantify DNA damage responses is thus critical in the public health, pharmaceutical and clinical domains. Here, we describe a novel platform that exploits microfabrication techniques to pattern cells in a fixed microarray. The ‘CometChip’ is based upon the well-established single cell gel electrophoresis assay (a.k.a. the comet assay), which estimates the level of DNA damage by evaluating the extent of DNA migration through a matrix in an electrical field. The types of damage measured by this assay include abasic sites, crosslinks, and strand breaks. Instead of being randomly dispersed in agarose as in the traditional assay, cells are captured into an agarose microwell array by gravity. The platform also expands from the size of a standard microscope slide to a 96-well format, enabling parallel processing. Here we describe protocols for using the chip to evaluate DNA damage caused by known genotoxic agents and the cellular repair response following exposure. Through the integration of biological and engineering principles, this method enables robust and sensitive measurements of DNA damage in human cells and provides the necessary throughput for genotoxicity testing, drug development, epidemiological studies and clinical assays.
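Comet images are typically scored as the percentage of DNA in the tail, a quantity simple enough to illustrate directly (a Python sketch with hypothetical integrated intensities; image segmentation into head and tail is the hard part and is handled by dedicated software):

```python
def percent_tail_dna(head_intensity, tail_intensity):
    """Percent DNA in the comet tail, the standard damage metric:
    more migrated (tail) DNA means more strand breaks."""
    total = head_intensity + tail_intensity
    return 100.0 * tail_intensity / total

# Hypothetical integrated fluorescence intensities from one comet
print(round(percent_tail_dna(head_intensity=8000, tail_intensity=2000), 1))  # 20.0
```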
Bioengineering, Issue 92, comet assay, electrophoresis, microarray, DNA damage, DNA repair
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Mutagenesis and Functional Selection Protocols for Directed Evolution of Proteins in E. coli
Institutions: University of California Santa Cruz - UCSC.
The efficient generation of genetic diversity represents an invaluable molecular tool that can be used to label DNA synthesis, to create unique molecular signatures, or to evolve proteins in the laboratory. Here, we present a protocol that allows the generation of large (>10^11) mutant libraries for a given target sequence. This method is based on replication of a ColE1 plasmid encoding the desired sequence by a low-fidelity variant of DNA polymerase I (LF-Pol I). The target plasmid is transformed into a mutator strain of E. coli and plated on solid media, yielding between 0.2 and 1 mutations/kb, depending on the location of the target gene. Higher mutation frequencies are achieved by iterating this process of mutagenesis. Compared to alternative methods of mutagenesis, our protocol stands out for its simplicity, as no cloning or PCR is involved. Thus, our method is ideal for mutational labeling of plasmids or other Pol I templates, or for exploring large sections of sequence space for the evolution of activities not present in the original target. The tight spatial control that PCR- or randomized oligonucleotide-based methods offer can also be achieved through subsequent cloning of specific sections of the library. Here we provide protocols showing how to create a random mutant library and how to establish drug-based selections in E. coli to identify mutants exhibiting new biochemical activities.
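The quoted mutation densities translate directly into library composition if mutations are assumed to be Poisson-distributed along the sequence (a simple Python illustration; the assumption and the example numbers are ours):

```python
from math import exp

def fraction_with_mutation(mut_per_kb, target_kb):
    """Fraction of library clones carrying at least one mutation in the
    target, assuming a Poisson distribution of mutations."""
    lam = mut_per_kb * target_kb  # expected mutations per clone
    return 1 - exp(-lam)

# At 0.5 mutations/kb over a 3 kb target, most clones carry a mutation
print(round(fraction_with_mutation(0.5, 3.0), 2))  # 0.78
```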
Genetics, Issue 49, random mutagenesis, directed evolution, LB agar drug gradient, bacterial complementation, ColE1 plasmid, DNA polymerase I, replication fidelity, genetic adaptation, antimicrobials, methylating agents
iCLIP - Transcriptome-wide Mapping of Protein-RNA Interactions with Individual Nucleotide Resolution
Institutions: Medical Research Council - MRC, EMBL Heidelberg, University of Ljubljana, Wellcome Trust Sanger Institute.
The unique composition and spatial arrangement of RNA-binding proteins (RBPs) on a transcript guide the diverse aspects of post-transcriptional regulation1. Therefore, an essential step towards understanding transcript regulation at the molecular level is to gain positional information on the binding sites of RBPs2.
Protein-RNA interactions can be studied using biochemical methods, but these approaches do not address RNA binding in its native cellular context. Initial attempts to study protein-RNA complexes in their cellular environment employed affinity purification or immunoprecipitation combined with differential display or microarray analysis (RIP-CHIP)3-5. These approaches were prone to identifying indirect or non-physiological interactions6. In order to increase the specificity and positional resolution, a strategy referred to as CLIP (UV cross-linking and immunoprecipitation) was introduced7,8. CLIP combines UV cross-linking of proteins and RNA molecules with rigorous purification schemes including denaturing polyacrylamide gel electrophoresis. In combination with high-throughput sequencing technologies, CLIP has proven to be a powerful tool to study protein-RNA interactions on a genome-wide scale (referred to as HITS-CLIP or CLIP-seq)9,10. Recently, PAR-CLIP was introduced, which uses photoreactive ribonucleoside analogs for cross-linking11,12.
Despite the high specificity of the obtained data, CLIP experiments often generate cDNA libraries of limited sequence complexity. This is partly due to the restricted amount of co-purified RNA and the two inefficient RNA ligation reactions required for library preparation. In addition, primer extension assays indicated that many cDNAs truncate prematurely at the crosslinked nucleotide13. Such truncated cDNAs are lost during the standard CLIP library preparation protocol. We recently developed iCLIP (individual-nucleotide resolution CLIP), which captures the truncated cDNAs by replacing one of the inefficient intermolecular RNA ligation steps with a more efficient intramolecular cDNA circularization (Figure 1)14. Importantly, sequencing the truncated cDNAs provides insights into the position of the cross-link site at nucleotide resolution. We successfully applied iCLIP to study hnRNP C particle organization on a genome-wide scale and to assess its role in splicing regulation14.
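Because reverse transcription stops at the cross-linked nucleotide, the cross-link position can be read off the mapped cDNA starts. A minimal Python sketch of that inference (hypothetical coordinates; real iCLIP pipelines also collapse PCR duplicates via random barcodes):

```python
from collections import Counter

def crosslink_sites(read_starts):
    """Infer cross-link positions from truncated iCLIP cDNAs: the
    cross-linked nucleotide lies immediately upstream of each mapped
    cDNA start, so each read votes for position (start - 1)."""
    return Counter(start - 1 for start in read_starts)

# Mapped 5' read-start coordinates on one transcript (hypothetical)
starts = [101, 101, 101, 250, 250, 399]
sites = crosslink_sites(starts)
print(sites.most_common(1))  # position 100 has the most read support
```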
Cellular Biology, Issue 50, RNA biochemistry, transcriptome, systems biology, RNA-binding protein
Competitive Genomic Screens of Barcoded Yeast Libraries
Institutions: University of Toronto, National Human Genome Research Institute, NIH, Stanford University.
By virtue of advances in next generation sequencing technologies, we have access to new genome sequences almost daily. The tempo of these advances is accelerating, promising greater depth and breadth. In light of these extraordinary advances, the need for fast, parallel methods to define gene function becomes ever more important. Collections of genome-wide deletion mutants in yeasts and E. coli have served as workhorses for functional characterization of gene function, but this approach is not scalable: current gene-deletion approaches require each of the thousands of genes that comprise a genome to be deleted and verified. Only after this work is complete can we pursue high-throughput phenotyping. Over the past decade, our laboratory has refined a portfolio of competitive, miniaturized, high-throughput genome-wide assays that can be performed in parallel. This parallelization is possible because of the inclusion of DNA 'tags', or 'barcodes', into each mutant; the barcode serves as a proxy for the mutation, and barcode abundance can be measured to assess mutant fitness. In this study, we seek to fill the gap between DNA sequence and barcoded mutant collections. To accomplish this we introduce a combined transposon disruption-barcoding approach that opens up parallel barcode assays to newly sequenced, but poorly characterized microbes. To illustrate this approach we present a new Candida albicans barcoded disruption collection and describe how both microarray-based and next generation sequencing-based platforms can be used to collect 10,000 - 1,000,000 gene-gene and drug-gene interactions in a single experiment.
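The core of the barcode readout is simple: each mutant's relative abundance before and after competitive growth serves as its fitness proxy. A minimal sketch of that scoring step, using made-up barcode read counts and illustrative mutant names (not data or identifiers from the article):

```python
import math

# Hypothetical barcode read counts for three mutants, before (t0) and
# after (t1) competitive growth in a drug-treated pool.
t0_counts = {"orf19.1": 5000, "orf19.2": 4800, "orf19.3": 5100}
t1_counts = {"orf19.1": 5200, "orf19.2": 1200, "orf19.3": 5050}

def fitness_scores(t0, t1, pseudocount=1):
    """Normalize each pool to relative abundance, then score each mutant as
    the log2 ratio of end vs. start abundance (negative = growth defect)."""
    n0, n1 = sum(t0.values()), sum(t1.values())
    return {bc: math.log2(((t1[bc] + pseudocount) / n1) /
                          ((t0[bc] + pseudocount) / n0))
            for bc in t0}

scores = fitness_scores(t0_counts, t1_counts)
# A strongly negative score for a mutant flags a candidate drug-gene interaction.
```

The pseudocount guards against barcodes that drop to zero reads; real pipelines would add replicate handling and significance testing on top of this ratio.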
Biochemistry, Issue 54, chemical biology, chemogenomics, chemical probes, barcode microarray, next generation sequencing
Experimental Manipulation of Body Size to Estimate Morphological Scaling Relationships in Drosophila
Institutions: University of Houston, Michigan State University.
The scaling of body parts is a central feature of animal morphology1-7. Within species, morphological traits need to be correctly proportioned to the body for the organism to function; larger individuals typically have larger body parts and smaller individuals generally have smaller body parts, such that overall body shape is maintained across a range of adult body sizes. The requirement for correct proportions means that individuals within species usually exhibit low variation in relative trait size. In contrast, relative trait size can vary dramatically among species and is a primary mechanism by which morphological diversity is produced. Over a century of comparative work has established these intra- and interspecific patterns3,4.
Perhaps the most widely used approach to describe this variation is to calculate the scaling relationship between the size of two morphological traits using the allometric equation y = bx^α, where x and y are the size of the two traits, such as organ and body size8,9. This equation describes the within-group (e.g., species, population) scaling relationship between two traits as both vary in size. Log-transformation of this equation produces a simple linear equation, log(y) = log(b) + αlog(x), and log-log plots of the size of different traits among individuals of the same species typically reveal linear scaling with an intercept of log(b) and a slope of α, called the 'allometric coefficient'9,10. Morphological variation among groups is described by differences in scaling relationship intercepts or slopes for a given trait pair. Consequently, variation in the parameters of the allometric equation (b and α) elegantly describes the shape variation captured in the relationship between organ and body size within and among biological groups (see 11,12).
Not all traits scale linearly with each other or with body size (e.g., 13,14). Hence, morphological scaling relationships are most informative when the data are taken from the full range of trait sizes. Here we describe how simple experimental manipulation of diet can be used to produce the full range of body size in insects. This permits an estimation of the full scaling relationship for any given pair of traits, allowing a complete description of how shape covaries with size and a robust comparison of scaling relationship parameters among biological groups. Although we focus on Drosophila, our methodology should be applicable to nearly any fully metamorphic insect.
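The log-transformation described above reduces estimating b and α to an ordinary least-squares fit on log-log axes. A minimal sketch with invented trait measurements (the numbers below are illustrative, not data from the article):

```python
import numpy as np

# Hypothetical paired measurements (arbitrary units): body size (x)
# and organ size (y) for six individuals spanning a range of body sizes.
body = np.array([1.0, 1.5, 2.2, 3.1, 4.0, 5.5])
organ = np.array([0.52, 0.74, 1.02, 1.38, 1.71, 2.24])

# Fit log(y) = log(b) + alpha * log(x) by least squares;
# polyfit returns [slope, intercept] for a degree-1 fit.
alpha, log_b = np.polyfit(np.log(body), np.log(organ), 1)
b = np.exp(log_b)

print(f"allometric coefficient alpha = {alpha:.2f}, intercept b = {b:.2f}")
```

A slope α < 1 (as in this toy data set) would indicate hypoallometry: the organ grows proportionally more slowly than the body.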
Developmental Biology, Issue 56, Drosophila, allometry, morphology, body size, scaling, insect
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Biochemical and High Throughput Microscopic Assessment of Fat Mass in Caenorhabditis elegans
Institutions: Massachusetts General Hospital and Harvard Medical School, Massachusetts Institute of Technology.
The nematode C. elegans has emerged as an important model for the study of conserved genetic pathways regulating fat metabolism as it relates to human obesity and its associated pathologies. Several previous methodologies developed for the visualization of C. elegans triglyceride-rich fat stores have proven to be erroneous, highlighting cellular compartments other than lipid droplets. Other methods require specialized equipment, are time-consuming, or yield inconsistent results. We introduce a rapid, reproducible, fixative-based Nile red staining method for the accurate and rapid detection of neutral lipid droplets in C. elegans. A short fixation step in 40% isopropanol makes animals completely permeable to Nile red, which is then used to stain animals. Spectral properties of this lipophilic dye allow it to strongly and selectively fluoresce in the yellow-green spectrum only when in a lipid-rich environment, but not in more polar environments. Thus, lipid droplets can be visualized on a fluorescent microscope equipped with simple GFP imaging capability after only a brief Nile red staining step in isopropanol. The speed, affordability, and reproducibility of this protocol make it ideally suited for high throughput screens. We also demonstrate a paired method for the biochemical determination of triglycerides and phospholipids using gas chromatography-mass spectrometry. This more rigorous protocol should be used as confirmation of results obtained from the Nile red microscopic lipid determination. We anticipate that these techniques will become new standards in the field of C. elegans research.
Genetics, Issue 73, Biochemistry, Cellular Biology, Molecular Biology, Developmental Biology, Physiology, Anatomy, Caenorhabditis elegans, Obesity, Energy Metabolism, Lipid Metabolism, C. elegans, fluorescent lipid staining, lipids, Nile red, fat, high throughput screening, obesity, gas chromatography, mass spectrometry, GC/MS, animal model
A Chemical Screening Procedure for Glucocorticoid Signaling with a Zebrafish Larva Luciferase Reporter System
Institutions: Karlsruhe Institute of Technology - Campus North, Karlsruhe Institute of Technology - Campus South.
Glucocorticoid stress hormones and their artificial derivatives are widely used drugs to treat inflammation, but long-term treatment with glucocorticoids can lead to severe side effects. Test systems are needed to search for novel compounds influencing glucocorticoid signaling in vivo or to determine unwanted effects of compounds on the glucocorticoid signaling pathway. We have established a transgenic zebrafish assay which allows the measurement of glucocorticoid signaling activity in vivo and in real-time: the GRIZLY assay (Glucocorticoid Responsive In vivo Zebrafish Luciferase activitY). The luciferase-based assay detects effects on glucocorticoid signaling with high sensitivity and specificity, including effects by compounds that require metabolization or affect endogenous glucocorticoid production. We present here a detailed protocol for conducting chemical screens with this assay. We describe data acquisition, normalization, and analysis, placing a focus on quality control and data visualization. The assay provides a simple, time-resolved, and quantitative readout. It can be operated as a stand-alone platform, but is also easily integrated into high-throughput screening workflows. It furthermore allows for many applications beyond chemical screening, such as environmental monitoring of endocrine disruptors or stress research.
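Two computations recur in luciferase-based screen analysis of this kind: normalizing a test well's signal to the vehicle control, and checking plate quality with the standard Z'-factor metric. A minimal sketch with invented luminescence counts (the numbers, and treating dexamethasone wells as the positive control, are illustrative assumptions, not values from the article):

```python
import statistics

# Hypothetical raw luminescence counts from one screening plate:
# positive-control wells (glucocorticoid-treated) and vehicle wells.
pos = [9800, 10200, 9500, 10100, 9900, 10400]
neg = [1000, 1100, 950, 1050, 980, 1020]

def fold_induction(signal, vehicle):
    """Fold change of a test well's signal over the mean vehicle signal."""
    return signal / statistics.mean(vehicle)

def z_prime(positive, negative):
    """Z'-factor, a standard plate-level quality-control statistic;
    values above ~0.5 indicate a robust assay window."""
    mu_p, mu_n = statistics.mean(positive), statistics.mean(negative)
    sd_p, sd_n = statistics.stdev(positive), statistics.stdev(negative)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)
```

With a time-resolved readout like GRIZLY's, the same calculations can be applied per time point or to the integrated signal over the recording window.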
Developmental Biology, Issue 79, Biochemistry, Vertebrates, Zebrafish, environmental effects (biological and animal), genetics (animal), life sciences, animal biology, animal models, biochemistry, bioengineering (general), Hormones, Hormone Substitutes, and Hormone Antagonists, zebrafish, Danio rerio, chemical screening, luciferase, glucocorticoid, stress, high-throughput screening, receiver operating characteristic curve, in vivo, animal model
Molecular Evolution of the Tre Recombinase
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed, molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells.
We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. As the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. These subsets, acting as evolutionary intermediates, did show recombination activity. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase -- Tre -- had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1 infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution
Electroporation of Mycobacteria
Institutions: Barts and the London School of Medicine and Dentistry, Barts and the London School of Medicine and Dentistry.
High efficiency transformation is a major limitation in the study of mycobacteria. The genus Mycobacterium can be difficult to transform; this is mainly caused by the thick and waxy cell wall, but is compounded by the fact that most molecular techniques have been developed for distantly-related species such as Escherichia coli and Bacillus subtilis. In spite of these obstacles, mycobacterial plasmids have been identified and DNA transformation of many mycobacterial species has now been described. The most successful method for introducing DNA into mycobacteria is electroporation. Many parameters contribute to successful transformation; these include the species/strain, the nature of the transforming DNA, the selectable marker used, the growth medium, and the conditions for the electroporation pulse. Optimized methods for the transformation of both slow- and fast-growers are detailed here. Transformation efficiencies for different mycobacterial species and with various selectable markers are reported.
Microbiology, Issue 15, Springer Protocols, Mycobacteria, Electroporation, Bacterial Transformation, Transformation Efficiency, Bacteria, Tuberculosis, M. Smegmatis
Interview: Glycolipid Antigen Presentation by CD1d and the Therapeutic Potential of NKT cell Activation
Institutions: La Jolla Institute for Allergy and Immunology.
Natural Killer T cells (NKT) are critical determinants of the immune response to cancer, regulation of autoimmune disease, clearance of infectious agents, and the development of atherosclerotic plaques. In this interview, Mitch Kronenberg discusses his laboratory's efforts to understand the mechanism through which NKT cells are activated by glycolipid antigens. Central to these studies is CD1d - the antigen presenting molecule that presents glycolipids to NKT cells. The advent of CD1d tetramer technology, a technique developed by the Kronenberg lab, is critical for the sorting and identification of subsets of specific glycolipid-reactive T cells. Mitch explains how glycolipid agonists are being used as therapeutic agents to activate NKT cells in cancer patients and how CD1d tetramers can be used to assess the state of the NKT cell population in vivo following glycolipid agonist therapy. The current status of ongoing clinical trials using these agonists is discussed, as well as Mitch's predictions for areas in the field of immunology that will have emerging importance in the near future.
Immunology, Issue 10, Natural Killer T cells, NKT cells, CD1 Tetramers, antigen presentation, glycolipid antigens, CD1d, Mucosal Immunity, Translational Research