JoVE Visualize
 
PubMed Article
Synthesis of a pseudo-disaccharide library and its application to the characterisation of the heparanase catalytic site.
PLoS ONE
PUBLISHED: 01-01-2013
A novel methodology is described for the efficient and divergent synthesis of pseudodisaccharides, molecules comprising amino carbasugar analogues linked to natural sugars. The methodology is general and enables the introduction of diversity both at the carbasugar and the natural sugar components of the pseudodisaccharides. Using this approach, a series of pseudodisaccharides are synthesised that mimic the repeating backbone unit of heparan sulfate, and are tested for inhibition of heparanase, a disease-relevant enzyme that hydrolyses heparan sulfate. A new homology model of human heparanase is described based on a family 79 β-glucuronidase. This model is used to provide a computational rationale for the observed activity of the different pseudodisaccharides and yields valuable information that informs the design of potential inhibitors of this enzyme.
Authors: Marcel Hollenstein, Christine Catherine Smith, Michael Räz.
Published: 04-03-2014
ABSTRACT
The traditional strategy for the introduction of chemical functionalities is the use of solid-phase synthesis by appending suitably modified phosphoramidite precursors to the nascent chain. However, the conditions used during the synthesis and the restriction to rather short sequences hamper the applicability of this methodology. On the other hand, modified nucleoside triphosphates are activated building blocks that have been employed for the mild introduction of numerous functional groups into nucleic acids, a strategy that paves the way for the use of modified nucleic acids in a broad palette of practical applications such as functional tagging and the generation of ribozymes and DNAzymes. One of the major challenges resides in the intricacy of the methodology leading to the isolation and characterization of these nucleoside analogues. In this video article, we present a detailed protocol for the synthesis of these modified analogues using phosphorus(III)-based reagents. In addition, the procedure for their biochemical characterization is described, with a special emphasis on primer extension reactions and TdT tailing polymerization. This detailed protocol will be of use for the crafting of modified dNTPs and their further use in chemical biology.
25 Related JoVE Articles!
Genome Editing with CompoZr Custom Zinc Finger Nucleases (ZFNs)
Authors: Keith Hansen, Matthew J. Coussens, Jack Sago, Shilpi Subramanian, Monika Gjoka, Dave Briner.
Institutions: Sigma Life Science.
Genome editing is a powerful technique that can be used to elucidate gene function and the genetic basis of disease. Traditional gene editing methods such as chemical-based mutagenesis or random integration of DNA sequences confer indiscriminate genetic changes in an overall inefficient manner and require incorporation of undesirable synthetic sequences or use of aberrant culture conditions, potentially confounding biological studies. By contrast, transient ZFN expression in a cell can facilitate precise, heritable gene editing in a highly efficient manner without the need for administration of chemicals or integration of synthetic transgenes. Zinc finger nucleases (ZFNs) are enzymes that bind and cut distinct sequences of double-stranded DNA (dsDNA). A functional CompoZr ZFN unit consists of two individual monomeric proteins that bind a DNA "half-site" of approximately 15-18 nucleotides (see Figure 1). When two ZFN monomers "home" to their adjacent target sites, the DNA-cleavage domains dimerize and create a double-strand break (DSB) in the DNA.1 Introduction of ZFN-mediated DSBs in the genome lays a foundation for highly efficient genome editing. Imperfect repair of DSBs in a cell via the non-homologous end-joining (NHEJ) DNA repair pathway can result in small insertions and deletions (indels). Creation of indels within the gene coding sequence of a cell can result in a frameshift and subsequent functional knockout of a gene locus at high efficiency.2 While this protocol describes the use of ZFNs to create a gene knockout, integration of transgenes may also be conducted via homology-directed repair at the ZFN cut site. The CompoZr Custom ZFN Service represents a systematic, comprehensive, and well-characterized approach to targeted gene editing for the scientific community with ZFN technology. Sigma scientists work closely with investigators to 1) perform a due-diligence analysis, including analysis of relevant gene structure, biology, and model system pursuant to the project goals, 2) apply this knowledge to develop a sound targeting strategy, and 3) design, build, and functionally validate ZFNs for activity in a relevant cell line. The investigator receives positive control genomic DNA and primers, and ready-to-use ZFN reagents supplied in both plasmid DNA and in vitro-transcribed mRNA formats. These reagents may then be delivered for transient expression in the investigator's cell line or cell type of choice. Samples are then tested for gene editing at the locus of interest by standard molecular biology techniques including PCR amplification, enzymatic digest, and electrophoresis. After a positive signal for gene editing is detected in the initial population, cells are single-cell cloned and genotyped for identification of mutant clones/alleles.
Genetics, Issue 64, Molecular Biology, Zinc Finger Nuclease, Genome Engineering, Genomic Editing, Gene Modification, Gene Knockout, Gene Integration, non-homologous end joining, homologous recombination, targeted genome editing
Identifying Targets of Human microRNAs with the LightSwitch Luciferase Assay System using 3'UTR-reporter Constructs and a microRNA Mimic in Adherent Cells
Authors: Shelley Force Aldred, Patrick Collins, Nathan Trinklein.
Institutions: SwitchGear Genomics.
MicroRNAs (miRNAs) are important regulators of gene expression and play a role in many biological processes. More than 700 human miRNAs have been identified so far with each having up to hundreds of unique target mRNAs. Computational tools, expression and proteomics assays, and chromatin-immunoprecipitation-based techniques provide important clues for identifying mRNAs that are direct targets of a particular miRNA. In addition, 3'UTR-reporter assays have become an important component of thorough miRNA target studies because they provide functional evidence for and quantitate the effects of specific miRNA-3'UTR interactions in a cell-based system. To enable more researchers to leverage 3'UTR-reporter assays and to support the scale-up of such assays to high-throughput levels, we have created a genome-wide collection of human 3'UTR luciferase reporters in the highly-optimized LightSwitch Luciferase Assay System. The system also includes synthetic miRNA target reporter constructs for use as positive controls, various endogenous 3'UTR reporter constructs, and a series of standardized experimental protocols. Here we describe a method for co-transfection of individual 3'UTR-reporter constructs along with a miRNA mimic that is efficient, reproducible, and amenable to high-throughput analysis.
Genetics, Issue 55, MicroRNA, miRNA, mimic, Clone, 3' UTR, Assay, vector, LightSwitch, luciferase, co-transfection, 3'UTR REPORTER, mirna target, microrna target, reporter, GoClone, Reporter construct
Generation of RNA/DNA Hybrids in Genomic DNA by Transformation using RNA-containing Oligonucleotides
Authors: Ying Shen, Francesca Storici.
Institutions: Georgia Institute of Technology.
Synthetic short nucleic acid polymers, oligonucleotides (oligos), are the most functional and widespread tools of molecular biology. Oligos can be produced to contain any desired DNA or RNA sequence and can be prepared to include a wide variety of base and sugar modifications. Moreover, oligos can be designed to mimic specific nucleic acid alterations and thus can serve as important tools to investigate effects of DNA damage and mechanisms of repair. We found that Thermo Scientific Dharmacon RNA-containing oligos with a length between 50 and 80 nucleotides can be particularly suitable to study, in vivo, functions and consequences of chromosomal RNA/DNA hybrids and of ribonucleotides embedded into DNA. RNA/DNA hybrids can readily form during DNA replication, repair and transcription; however, very little is known about the stability of RNA/DNA hybrids in cells and to what extent these hybrids can affect the genetic integrity of cells. RNA-containing oligos, therefore, represent a perfect vector to introduce ribonucleotides into chromosomal DNA and generate RNA/DNA hybrids of chosen length and base composition. Here we present the protocol for the incorporation of ribonucleotides into the genome of the eukaryotic model system, the yeast Saccharomyces cerevisiae. In addition, our lab has utilized Thermo Scientific Dharmacon RNA-containing oligos to generate RNA/DNA hybrids at the chromosomal level in different cell systems, from bacteria to human cells.
Cellular Biology, Issue 45, RNA-containing oligonucleotides, ribonucleotides, RNA/DNA hybrids, yeast, transformation, gene targeting, genome instability, DNA repair
GENPLAT: an Automated Platform for Biomass Enzyme Discovery and Cocktail Optimization
Authors: Jonathan Walton, Goutami Banerjee, Suzana Car.
Institutions: Michigan State University, Michigan State University.
The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ~10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such as T. reesei. Proteins can also be purified from commercial enzyme cocktails (e.g., Multifect Xylanase, Novozyme 188). An increasing number of pure enzymes, including glycosyl hydrolases, cell wall-active esterases, proteases, and lyases, are available from commercial sources, e.g., Megazyme, Inc. (www.megazyme.com), NZYTech (www.nzytech.com), and PROZOMIX (www.prozomix.com). Design-Expert software (Stat-Ease, Inc.) is used to create simplex-lattice designs and to analyze responses (in this case, Glc and Xyl release). Mixtures contain 4-20 components, which can vary in proportion between 0 and 100%. Assay points typically include the extreme vertices with a sufficient number of intervening points to generate a valid model. In the terminology of experimental design, most of our studies are "mixture" experiments, meaning that the sum of all components adds to a total fixed protein loading (expressed as mg/g glucan). The number of mixtures in the simplex-lattice depends on both the number of components in the mixture and the degree of polynomial (quadratic or cubic). For example, a 6-component experiment will entail 63 separate reactions with an augmented special cubic model, which can detect three-way interactions, whereas only 23 individual reactions are necessary with an augmented quadratic model. For mixtures containing more than eight components, a quadratic experimental design is more practical, and in our experience such models are usually statistically valid.
All enzyme loadings are expressed as a percentage of the final total loading (which for our experiments is typically 15 mg protein/g glucan). For "core" enzymes, the lower percentage limit is set to 5%. This limit was derived from our experience in which yields of Glc and/or Xyl were very low if any core enzyme was present at 0%. Poor models result from too many samples showing very low Glc or Xyl yields. Setting a lower limit in turn determines an upper limit. That is, for a six-component experiment, if the lower limit for each single component is set to 5%, then the upper limit of each single component will be 75%. The lower limits of all other enzymes considered as "accessory" are set to 0%. "Core" and "accessory" are somewhat arbitrary designations and will differ depending on the substrate, but in our studies the core enzymes for release of Glc from corn stover comprise the following enzymes from T. reesei: CBH1 (also known as Cel7A), CBH2 (Cel6A), EG1 (Cel7B), BG (β-glucosidase), EX3 (endo-β1,4-xylanase, GH10), and BX (β-xylosidase).
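To make the mixture-design arithmetic above concrete, the following minimal Python sketch reproduces the proportion-limit calculation (a 5% floor for each core enzyme implies a 75% ceiling on any one component in a six-component design). It is an illustration only; GENPLAT itself builds and analyzes the designs in Design-Expert, and the function and variable names here are hypothetical.

```python
# Hypothetical helper illustrating the proportion-limit arithmetic described
# above; GENPLAT itself uses Design-Expert (Stat-Ease) to build the designs.

def component_limits(n_core, n_accessory, core_lower=0.05):
    """Return (lower, upper) proportion limits for each component in a
    fixed-total mixture design, given a lower bound for 'core' enzymes."""
    limits = {}
    for i in range(n_core):
        # Upper limit = whatever is left after the other core enzymes take
        # their minimum share (accessory enzymes may sit at 0%).
        upper = 1.0 - core_lower * (n_core - 1)
        limits[f"core_{i + 1}"] = (core_lower, upper)
    for j in range(n_accessory):
        upper = 1.0 - core_lower * n_core
        limits[f"accessory_{j + 1}"] = (0.0, upper)
    return limits

# Six core components with a 5% floor -> each core enzyme can range up to 75%,
# matching the example given in the text above.
for name, (lo, hi) in component_limits(n_core=6, n_accessory=0).items():
    print(name, f"{lo:.0%}-{hi:.0%}")
```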
Bioengineering, Issue 56, cellulase, cellobiohydrolase, glucanase, xylanase, hemicellulase, experimental design, biomass, bioenergy, corn stover, glycosyl hydrolase
RNA Secondary Structure Prediction Using High-throughput SHAPE
Authors: Sabrina Lusvarghi, Joanna Sztuba-Solinska, Katarzyna J. Purzycka, Jason W. Rausch, Stuart F.J. Le Grice.
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables that are themselves converted into pseudo-energy constraints used in the RNAStructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
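The abstract does not spell out how reactivities become pseudo-energy constraints; as background, RNAstructure commonly incorporates SHAPE data through a pairing penalty of the form ΔG_SHAPE = m·ln(reactivity + 1) + b. The minimal sketch below assumes the widely cited default parameters (m ≈ 2.6, b ≈ -0.8 kcal/mol), which are not taken from this article.

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Pseudo-free-energy change (kcal/mol) applied to a nucleotide when it
    forms a base pair, from its normalized SHAPE reactivity
    (Deigan-style term: dG = m * ln(reactivity + 1) + b). Negative
    reactivities are treated as missing data and ignored."""
    if reactivity < 0:          # e.g. the common -999 "no data" convention
        return 0.0
    return m * math.log(reactivity + 1.0) + b

# A flexible (highly reactive) nucleotide is penalized for pairing,
# while an unreactive one receives a small bonus.
for r in (0.0, 0.5, 1.5):
    print(r, round(shape_pseudo_energy(r), 2))
```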
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
RNA-seq Analysis of Transcriptomes in Thrombin-treated and Control Human Pulmonary Microvascular Endothelial Cells
Authors: Dilyara Cheranova, Margaret Gibson, Suman Chaudhary, Li Qin Zhang, Daniel P. Heruth, Dmitry N. Grigoryev, Shui Qing Ye.
Institutions: Children's Mercy Hospital and Clinics, School of Medicine, University of Missouri-Kansas City.
The characterization of gene expression in cells via measurement of mRNA levels is a useful tool in determining how the transcriptional machinery of the cell is affected by external signals (e.g. drug treatment), or how cells differ between a healthy state and a diseased state. With the advent and continuous refinement of next-generation DNA sequencing technology, RNA-sequencing (RNA-seq) has become an increasingly popular method of transcriptome analysis to catalog all species of transcripts, to determine the transcriptional structure of all expressed genes and to quantify the changing expression levels of the total set of transcripts in a given cell, tissue or organism1,2. RNA-seq is gradually replacing DNA microarrays as a preferred method for transcriptome analysis because it has the advantages of profiling a complete transcriptome, providing digital-type data (the copy number of any transcript) and not relying on any known genomic sequence3. Here, we present a complete and detailed protocol to apply RNA-seq to profile transcriptomes in human pulmonary microvascular endothelial cells with or without thrombin treatment. This protocol is based on our recently published study entitled "RNA-seq Reveals Novel Transcriptome of Genes and Their Isoforms in Human Pulmonary Microvascular Endothelial Cells Treated with Thrombin,"4 in which we successfully performed the first complete transcriptome analysis of human pulmonary microvascular endothelial cells treated with thrombin using RNA-seq. That study yielded unprecedented resources for further experimentation to gain insights into molecular mechanisms underlying thrombin-mediated endothelial dysfunction in the pathogenesis of inflammatory conditions, cancer, diabetes, and coronary heart disease, and provides potential new leads for therapeutic targets in those diseases. The descriptive text of this protocol is divided into four parts. The first part describes the treatment of human pulmonary microvascular endothelial cells with thrombin and RNA isolation, quality analysis and quantification. The second part describes library construction and sequencing. The third part describes the data analysis. The fourth part describes an RT-PCR validation assay. Representative results of several key steps are displayed. Useful tips or precautions to boost success in key steps are provided in the Discussion section. Although this protocol uses human pulmonary microvascular endothelial cells treated with thrombin, it can be generalized to profile transcriptomes in both mammalian and non-mammalian cells and in tissues treated with different stimuli or inhibitors, or to compare transcriptomes in cells or tissues between a healthy state and a disease state.
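As a purely illustrative companion to the data-analysis step, the sketch below computes log2 fold changes between control and thrombin-treated expression values; the gene names, values, and pseudocount are hypothetical, and this is not the pipeline used in the published study.

```python
import math

# Hypothetical expression table: gene -> (control, thrombin) normalized values.
expression = {
    "GENE_A": (12.0, 48.5),
    "GENE_B": (30.2, 29.8),
    "GENE_C": (5.1, 0.9),
}

def log2_fold_change(control, treated, pseudocount=0.1):
    """log2 ratio of treated to control expression; the pseudocount avoids
    division by zero for genes not detected in one condition."""
    return math.log2((treated + pseudocount) / (control + pseudocount))

for gene, (ctrl, thr) in expression.items():
    print(gene, round(log2_fold_change(ctrl, thr), 2))
```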
Genetics, Issue 72, Molecular Biology, Immunology, Medicine, Genomics, Proteins, RNA-seq, Next Generation DNA Sequencing, Transcriptome, Transcription, Thrombin, Endothelial cells, high-throughput, DNA, genomic DNA, RT-PCR, PCR
Massively Parallel Reporter Assays in Cultured Mammalian Cells
Authors: Alexandre Melnikov, Xiaolan Zhang, Peter Rogov, Li Wang, Tarjei S. Mikkelsen.
Institutions: Broad Institute.
The genetic reporter assay is a well-established and powerful tool for dissecting the relationship between DNA sequences and their gene regulatory activities. The potential throughput of this assay has, however, been limited by the need to individually clone and assay the activity of each sequence of interest using protein fluorescence or enzymatic activity as a proxy for regulatory activity. Advances in high-throughput DNA synthesis and sequencing technologies have recently made it possible to overcome these limitations by multiplexing the construction and interrogation of large libraries of reporter constructs. This protocol describes the implementation of a Massively Parallel Reporter Assay (MPRA) that allows direct comparison of hundreds of thousands of putative regulatory sequences in a single cell culture dish.
Genetics, Issue 90, gene regulation, transcriptional regulation, sequence-activity mapping, reporter assay, library cloning, transfection, tag sequencing, mammalian cells
Specificity Analysis of Protein Lysine Methyltransferases Using SPOT Peptide Arrays
Authors: Srikanth Kudithipudi, Denis Kusevic, Sara Weirich, Albert Jeltsch.
Institutions: Stuttgart University.
Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, and these are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins; however, until now only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array-based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools to characterize the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membranes using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, by employing peptide arrays we showed that NSD1 methylates K44 of H4 instead of the reported H4K20 and that, in addition, H1.5K168 is the highly preferred substrate over the previously known H3K36. Hence, peptide arrays are powerful tools to biochemically characterize PKMTs.
Biochemistry, Issue 93, Peptide arrays, solid phase peptide synthesis, SPOT synthesis, protein lysine methyltransferases, substrate specificity profile analysis, lysine methylation
Single Read and Paired End mRNA-Seq Illumina Libraries from 10 Nanograms Total RNA
Authors: Srikumar Sengupta, Jennifer M. Bolin, Victor Ruotti, Bao Kim Nguyen, James A. Thomson, Angela L. Elwell, Ron Stewart.
Institutions: Morgridge Institute for Research, University of Wisconsin, University of California.
Whole transcriptome sequencing by mRNA-Seq is now used extensively to perform global gene expression, mutation, allele-specific expression and other genome-wide analyses. mRNA-Seq even opens the gate for gene expression analysis of non-sequenced genomes. mRNA-Seq offers high sensitivity, a large dynamic range and allows measurement of transcript copy numbers in a sample. Illumina's Genome Analyzer performs sequencing of a large number (>10^7) of relatively short sequence reads (<150 bp). The "paired end" approach, wherein a single long fragment is sequenced from both of its ends, allows for tracking alternate splice junctions, insertions and deletions, and is useful for de novo transcriptome assembly. One of the major challenges faced by researchers is a limited amount of starting material. For example, in experiments where cells are harvested by laser micro-dissection, available starting total RNA may measure in nanograms. Preparation of mRNA-Seq libraries from such samples has been described1,2 but involves significant PCR amplification that may introduce bias. Other RNA-Seq library construction procedures with minimal PCR amplification have been published3,4 but require microgram amounts of starting total RNA. Here we describe an mRNA-Seq library preparation protocol for the Illumina Genome Analyzer II platform that avoids significant PCR amplification and requires only 10 nanograms of total RNA. While this protocol has been described previously and validated for single-end sequencing5, where it was shown to produce directional libraries without introducing significant amplification bias, here we validate it further for use as a paired end protocol. We selectively amplify polyadenylated messenger RNAs from starting total RNA using the T7-based Eberwine linear amplification method, coined "T7LA" (T7 linear amplification). The amplified poly-A mRNAs are fragmented, reverse transcribed and adapter ligated to produce the final sequencing library. For both single read and paired end runs, sequences are mapped to the human transcriptome6 and normalized so that data from multiple runs can be compared. We report the gene expression measurement in units of transcripts per million (TPM), which is a superior measure to RPKM when comparing samples7.
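For reference, the TPM measure reported above can be computed from read counts and transcript lengths as follows; this minimal sketch uses made-up numbers and is not part of the library protocol itself.

```python
# Minimal sketch of the TPM calculation referenced above (illustrative values;
# real pipelines work from the read alignments, not a hand-made table).

def tpm(counts, lengths_kb):
    """Transcripts per million from read counts and transcript lengths (kb)."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]   # reads per kilobase
    total = sum(rates)
    return [r / total * 1e6 for r in rates]

counts = [500, 1000, 1500]        # mapped reads per transcript (hypothetical)
lengths_kb = [0.5, 2.0, 3.0]      # transcript lengths in kilobases
print([round(x, 1) for x in tpm(counts, lengths_kb)])
# Unlike RPKM, TPM values always sum to one million, which makes samples
# directly comparable.
```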
Molecular Biology, Issue 56, Genetics, mRNA-Seq, Illumina-Seq, gene expression profiling, high throughput sequencing
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information could be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Split-Ubiquitin Based Membrane Yeast Two-Hybrid (MYTH) System: A Powerful Tool For Identifying Protein-Protein Interactions
Authors: Jamie Snider, Saranya Kittanakom, Jasna Curak, Igor Stagljar.
Institutions: University of Toronto, University of Toronto, University of Toronto.
The fundamental biological and clinical importance of integral membrane proteins prompted the development of a yeast-based system for the high-throughput identification of protein-protein interactions (PPI) for full-length transmembrane proteins. To this end, our lab developed the split-ubiquitin based Membrane Yeast Two-Hybrid (MYTH) system. This technology allows for the sensitive detection of transient and stable protein interactions using Saccharomyces cerevisiae as a host organism. MYTH takes advantage of the observation that ubiquitin can be separated into two stable moieties: the C-terminal half of yeast ubiquitin (Cub) and the N-terminal half of the ubiquitin moiety (Nub). In MYTH, this principle is adapted for use as a 'sensor' of protein-protein interactions. Briefly, the integral membrane bait protein is fused to Cub which is linked to an artificial transcription factor. Prey proteins, either in individual or library format, are fused to the Nub moiety. Protein interaction between the bait and prey leads to reconstitution of the ubiquitin moieties, forming a full-length 'pseudo-ubiquitin' molecule. This molecule is in turn recognized by cytosolic deubiquitinating enzymes, resulting in cleavage of the transcription factor, and subsequent induction of reporter gene expression. The system is highly adaptable, and is particularly well-suited to high-throughput screening. It has been successfully employed to investigate interactions using integral membrane proteins from both yeast and other organisms.
Cellular Biology, Issue 36, protein-protein interaction, membrane, split-ubiquitin, yeast, library screening, Y2H, yeast two-hybrid, MYTH
Steady-state, Pre-steady-state, and Single-turnover Kinetic Measurement for DNA Glycosylase Activity
Authors: Akira Sassa, William A. Beard, David D. Shock, Samuel H. Wilson.
Institutions: NIEHS, National Institutes of Health.
Human 8-oxoguanine DNA glycosylase (OGG1) excises the mutagenic oxidative DNA lesion 8-oxo-7,8-dihydroguanine (8-oxoG) from DNA. Kinetic characterization of OGG1 is undertaken to measure the rates of 8-oxoG excision and product release. When the OGG1 concentration is lower than substrate DNA, time courses of product formation are biphasic; a rapid exponential phase (i.e. burst) of product formation is followed by a linear steady-state phase. The initial burst of product formation corresponds to the concentration of enzyme properly engaged on the substrate, and the burst amplitude depends on the concentration of enzyme. The first-order rate constant of the burst corresponds to the intrinsic rate of 8-oxoG excision and the slower steady-state rate measures the rate of product release (product DNA dissociation rate constant, koff). Here, we describe steady-state, pre-steady-state, and single-turnover approaches to isolate and measure specific steps during OGG1 catalytic cycling. A fluorescent labeled lesion-containing oligonucleotide and purified OGG1 are used to facilitate precise kinetic measurements. Since low enzyme concentrations are used to make steady-state measurements, manual mixing of reagents and quenching of the reaction can be performed to ascertain the steady-state rate (koff). Additionally, extrapolation of the steady-state rate to a point on the ordinate at zero time indicates that a burst of product formation occurred during the first turnover (i.e. y-intercept is positive). The first-order rate constant of the exponential burst phase can be measured using a rapid mixing and quenching technique that examines the amount of product formed at short time intervals (<1 sec) before the steady-state phase and corresponds to the rate of 8-oxoG excision (i.e. chemistry). The chemical step can also be measured using a single-turnover approach where catalytic cycling is prevented by saturating substrate DNA with enzyme (E>S). These approaches can measure elementary rate constants that influence the efficiency of removal of a DNA lesion.
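The biphasic time courses described above are typically fit to a burst equation, P(t) = A(1 - e^(-k_obs·t)) + v_ss·t, where A reflects the productively engaged enzyme, k_obs the excision rate, and v_ss/A approximates k_off. Below is a minimal curve-fitting sketch with hypothetical data (not values from this protocol).

```python
import numpy as np
from scipy.optimize import curve_fit

def burst(t, A, k_obs, v_ss):
    """Biphasic product formation: exponential burst followed by a linear
    steady-state phase. A ~ productively engaged enzyme, k_obs ~ excision
    rate, v_ss ~ steady-state rate (v_ss / A approximates k_off)."""
    return A * (1.0 - np.exp(-k_obs * t)) + v_ss * t

# Hypothetical time course (nM product vs. seconds), for illustration only.
t = np.array([0.05, 0.1, 0.25, 0.5, 1, 2, 5, 10, 30, 60])
p = np.array([1.8, 3.4, 6.5, 8.6, 9.7, 10.3, 11.0, 11.9, 15.2, 20.1])

(A, k_obs, v_ss), _ = curve_fit(burst, t, p, p0=(10.0, 5.0, 0.2))
print(f"burst amplitude ~{A:.1f} nM, k_obs ~{k_obs:.1f} s^-1, "
      f"k_off ~{v_ss / A:.3f} s^-1")
```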
Chemistry, Issue 78, Biochemistry, Genetics, Molecular Biology, Microbiology, Structural Biology, Chemical Biology, Eukaryota, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Enzymes and Coenzymes, Life Sciences (General), enzymology, rapid quench-flow, active site titration, steady-state, pre-steady-state, single-turnover, kinetics, base excision repair, DNA glycosylase, 8-oxo-7,8-dihydroguanine, 8-oxoG, sequencing
High-throughput Saccharification Assay for Lignocellulosic Materials
Authors: Leonardo D. Gomez, Caragh Whitehead, Philip Roberts, Simon J. McQueen-Mason.
Institutions: University of York.
Polysaccharides that make up plant lignocellulosic biomass can be broken down to produce a range of sugars that subsequently can be used in establishing a biorefinery. These raw materials would constitute a new industrial platform, which is both sustainable and carbon neutral, to replace the current dependency on fossil fuels. The recalcitrance to deconstruction observed in lignocellulosic materials is produced by several intrinsic properties of plant cell walls. Crystalline cellulose is embedded in matrix polysaccharides such as xylans and arabinoxylans, and the whole structure is encased by the phenolic polymer lignin, which is also difficult to digest1. In order to improve the digestibility of plant materials we need to discover the main bottlenecks for the saccharification of cell walls and also screen mutant and breeding populations to evaluate the variability in saccharification2. These tasks require a high throughput approach and here we present an analytical platform that can perform saccharification analysis in a 96-well plate format. This platform has been developed to allow the screening of lignocellulose digestibility of large populations from varied plant species. We have scaled down the reaction volumes for gentle pretreatment, partial enzymatic hydrolysis and sugar determination, to allow large numbers to be assessed rapidly in an automated system. This automated platform works with milligram amounts of biomass, performing ball milling under controlled conditions to reduce the plant materials to a standardised particle size in a reproducible manner. Once the samples are ground, the automated formatting robot dispenses specified and recorded amounts of material into the corresponding wells of a 96-deep-well plate (Figure 1). Normally, we dispense the same material into 4 wells to have 4 replicates for analysis. Once the plates are filled with the plant material in the desired layout, they are manually moved to a liquid handling station (Figure 2). In this station the samples are subjected to a mild pretreatment with either dilute acid or alkali and incubated at temperatures of up to 90°C. The pretreatment solution is subsequently removed and the samples are rinsed with buffer to return them to a suitable pH for hydrolysis. The samples are then incubated with an enzyme mixture for a variable length of time at 50°C. An aliquot is taken from the hydrolyzate and the reducing sugars are automatically determined by the MBTH colorimetric method.
Molecular Biology, Issue 53, Saccharification, lignocellulose, high-throughput, glycosyl hydrolases, biomass, biofuels
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Split-and-pool Synthesis and Characterization of Peptide Tertiary Amide Library
Authors: Yu Gao, Thomas Kodadek.
Institutions: The Scripps Research Institute.
Peptidomimetics are great sources of protein ligands. The oligomeric nature of these compounds enables us to access large synthetic libraries on solid phase by using combinatorial chemistry. One of the most well studied classes of peptidomimetics is peptoids. Peptoids are easy to synthesize and have been shown to be proteolysis-resistant and cell-permeable. Over the past decade, many useful protein ligands have been identified through screening of peptoid libraries. However, most of the ligands identified from peptoid libraries do not display high affinity, with rare exceptions. This may be due, in part, to the lack of chiral centers and conformational constraints in peptoid molecules. Recently, we described a new synthetic route to access peptide tertiary amides (PTAs). PTAs are a superfamily of peptidomimetics that include but are not limited to peptides, peptoids and N-methylated peptides. With side chains on both the α-carbon and main chain nitrogen atoms, the conformation of these molecules is greatly constrained by steric hindrance and allylic 1,3-strain (Figure 1). Our study suggests that these PTA molecules are highly structured in solution and can be used to identify protein ligands. We believe that these molecules can be a future source of high-affinity protein ligands. Here we describe the synthetic method combining the power of both split-and-pool and sub-monomer strategies to synthesize a sample one-bead one-compound (OBOC) library of PTAs.
Chemistry, Issue 88, Split-and-pool synthesis, peptide tertiary amide, PTA, peptoid, high-throughput screening, combinatorial library, solid phase, triphosgene (BTC), one-bead one-compound, OBOC
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
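As a simplified illustration of how fluorescence readings become potential activity rates, the sketch below divides background-corrected fluorescence by a standard-curve slope, incubation time, and dry soil mass. The variable names and numbers are hypothetical, and published protocols apply additional volume and quench corrections that are omitted here.

```python
# Simplified sketch of converting fluorescence to a potential activity rate
# (nmol substrate cleaved per g dry soil per hour). Illustrative only.

def potential_activity(sample_fluor, substrate_control, homogenate_control,
                       standard_slope, incubation_h, dry_soil_g):
    """standard_slope: fluorescence units per nmol of free fluorophore (MUB or
    MUC), taken from a standard curve prepared in the same soil slurry so that
    quenching is at least partly accounted for."""
    net_fluor = sample_fluor - substrate_control - homogenate_control
    nmol_product = net_fluor / standard_slope
    return nmol_product / (incubation_h * dry_soil_g)

print(potential_activity(sample_fluor=5200, substrate_control=150,
                         homogenate_control=250, standard_slope=180.0,
                         incubation_h=3.0, dry_soil_g=0.75))
# -> ~11.9 nmol g^-1 h^-1 with these hypothetical numbers
```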
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Authors: Lisa M. Weatherly, Rachel H. Kennedy, Juyoung Shim, Julie A. Gosse.
Institutions: University of Maine, Orono, University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1. Originally published by Naal et al.1, this degranulation assay has been adapted for the screening of drugs and toxicants, and we demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2. In an effort to avoid use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm)12. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation and, more broadly, their allergic potential.
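The UV-Vis concentration check mentioned above is a direct Beer-Lambert calculation, c = A / (ε·l), using the stated ε280 of 4,200 L/M/cm; the absorbance value and path length in this sketch are hypothetical examples.

```python
# Beer-Lambert check of the triclosan concentration described above:
# c = A / (epsilon * l), with epsilon_280 = 4,200 L mol^-1 cm^-1 from the text.

EPSILON_280 = 4200.0   # L mol^-1 cm^-1 (value stated in the protocol)
PATH_CM = 1.0          # standard cuvette path length (assumed)

def triclosan_molarity(a280, epsilon=EPSILON_280, path_cm=PATH_CM):
    return a280 / (epsilon * path_cm)

a280 = 0.084                       # hypothetical absorbance reading
c = triclosan_molarity(a280)
print(f"A280 = {a280} -> {c * 1e6:.0f} uM triclosan")   # -> 20 uM
```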
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Sequence-specific Labeling of Nucleic Acids and Proteins with Methyltransferases and Cofactor Analogues
Authors: Gisela Maria Hanz, Britta Jung, Anna Giesbertz, Matyas Juhasz, Elmar Weinhold.
Institutions: RWTH Aachen University.
S-Adenosyl-l-methionine (AdoMet or SAM)-dependent methyltransferases (MTase) catalyze the transfer of the activated methyl group from AdoMet to specific positions in DNA, RNA, proteins and small biomolecules. This natural methylation reaction can be expanded to a wide variety of alkylation reactions using synthetic cofactor analogues. Replacement of the reactive sulfonium center of AdoMet with an aziridine ring leads to cofactors which can be coupled with DNA by various DNA MTases. These aziridine cofactors can be equipped with reporter groups at different positions of the adenine moiety and used for Sequence-specific Methyltransferase-Induced Labeling of DNA (SMILing DNA). As a typical example we give a protocol for biotinylation of pBR322 plasmid DNA at the 5’-ATCGAT-3’ sequence with the DNA MTase M.BseCI and the aziridine cofactor 6BAz in one step. Extension of the activated methyl group with unsaturated alkyl groups results in another class of AdoMet analogues which are used for methyltransferase-directed Transfer of Activated Groups (mTAG). Since the extended side chains are activated by the sulfonium center and the unsaturated bond, these cofactors are called double-activated AdoMet analogues. These analogues not only function as cofactors for DNA MTases, like the aziridine cofactors, but also for RNA, protein and small molecule MTases. They are typically used for enzymatic modification of MTase substrates with unique functional groups which are labeled with reporter groups in a second chemical step. This is exemplified in a protocol for fluorescence labeling of histone H3 protein. A small propargyl group is transferred from the cofactor analogue SeAdoYn to the protein by the histone H3 lysine 4 (H3K4) MTase Set7/9 followed by click labeling of the alkynylated histone H3 with TAMRA azide. MTase-mediated labeling with cofactor analogues is an enabling technology for many exciting applications including identification and functional study of MTase substrates as well as DNA genotyping and methylation detection.
Biochemistry, Issue 93, S-adenosyl-l-methionine, AdoMet, SAM, aziridine cofactor, double activated cofactor, methyltransferase, DNA methylation, protein methylation, biotin labeling, fluorescence labeling, SMILing, mTAG
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Authors: F. Mark Dunning, Timothy M. Piazza, Füsûn N. Zeytin, Ward C. Tucker.
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: The first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes and includes discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that have catapulted the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
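As a companion to the reaction set-up objective above, the sketch below encodes a generic three-step Taq cycling program as data. The temperatures, times, and rules of thumb (annealing roughly 5 °C below primer Tm, about 1 min/kb extension) are common textbook defaults, not values from this article, and must be tuned to the specific primers and template.

```python
# Generic three-step cycling program for a conventional Taq-based PCR,
# expressed as data. Defaults are textbook rules of thumb, not protocol values.

def cycling_program(primer_tm_c, amplicon_kb, cycles=30):
    annealing_c = primer_tm_c - 5          # rule of thumb: ~5 C below primer Tm
    extension_s = int(60 * amplicon_kb)    # ~1 min per kb for Taq polymerase
    return {
        "initial_denaturation": (95, 120),           # (deg C, seconds)
        "cycles": cycles,
        "per_cycle": [
            ("denaturation", 95, 30),
            ("annealing", annealing_c, 30),
            ("extension", 72, max(extension_s, 30)),
        ],
        "final_extension": (72, 300),
        "hold": (4, None),
    }

# Example: primers with Tm ~60 C amplifying a 1.5 kb product.
print(cycling_program(primer_tm_c=60, amplicon_kb=1.5))
```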
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Identification and Characterization of Protein Glycosylation using Specific Endo- and Exoglycosidases
Authors: Paula E. Magnelli, Alicia M. Bielik, Ellen P. Guthrie.
Institutions: New England Biolabs.
Glycosylation, the addition of covalently linked sugars, is a major post-translational modification of proteins that can significantly affect processes such as cell adhesion, molecular trafficking, clearance, and signal transduction1-4. In eukaryotes, the most common glycosylation modifications in the secretory pathway are additions at consensus asparagine residues (N-linked) or at serine or threonine residues (O-linked) (Figure 1). Initiation of N-glycan synthesis is highly conserved in eukaryotes, while the end products can vary greatly among different species, tissues, or proteins. Some glycans remain unmodified ("high mannose N-glycans") or are further processed in the Golgi ("complex N-glycans"). Greater diversity is found for O-glycans, which start with a common N-Acetylgalactosamine (GalNAc) residue in animal cells but differ in lower organisms1. The detailed analysis of the glycosylation of proteins is a field unto itself and requires extensive resources and expertise to execute properly. However, a variety of available enzymes that remove sugars (glycosidases) makes it possible to get a general idea of the glycosylation status of a protein in a standard laboratory setting. Here we illustrate the use of glycosidases for the analysis of a model glycoprotein: recombinant human chorionic gonadotropin beta (hCGβ), which carries two N-glycans and four O-glycans5. The technique requires only simple instrumentation and typical consumables, and it can be readily adapted to the analysis of multiple glycoprotein samples. Several enzymes can be used in parallel to study a glycoprotein. PNGase F is able to remove almost all types of N-linked glycans6,7. For O-glycans, there is no available enzyme that can cleave an intact oligosaccharide from the protein backbone. Instead, O-glycans are trimmed by exoglycosidases to a short core, which is then easily removed by O-Glycosidase. The Protein Deglycosylation Mix contains PNGase F, O-Glycosidase, Neuraminidase (sialidase), β1-4 Galactosidase, and β-N-Acetylglucosaminidase. It is used to simultaneously remove N-glycans and some O-glycans8. Finally, the Deglycosylation Mix was supplemented with a mixture of other exoglycosidases (α-N-Acetylgalactosaminidase, α1-2 Fucosidase, α1-3,6 Galactosidase, and β1-3 Galactosidase), which help remove otherwise resistant monosaccharides that could be present in certain O-glycans. SDS-PAGE/Coomassie blue staining is used to visualize differences in protein migration before and after glycosidase treatment. In addition, a sugar-specific staining method, ProQ Emerald-300, shows diminished signal as glycans are successively removed. This protocol is designed for the analysis of small amounts of glycoprotein (0.5 to 2 μg), although enzymatic deglycosylation can be scaled up to accommodate larger quantities of protein as needed.
Molecular Biology , Issue 58, Glycoprotein, N-glycan, O-glycan, PNGase F, O-glycosidase, deglycosylation, glycosidase
Pyrosequencing: A Simple Method for Accurate Genotyping
Authors: Cristi King, Tiffany Scott-Horton.
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing® is one of the most thorough yet simple methods to date used to analyze polymorphisms. It also has the ability to identify tri-allelic polymorphisms, indels, and short-repeat polymorphisms, along with determining allele percentages for methylation or pooled sample assessment. In addition, there is a standardized control sequence that provides internal quality control. This method has led to rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR
BioMEMS and Cellular Biology: Perspectives and Applications
Authors: Albert Folch.
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, the traditional cell culture methodology essentially consisting of the immersion of a large population of cells in a homogeneous fluid medium and on a homogeneous flat substrate has become increasingly limiting both from a fundamental and practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate, and the medium composition, as well as the neighboring cell type in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well-suited for the development of fast, low-cost in vitro systems that allow for high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.