Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins; however, until now only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools for characterizing the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membranes using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, employing peptide arrays, we showed that NSD1 methylates K44 of H4 rather than the reported H4K20, and that H1.5K168 is highly preferred over the previously known substrate H3K36. Hence, peptide arrays are powerful tools for the biochemical characterization of PKMTs.
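The specificity readout from such an array can be summarized as a positional preference profile. The sketch below is a minimal, hypothetical illustration: spot intensities for peptides varying one residue at a time around the target lysine are normalized per position, so the preferred residue at each position scores 1.0. All residue names and intensities are invented for illustration.

```python
# Hypothetical sketch: derive a positional specificity profile for a PKMT
# from peptide-array spot intensities. Intensities are normalized per
# position so that the most-methylated residue scores 1.0.

def specificity_profile(spots):
    """spots: {position: {residue: intensity}} -> normalized profile."""
    profile = {}
    for pos, intensities in spots.items():
        top = max(intensities.values())
        profile[pos] = {aa: (val / top if top else 0.0)
                        for aa, val in intensities.items()}
    return profile

def preferred_residue(profile, pos):
    """Residue with the highest normalized methylation signal at pos."""
    return max(profile[pos], key=profile[pos].get)

if __name__ == "__main__":
    # Invented intensities for positions -1 and +1 relative to the target lysine.
    spots = {
        -1: {"R": 1200.0, "A": 300.0, "G": 150.0},
        +1: {"S": 900.0, "T": 850.0, "P": 90.0},
    }
    prof = specificity_profile(spots)
    print(preferred_residue(prof, -1))  # R
```

Scanning such a profile against the proteome is how candidate non-histone substrates are then short-listed for validation.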
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All predictions are tagged with a confidence score that estimates their accuracy in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude template proteins during the structure assembly simulations. Such structural information can be collected by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination based methods that assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Nucleoside Triphosphates - From Synthesis to Biochemical Characterization
Institutions: University of Bern.
The traditional strategy for the introduction of chemical functionalities into nucleic acids is solid-phase synthesis, appending suitably modified phosphoramidite precursors to the nascent chain. However, the conditions used during synthesis and the restriction to rather short sequences hamper the applicability of this methodology. On the other hand, modified nucleoside triphosphates are activated building blocks that have been employed for the mild introduction of numerous functional groups into nucleic acids, a strategy that paves the way for the use of modified nucleic acids in a wide-ranging palette of practical applications such as functional tagging and the generation of ribozymes and DNAzymes. One of the major challenges resides in the intricacy of the methodology leading to the isolation and characterization of these nucleoside analogues.
In this video article, we present a detailed protocol for the synthesis of these modified analogues using phosphorus(III)-based reagents. In addition, the procedure for their biochemical characterization is described, with special emphasis on primer extension reactions and TdT tailing polymerization. This detailed protocol will be of use for the crafting of modified dNTPs and their further use in chemical biology.
Chemistry, Issue 86, Nucleic acid analogues, Bioorganic Chemistry, PCR, primer extension reactions, organic synthesis, PAGE, HPLC, nucleoside triphosphates
Mouse Genome Engineering Using Designer Nucleases
Institutions: University of Zurich, University of Minnesota.
Transgenic mice carrying site-specific genome modifications (knockout, knock-in) are of vital importance for dissecting complex biological systems as well as for modeling human diseases and testing therapeutic strategies. Recent advances in the use of designer nucleases such as zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) 9 system for site-specific genome engineering open the possibility to perform rapid targeted genome modification in virtually any laboratory species without the need to rely on embryonic stem (ES) cell technology. A genome editing experiment typically starts with identification of designer nuclease target sites within a gene of interest, followed by construction of custom DNA-binding domains to direct nuclease activity to the investigator-defined genomic locus. Designer nuclease plasmids are transcribed in vitro to generate mRNA for microinjection of fertilized mouse oocytes. Here, we provide a protocol for achieving targeted genome modification by direct injection of TALEN mRNA into fertilized mouse oocytes.
Genetics, Issue 86, Oocyte microinjection, Designer nucleases, ZFN, TALEN, Genome Engineering
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
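The core step shared by the semi-automated and automated approaches above is labeling voxels by intensity and grouping them into discrete features. The toy sketch below illustrates that idea on a tiny 2D grid (real EM volumes are 3D, noisy, and far larger); the grid values and threshold are invented.

```python
# Minimal illustration of the intensity-thresholding step used in
# semi-automated segmentation: voxels above a cutoff become foreground and
# are grouped into 4-connected components ("features"). Toy 2D data only.

def segment(grid, threshold):
    """Label 4-connected components of cells with intensity > threshold."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and labels[r][c] == 0:
                current += 1                     # new feature found
                stack = [(r, c)]
                while stack:                     # flood fill this feature
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and grid[y][x] > threshold and labels[y][x] == 0:
                        labels[y][x] = current
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return labels, current

if __name__ == "__main__":
    grid = [
        [9, 9, 0, 0],
        [9, 0, 0, 8],
        [0, 0, 8, 8],
    ]
    labels, n = segment(grid, threshold=5)
    print(n)  # two separate features
```

In practice this raw labeling is only a starting point; the triage scheme decides whether manual tracing, such thresholding, or a custom algorithm is appropriate for a given data set.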
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and function is therefore essential for a complete understanding of how proteins fulfill their great variety of tasks. One way to study the conformational changes a protein undergoes while progressing through its functional cycle is hydrogen exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by, e.g., crystallography. It is used to study protein folding and unfolding, binding of small-molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe, as an example, how to reveal the interaction interface of two proteins in a complex.
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Structure and Coordination Determination of Peptide-metal Complexes Using 1D and 2D 1H NMR
Institutions: The Hebrew University of Jerusalem, The Hebrew University of Jerusalem.
Copper (I) binding by metallochaperone transport proteins prevents copper oxidation and release of the toxic ions that may participate in harmful redox reactions. The Cu (I) complex of the peptide model of a Cu (I) binding metallochaperone protein, which includes the sequence MTCSGCSRPG (underlined is conserved), was determined in solution under inert conditions by NMR spectroscopy.
NMR is a widely accepted technique for the determination of solution structures of proteins and peptides. Because it is often difficult to obtain single crystals suitable for X-ray crystallography, the NMR technique is extremely valuable, especially as it provides information on the solution state rather than the solid state. Herein we describe all steps that are required for full three-dimensional structure determination by NMR. The protocol includes sample preparation in an NMR tube, 1D and 2D data collection and processing, peak assignment and integration, molecular mechanics calculations, and structure analysis. Importantly, the analysis was first conducted without any preset metal-ligand bonds, to assure a reliable structure determination in an unbiased manner.
Chemistry, Issue 82, solution structure determination, NMR, peptide models, copper-binding proteins, copper complexes
Steady-state, Pre-steady-state, and Single-turnover Kinetic Measurement for DNA Glycosylase Activity
Institutions: NIEHS, National Institutes of Health.
Human 8-oxoguanine DNA glycosylase (OGG1) excises the mutagenic oxidative DNA lesion 8-oxo-7,8-dihydroguanine (8-oxoG) from DNA. Kinetic characterization of OGG1 is undertaken to measure the rates of 8-oxoG excision and product release. When the OGG1 concentration is lower than that of the substrate DNA, time courses of product formation are biphasic; a rapid exponential phase (i.e., burst) of product formation is followed by a linear steady-state phase. The initial burst of product formation corresponds to the concentration of enzyme properly engaged on the substrate, and the burst amplitude depends on the concentration of enzyme. The first-order rate constant of the burst corresponds to the intrinsic rate of 8-oxoG excision, and the slower steady-state rate measures the rate of product release (product DNA dissociation rate constant, koff). Here, we describe steady-state, pre-steady-state, and single-turnover approaches to isolate and measure specific steps during OGG1 catalytic cycling. A fluorescently labeled lesion-containing oligonucleotide and purified OGG1 are used to facilitate precise kinetic measurements. Since low enzyme concentrations are used for steady-state measurements, manual mixing of reagents and quenching of the reaction can be performed to ascertain the steady-state rate (koff). Additionally, extrapolation of the steady-state rate to a point on the ordinate at zero time indicates that a burst of product formation occurred during the first turnover (i.e., the y-intercept is positive). The first-order rate constant of the exponential burst phase can be measured using a rapid mixing and quenching technique that examines the amount of product formed at short time intervals (<1 sec) before the steady-state phase, and corresponds to the rate of 8-oxoG excision (i.e., chemistry). The chemical step can also be measured using a single-turnover approach where catalytic cycling is prevented by saturating substrate DNA with enzyme (E>S). These approaches can measure elementary rate constants that influence the efficiency of removal of a DNA lesion.
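The biphasic time course described above follows the standard two-step burst model, P(t) = A(1 - exp(-k_burst*t)) + A*k_off*t, where A is the burst amplitude (engaged enzyme), k_burst the intrinsic excision rate, and A*k_off the steady-state slope. The sketch below shows why extrapolating the linear phase back to t = 0 recovers the burst amplitude; rate constants are illustrative, not measured OGG1 values.

```python
import math

# Burst-kinetics sketch: P(t) = A*(1 - exp(-k_burst*t)) + A*k_off*t.
# A = burst amplitude (nM), k_burst = excision rate, k_off = release rate.
# Values below are invented for illustration.

def product(t, amplitude, k_burst, k_off):
    """Product concentration at time t under the two-step burst model."""
    return amplitude * (1.0 - math.exp(-k_burst * t)) + amplitude * k_off * t

def steady_state_intercept(amplitude, k_burst, k_off, t1, t2):
    """Extrapolate the linear steady-state phase back to t = 0.

    At t >> 1/k_burst the exponential has decayed, so the line through two
    late time points has slope A*k_off and y-intercept ~A (the burst)."""
    p1 = product(t1, amplitude, k_burst, k_off)
    p2 = product(t2, amplitude, k_burst, k_off)
    slope = (p2 - p1) / (t2 - t1)
    return p1 - slope * t1

if __name__ == "__main__":
    A, k_burst, k_off = 50.0, 2.0, 0.01   # nM, s^-1, s^-1 (illustrative)
    print(round(steady_state_intercept(A, k_burst, k_off, 100.0, 200.0), 2))
```

The positive intercept equals the engaged-enzyme concentration, which is exactly the diagnostic used above to confirm a first-turnover burst.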
Chemistry, Issue 78, Biochemistry, Genetics, Molecular Biology, Microbiology, Structural Biology, Chemical Biology, Eukaryota, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Enzymes and Coenzymes, Life Sciences (General), enzymology, rapid quench-flow, active site titration, steady-state, pre-steady-state, single-turnover, kinetics, base excision repair, DNA glycosylase, 8-oxo-7,8-dihydroguanine, 8-oxoG, sequencing
RNA Secondary Structure Prediction Using High-throughput SHAPE
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables that are themselves converted into pseudo-energy constraints used in the RNAStructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
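The reactivity-to-constraint conversion mentioned above is commonly done with a linear-log relation, dG_SHAPE(i) = m*ln(reactivity_i + 1) + b. The slope and intercept below (m = 2.6, b = -0.8 kcal/mol) are widely used defaults for this conversion, but treat them as an assumption and confirm against the documentation of the RNAStructure version you run.

```python
import math

# Sketch: convert per-nucleotide SHAPE reactivities into pseudo-free-energy
# constraints, dG(i) = m*ln(reactivity + 1) + b. Parameters m = 2.6 and
# b = -0.8 kcal/mol are commonly used defaults (an assumption here, not
# taken from this protocol). Negative reactivities mark missing data and
# contribute no penalty.

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Pseudo-energy term (kcal/mol) for one nucleotide."""
    if reactivity < 0:            # e.g. -999 flags no data
        return 0.0
    return m * math.log(reactivity + 1.0) + b

if __name__ == "__main__":
    for r in (0.0, 0.5, 2.0, -999.0):
        print(round(shape_pseudo_energy(r), 3))
```

High reactivity (flexible, likely single-stranded nucleotides) thus adds an energetic penalty to pairing that nucleotide, which is how the experimental probing steers the folding algorithm.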
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ transport by the Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as the expression system, we determine the amount of Rb+ transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed, for example, quantitative determination of the 3Na+ transport stoichiometry of the Na+,K+-ATPase, and enabled for the first time investigation of the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes.
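The stoichiometry determination above rests on simple charge accounting: for the Na+,K+-ATPase, 3 Na+ out and 2 K+ (Rb+) in per cycle move one net elementary charge, so the ratio of AAS-measured Rb+ ions to net charges from the integrated TEVC pump current should approach 2. The sketch below illustrates that arithmetic; the uptake and charge numbers are invented for illustration.

```python
# Illustrative calculation relating AAS-measured Rb+ uptake to the net
# charge moved by an electrogenic pump under TEVC. For a 3Na+/2K+ cycle,
# one net elementary charge moves per cycle, so ions-per-net-charge ~ 2.
# All measurement values are invented.

FARADAY = 96485.0  # C/mol

def ions_per_net_charge(rb_uptake_pmol, integrated_current_uC):
    """Ratio of transported Rb+ ions to net elementary charges moved."""
    rb_mol = rb_uptake_pmol * 1e-12                 # pmol -> mol
    charge_mol = (integrated_current_uC * 1e-6) / FARADAY  # mol of charges
    return rb_mol / charge_mol

if __name__ == "__main__":
    # 20 pmol Rb+ taken up while 0.965 uC of pump current was integrated:
    print(round(ions_per_net_charge(20.0, 0.965), 2))
```

A ratio near 2 under voltage clamp is consistent with the canonical 3Na+/2K+ cycle; for the electroneutral H+,K+-ATPase, no net pump current accompanies the measured Rb+ uptake.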
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq polymerase.

PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
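A recurring step in primer design is estimating the melting temperature (Tm). Two textbook estimates are sketched below: the Wallace rule (2 °C per A/T, 4 °C per G/C) for short primers, and a basic GC-content formula for longer ones. Both are rough guides only; real designs should be checked with nearest-neighbor calculators.

```python
# Quick primer Tm estimates used during primer design. The Wallace rule is
# a rough guide for primers under ~14 nt; the GC formula is a common
# estimate for longer primers. Neither replaces nearest-neighbor methods.

def tm_wallace(primer):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), in degrees C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer):
    """Basic GC-content estimate: Tm = 64.9 + 41*(GC - 16.4)/N."""
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(p)

if __name__ == "__main__":
    print(tm_wallace("ATGCATGCATGC"))                 # 12-mer: 2*6 + 4*6 = 36
    print(round(tm_gc("ATGCATGCATGCATGCATGC"), 1))    # 20-mer, 50% GC
```

Keeping the two primers of a pair within a few degrees of each other, as these estimates make easy to check, is one of the optimization strategies this protocol covers.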
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
The ITS2 Database
Institutions: University of Würzburg, University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8.

The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold12 (direct fold) results in a correct, four-helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by a direct fold.

The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure.
In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Isolation of Fidelity Variants of RNA Viruses and Characterization of Virus Mutation Frequency
Institutions: Institut Pasteur .
RNA viruses use RNA-dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measurement of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity-altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity-altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
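The mutation-frequency readout at the heart of this method is simple arithmetic: mutations counted across sequenced clones divided by total nucleotides sequenced, conventionally normalized per 10,000 nt. The sketch below shows that calculation; the clone counts and amplicon length are invented, not data from this protocol.

```python
# Sketch of the mutation-frequency calculation used to compare fidelity
# variants: total mutations / total nucleotides sequenced, reported per
# 10,000 nt. All counts below are invented for illustration.

def mutation_frequency(n_mutations, n_clones, clone_length, per=10_000):
    """Mutations per `per` nucleotides sequenced."""
    total_nt = n_clones * clone_length
    return n_mutations * per / total_nt

if __name__ == "__main__":
    # e.g. 24 mutations found across 96 clones of an 800-nt amplicon:
    print(mutation_frequency(24, 96, 800))
```

Comparing this frequency between a candidate variant and wild-type virus, ideally with enough clones for statistical significance, is what flags a polymerase change as fidelity-altering.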
Immunology, Issue 52, Polymerase fidelity, RNA virus, mutation frequency, mutagen, RNA polymerase, viral evolution
High Throughput Screening of Fungal Endoglucanase Activity in Escherichia coli
Institutions: California Institute of Technology, California Institute of Technology.
Cellulase enzymes (endoglucanases, cellobiohydrolases, and β-glucosidases) hydrolyze cellulose into component sugars, which in turn can be converted into fuel alcohols1. The potential for enzymatic hydrolysis of cellulosic biomass to provide renewable energy has intensified efforts to engineer cellulases for economical fuel production2. Of particular interest are fungal cellulases3-8, which are already being used industrially for food and textile processing.
Identifying active variants among a library of mutant cellulases is critical to the engineering process; active mutants can be further tested for improved properties and/or subjected to additional mutagenesis. Efficient engineering of fungal cellulases has been hampered by a lack of genetic tools for the native organisms and by difficulties in expressing the enzymes in heterologous hosts. Recently, Morikawa and coworkers developed a method for expressing in E. coli the catalytic domains of endoglucanases from H. jecorina3,9, an important industrial fungus with the capacity to secrete cellulases in large quantities. Functional E. coli expression has also been reported for cellulases from other fungi, including Macrophomina phaseolina10 and Phanerochaete chrysosporium11-12.
We present a method for high throughput screening of fungal endoglucanase activity in E. coli (Fig. 1). This method uses the common microbial dye Congo Red (CR) to visualize enzymatic degradation of carboxymethyl cellulose (CMC) by cells growing on solid medium. The activity assay requires inexpensive reagents, minimal manipulation, and gives unambiguous results as zones of degradation ("halos") at the colony site. Although a quantitative measure of enzymatic activity cannot be determined by this method, we have found that halo size correlates with total enzymatic activity in the cell. Further characterization of individual positive clones will determine relative protein fitness.
Traditional bacterial whole-cell CMC/CR activity assays13 involve pouring agar containing CMC onto colonies, which is subject to cross-contamination, or incubating cultures in CMC agar wells, which is less amenable to large-scale experimentation. Here we report an improved protocol that modifies existing wash methods14 for cellulase activity: cells grown on CMC agar plates are removed prior to CR staining. Our protocol significantly reduces cross-contamination and is highly scalable, allowing the rapid screening of thousands of clones. In addition to H. jecorina enzymes, we have expressed and screened endoglucanase variants from Thermoascus aurantiacus and Penicillium decumbens (shown in Figure 2), suggesting that this protocol is applicable to enzymes from a range of organisms.
Molecular Biology, Issue 54, cellulase, endoglucanase, CMC, Congo Red
In vitro Reconstitution of the Active T. castaneum Telomerase
Institutions: University of Pennsylvania.
Efforts to isolate the catalytic subunit of telomerase, TERT, in quantities sufficient for structural studies have met with limited success for more than a decade. Here, we present methods for the isolation of the recombinant Tribolium castaneum TERT (TcTERT) and the reconstitution of the active T. castaneum telomerase ribonucleoprotein (RNP) complex in vitro.
Telomerase is a specialized reverse transcriptase1 that adds short DNA repeats, called telomeres, to the 3' end of linear chromosomes2, which serve to protect them from end-to-end fusion and degradation. Following DNA replication, a short segment is lost at the end of each chromosome3, and without telomerase, cells continue dividing until eventually reaching their Hayflick limit4. Additionally, telomerase is dormant in most adult somatic cells5, but is active in cancer cells6, where it promotes cell immortality7.
The minimal telomerase enzyme consists of two core components: the protein subunit (TERT), which comprises the catalytic subunit of the enzyme, and an integral RNA component (TER), which contains the template TERT uses to synthesize telomeres8,9. Prior to 2008, only structures of individual telomerase domains had been solved10,11. A major breakthrough in this field came from the determination of the crystal structure of the active12 catalytic subunit of T. castaneum.
Here, we present methods for producing large quantities of the active, soluble TcTERT for structural and biochemical studies, and for the reconstitution of the telomerase RNP complex in vitro for telomerase activity assays. An overview of the experimental methods used is shown in Figure 1.
Molecular Biology, Issue 53, Telomerase, protein expression, purification, chromatography, RNA isolation, TRAP
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we use a web version of SCOPE2 to examine genes involved in telomere maintenance. SCOPE has been incorporated into at least two other motif-finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (e.g., ACCGGT); PRISM10, which finds degenerate motifs (e.g., ASCGWT); and SPACER11, which finds longer bipartite motifs (e.g., ACCnnnnnnnnGGT). Each algorithm has been optimized to find its corresponding type of motif; together, they allow SCOPE to perform extremely well.
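To make the degenerate-motif idea concrete, the IUPAC ambiguity codes that appear in PRISM- and SPACER-style consensus motifs (S = C/G, W = A/T, n = any base) can be expanded into a regular expression and matched against a sequence. This is a minimal sketch of the concept, independent of SCOPE's actual implementation; the example sequence is made up.

```python
import re

# Standard IUPAC nucleotide ambiguity codes; "N" matches any base, as in
# SPACER-style bipartite motifs such as ACCnnnnnnnnGGT.
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
    "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
    "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]",
}

def motif_to_regex(motif):
    """Translate an IUPAC consensus motif into a regular expression."""
    return "".join(IUPAC[base.upper()] for base in motif)

def find_motif(sequence, motif):
    """Return the 0-based start position of every motif match."""
    pattern = re.compile(motif_to_regex(motif))
    return [m.start() for m in pattern.finditer(sequence.upper())]

# The degenerate PRISM example ASCGWT (S = C/G, W = A/T):
print(find_motif("TTACCGATGG", "ASCGWT"))  # [2]
```

Note that a single degenerate consensus like ASCGWT stands for four distinct non-degenerate sequences, which is why a dedicated algorithm (PRISM) is needed to find such motifs efficiently.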
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor.
Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
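A position weight matrix like the one in SCOPE's output can be used to score candidate sites by summed log-odds. The following is an illustrative sketch only; the count matrix and pseudocount are invented for the example and do not reflect SCOPE's internal scoring.

```python
import math

# Hypothetical count matrix for a 4-bp motif: one dict per position,
# counting how often each base appears there across motif instances.
counts = [
    {"A": 8, "C": 0, "G": 1, "T": 1},
    {"A": 0, "C": 9, "G": 1, "T": 0},
    {"A": 0, "C": 1, "G": 9, "T": 0},
    {"A": 1, "C": 0, "G": 0, "T": 9},
]

def pwm_from_counts(counts, pseudocount=1.0, background=0.25):
    """Turn raw counts into a log-odds position weight matrix."""
    pwm = []
    for col in counts:
        total = sum(col.values()) + 4 * pseudocount
        pwm.append({base: math.log2((n + pseudocount) / total / background)
                    for base, n in col.items()})
    return pwm

def score_site(pwm, site):
    """Sum each base's log-odds contribution over the candidate site."""
    return sum(col[base] for col, base in zip(pwm, site))

pwm = pwm_from_counts(counts)
# The consensus site ACGT outscores a mismatched site:
print(score_site(pwm, "ACGT") > score_site(pwm, "TAAA"))  # True
```

The pseudocount keeps zero-count cells from producing infinite penalties, a standard choice when converting observed counts into a scoring matrix.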
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Concentration Determination of Nucleic Acids and Proteins Using the Micro-volume Bio-spec Nano Spectrophotometer
Institutions: Scientific Instruments.
Nucleic acid quantitation procedures have advanced significantly in the last three decades. More and more, molecular biologists require consistent small-volume analysis of nucleic acid samples for their experiments. The BioSpec-nano provides a potential solution to the inaccurate, non-reproducible results inherent in current DNA quantitation methods via specialized optics and a sensitive PDA detector. The BioSpec-nano also has automated functionality: mounting, measurement, and cleaning are performed by the instrument, thereby eliminating tedious, repetitive, and inconsistent placement of the fiber optic element and manual cleaning.
In this study, data is presented on the quantification of DNA and protein, as well as on measurement reproducibility and accuracy. Automated sample contact and rapid scanning allows measurement in three seconds, resulting in excellent throughput. Data analysis is carried out using the built-in features of the software. The formula used for calculating DNA concentration is:
Sample Concentration = DF · (OD260 − OD320) · NACF    (1)
where DF = sample dilution factor and NACF = nucleic acid concentration factor.
The nucleic acid concentration factor is set in accordance with the analyte selected1.
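Equation (1) is straightforward to apply by hand; the sketch below uses the conventional per-A260 conversion factors (50 ng/μL for dsDNA, 40 for RNA, 33 for ssDNA). These NACF values are the widely used textbook defaults, not quoted from the BioSpec-nano documentation.

```python
# Conventional per-A260 conversion factors (ng/uL); the instrument sets
# NACF from the selected analyte -- these are typical values, assumed here.
NACF = {"dsDNA": 50.0, "ssDNA": 33.0, "RNA": 40.0}

def nucleic_acid_concentration(od260, od320, dilution_factor=1.0,
                               analyte="dsDNA"):
    """Eq. (1): concentration (ng/uL) = DF * (OD260 - OD320) * NACF."""
    return dilution_factor * (od260 - od320) * NACF[analyte]

# Undiluted dsDNA sample with a small background reading at 320 nm:
print(round(nucleic_acid_concentration(0.25, 0.01), 2))  # 12.0 ng/uL
```

Subtracting OD320 removes the turbidity/background contribution before the conversion factor is applied, which is what makes the reading robust for slightly scattering samples.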
Protein concentration results can be expressed as μg/mL or as moles/L by entering e280 and molecular weight values, respectively. When residue values for Trp, Tyr, and cysteine (S-S bond) are entered in the e280Calc tab, the extinction coefficient is calculated as e280 = 5500 · (Trp residues) + 1490 · (Tyr residues) + 125 · (cysteine S-S bonds). The e280 value is used by the software for concentration calculation.
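The e280 formula combines with the Beer-Lambert law (A = ε·c·l) to reproduce both output modes. A minimal sketch; the residue counts in the example correspond to hen lysozyme (6 Trp, 3 Tyr, 4 disulfide bonds) and are used here purely for illustration.

```python
def extinction_coefficient_280(n_trp, n_tyr, n_ss):
    """e280 (M^-1 cm^-1) from residue counts, per the e280Calc formula."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_ss

def protein_concentration(a280, e280, mw=None, pathlength_cm=1.0):
    """Beer-Lambert: returns mol/L, or ug/mL when a molecular
    weight (g/mol) is supplied."""
    molar = a280 / (e280 * pathlength_cm)   # mol/L
    if mw is None:
        return molar
    return molar * mw * 1000.0              # g/L -> ug/mL

# Lysozyme-like example (6 Trp, 3 Tyr, 4 S-S bonds):
e = extinction_coefficient_280(6, 3, 4)     # 37970 M^-1 cm^-1
print(round(protein_concentration(0.5, e, mw=14300), 1))  # ~188.3 ug/mL
```

Entering only e280 yields the molar concentration; supplying the molecular weight converts it to a mass concentration, mirroring the instrument's two reporting options.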
In addition to concentration determination of nucleic acids and protein, the BioSpec-nano can be used as an ultra micro-volume spectrophotometer for many other analytes or as a standard spectrophotometer using 5 mm pathlength cells.
Molecular Biology, Issue 48, Nucleic acid quantitation, protein quantitation, micro-volume analysis, label quantitation