Genome sequencing projects have deciphered millions of protein sequences, and knowledge of their structure and function is required to understand their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority, which remain experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. Each prediction is tagged with a confidence score that estimates its accuracy in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in recent community-wide CASP experiments. More than 20,000 registered scientists from over 100 countries currently use the on-line I-TASSER server.
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of these methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Fold specificity, Binding affinity, sequencing
Specificity Analysis of Protein Lysine Methyltransferases Using SPOT Peptide Arrays
Institutions: Stuttgart University.
Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins, yet to date only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools for characterizing the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membranes using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, employing peptide arrays, we showed that NSD1 methylates K44 of H4 instead of the reported H4K20, and in addition that H1.5K168 is highly preferred as a substrate over the previously known H3K36. Hence, peptide arrays are powerful tools for the biochemical characterization of PKMTs.
Biochemistry, Issue 93, Peptide arrays, solid phase peptide synthesis, SPOT synthesis, protein lysine methyltransferases, substrate specificity profile analysis, lysine methylation
Using SecM Arrest Sequence as a Tool to Isolate Ribosome Bound Polypeptides
Institutions: Cleveland State University.
Extensive research has provided ample evidence that protein folding in the cell is a co-translational process1-5. However, the exact pathway that a polypeptide chain follows during co-translational folding to achieve its functional form is still an enigma. In order to understand this process and to determine the exact conformation of co-translational folding intermediates, it is essential to develop techniques that allow the isolation of ribosome-nascent chain complexes (RNCs) carrying nascent chains of predetermined sizes for further structural analysis.
SecM (secretion monitor) is a 170 amino acid E. coli protein that regulates expression of the downstream SecA (secretion driving) ATPase in the secM-secA operon. Nakatogawa and Ito originally found that a 17 amino acid long sequence (150-FSTPVWISQAQGIRAGP-166) in the C-terminal region of the SecM protein is sufficient and necessary to cause stalling of SecM elongation at Gly165, thereby producing peptidyl-glycyl-tRNA stably bound to the ribosomal P-site7-9. More importantly, it was found that this 17 amino acid sequence can be fused to the C-terminus of virtually any full-length and/or truncated protein, thus allowing the production of RNCs carrying nascent chains of predetermined sizes7. Thus, when fused or inserted into the target protein, the SecM stalling sequence arrests polypeptide chain elongation and generates stable RNCs both in vivo in E. coli cells and in vitro in a cell-free system. Sucrose gradient centrifugation is then used to isolate the RNCs.
The isolated RNCs can be used to analyze structural and functional features of co-translational folding intermediates. Recently, this technique has been successfully used to gain insights into the structure of several ribosome bound nascent chains10,11. Here we describe the isolation of bovine Gamma-B Crystallin RNCs fused to SecM and generated in an in vitro translation system.
Molecular Biology, Issue 64, Ribosome, nascent polypeptides, co-translational protein folding, translational arrest, in vitro translation
High Throughput Screening of Fungal Endoglucanase Activity in Escherichia coli
Institutions: California Institute of Technology.
Cellulase enzymes (endoglucanases, cellobiohydrolases, and β-glucosidases) hydrolyze cellulose into component sugars, which in turn can be converted into fuel alcohols1. The potential for enzymatic hydrolysis of cellulosic biomass to provide renewable energy has intensified efforts to engineer cellulases for economical fuel production2. Of particular interest are fungal cellulases3-8, which are already used industrially for food and textile processing.
Identifying active variants among a library of mutant cellulases is critical to the engineering process; active mutants can be further tested for improved properties and/or subjected to additional mutagenesis. Efficient engineering of fungal cellulases has been hampered by a lack of genetic tools for the native organisms and by difficulties in expressing the enzymes in heterologous hosts. Recently, Morikawa and coworkers developed a method for expressing in E. coli the catalytic domains of endoglucanases from H. jecorina3,9, an important industrial fungus with the capacity to secrete cellulases in large quantities. Functional E. coli expression has also been reported for cellulases from other fungi, including Macrophomina phaseolina10 and Phanerochaete chrysosporium11-12.
We present a method for high throughput screening of fungal endoglucanase activity in E. coli (Fig. 1). This method uses the common microbial dye Congo Red (CR) to visualize enzymatic degradation of carboxymethyl cellulose (CMC) by cells growing on solid medium. The activity assay requires inexpensive reagents and minimal manipulation, and gives unambiguous results as zones of degradation ("halos") at the colony site. Although a quantitative measure of enzymatic activity cannot be determined by this method, we have found that halo size correlates with total enzymatic activity in the cell. Further characterization of individual positive clones will determine relative protein fitness.
Traditional bacterial whole cell CMC/CR activity assays13 involve pouring agar containing CMC onto colonies, which is subject to cross-contamination, or incubating cultures in CMC agar wells, which is less amenable to large-scale experimentation. Here we report an improved protocol that modifies existing wash methods14 for cellulase activity: cells grown on CMC agar plates are removed prior to CR staining. Our protocol significantly reduces cross-contamination and is highly scalable, allowing the rapid screening of thousands of clones. In addition to H. jecorina enzymes, we have expressed and screened endoglucanase variants from Thermoascus aurantiacus and Penicillium decumbens (shown in Figure 2), suggesting that this protocol is applicable to enzymes from a range of organisms.
Molecular Biology, Issue 54, cellulase, endoglucanase, CMC, Congo Red
Nucleoside Triphosphates - From Synthesis to Biochemical Characterization
Institutions: University of Bern.
The traditional strategy for the introduction of chemical functionalities into nucleic acids is solid-phase synthesis, in which suitably modified phosphoramidite precursors are appended to the nascent chain. However, the conditions used during synthesis and the restriction to rather short sequences hamper the applicability of this methodology. On the other hand, modified nucleoside triphosphates are activated building blocks that have been employed for the mild introduction of numerous functional groups into nucleic acids, a strategy that paves the way for the use of modified nucleic acids in a wide-ranging palette of practical applications such as functional tagging and the generation of ribozymes and DNAzymes. One of the major challenges resides in the intricacy of the methodology leading to the isolation and characterization of these nucleoside analogues.
In this video article, we present a detailed protocol for the synthesis of these modified analogues using phosphorus(III)-based reagents. In addition, the procedure for their biochemical characterization is described, with special emphasis on primer extension reactions and TdT tailing polymerization. This detailed protocol will be of use for the crafting of modified dNTPs and their further use in chemical biology.
Chemistry, Issue 86, Nucleic acid analogues, Bioorganic Chemistry, PCR, primer extension reactions, organic synthesis, PAGE, HPLC, nucleoside triphosphates
Methods to Identify the NMR Resonances of the 13C-Dimethyl N-terminal Amine on Reductively Methylated Proteins
Institutions: Louisiana State University.
Nuclear magnetic resonance (NMR) spectroscopy is a proven technique for protein structure and dynamics studies. To study proteins with NMR, stable magnetic isotopes are typically incorporated metabolically to improve sensitivity and allow for sequential resonance assignment. Reductive 13C-methylation is an alternative labeling method for proteins that are not amenable to bacterial host over-expression, the most common method of isotope incorporation. Reductive 13C-methylation is a chemical reaction performed under mild conditions that modifies a protein's primary amino groups (lysine ε-amino groups and the N-terminal α-amino group) to 13C-dimethylamino groups. The structure and function of most proteins are not altered by the modification, making it a viable alternative to metabolic labeling. Because reductive 13C-methylation adds sparse, isotopic labels, traditional methods of assigning the NMR signals are not applicable. An alternative assignment method using mass spectrometry (MS) to aid in the assignment of protein 13C-dimethylamine NMR signals has been developed. The method relies on partial and different amounts of 13C-labeling at each primary amino group. One limitation of the method arises when the protein's N-terminal residue is a lysine, because the α- and ε-dimethylamino groups of Lys1 cannot be individually measured with MS. To circumvent this limitation, two methods are described to identify the NMR resonances of the 13C-dimethylamines associated with both the N-terminal α-amine and the side chain ε-amine. The NMR signals of the N-terminal α-dimethylamine and the side chain ε-dimethylamine of hen egg white lysozyme, Lys1, are identified in 1H-13C heteronuclear single-quantum coherence spectra.
Chemistry, Issue 82, Boranes, Formaldehyde, Dimethylamines, Tandem Mass Spectrometry, nuclear magnetic resonance, MALDI-TOF, Reductive methylation, lysozyme, dimethyllysine, mass spectrometry, NMR
A High Throughput MHC II Binding Assay for Quantitative Analysis of Peptide Epitopes
Institutions: Dartmouth College, University of Rhode Island.
Biochemical assays with recombinant human MHC II molecules can provide rapid, quantitative insights into immunogenic epitope identification, deletion, or design1,2. Here, a peptide-MHC II binding assay is scaled to 384-well format. The scaled-down protocol reduces reagent costs by 75% and is higher throughput than previously described 96-well protocols1,3-5. Specifically, the experimental design permits robust and reproducible analysis of up to 15 peptides against one MHC II allele per 384-well ELISA plate. Using a single liquid handling robot, this method allows one researcher to analyze approximately ninety test peptides in triplicate over a range of eight concentrations and four MHC II allele types in less than 48 hr. Others working in the fields of protein deimmunization or vaccine design and development may find the protocol useful in facilitating their own work. In particular, the step-by-step instructions and the visual format of JoVE should allow other users to quickly and easily establish this methodology in their own labs.
Biochemistry, Issue 85, Immunoassay, Protein Immunogenicity, MHC II, T cell epitope, High Throughput Screen, Deimmunization, Vaccine Design
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing, while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: the first part is preparation of the samples for testing; the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix; and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation, with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes, and include discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Designing Silk-silk Protein Alloy Materials for Biomedical Applications
Institutions: Rowan University, Cooper Medical School of Rowan University.
Fibrous proteins display different sequences and structures that have been used for various applications in biomedical fields such as biosensors, nanomedicine, tissue regeneration, and drug delivery. Designing materials based on the molecular-scale interactions between these proteins will help generate new multifunctional protein alloy biomaterials with tunable properties. Such alloy material systems also provide advantages over traditional synthetic polymers due to the materials' biodegradability, biocompatibility, and tunability in the body. This article uses protein blends of wild tussah silk (Antheraea pernyi) and domestic mulberry silk (Bombyx mori) as an example to provide useful protocols regarding these topics, including how to predict protein-protein interactions by computational methods, how to produce protein alloy solutions, how to verify alloy systems by thermal analysis, and how to fabricate variable alloy materials including optical materials with diffraction gratings, electric materials with circuit coatings, and pharmaceutical materials for drug release and delivery. These methods can provide important information for designing the next generation of multifunctional biomaterials based on different protein alloys.
Bioengineering, Issue 90, protein alloys, biomaterials, biomedical, silk blends, computational simulation, implantable electronic devices
Assessment of Immunologically Relevant Dynamic Tertiary Structural Features of the HIV-1 V3 Loop Crown R2 Sequence by ab initio Folding
Institutions: School of Medicine, New York University.
The antigenic diversity of HIV-1 has long been an obstacle to vaccine design, and this variability is especially pronounced in the V3 loop of the virus' surface envelope glycoprotein. We previously proposed that the crown of the V3 loop, although dynamic and sequence variable, is constrained throughout the population of HIV-1 viruses to an immunologically relevant β-hairpin tertiary structure. Importantly, there are thousands of different V3 loop crown sequences in circulating HIV-1 viruses, making 3D structural characterization of trends across the diversity of viruses difficult or impossible by crystallography or NMR. Our previous successful studies with folding of the V3 crown1,2 used the ab initio folding procedure accessible in the ICM-Pro molecular modeling software package (Molsoft LLC, La Jolla, CA) and suggested that the crown of the V3 loop, specifically from positions 10 to 22, benefits sufficiently from the flexibility and length of its flanking stems to behave to a large degree as if it were an unconstrained peptide freely folding in solution. As such, rapid ab initio folding of just this portion of the V3 loop of any individual strain of the 60,000+ circulating HIV-1 strains can be informative. Here, we folded the V3 loop of the R2 strain to gain insight into the structural basis of its unique properties. R2 bears a rare V3 loop sequence thought to be responsible for the exquisite sensitivity of this strain to neutralization by patient sera and monoclonal antibodies4,5. The strain mediates CD4-independent infection and appears to elicit broadly neutralizing antibodies. We demonstrate how evaluation of the folding results can be informative for associating the structures observed in the folding with the immunological activities observed for R2.
Infection, Issue 43, HIV-1, structure-activity relationships, ab initio simulations, antibody-mediated neutralization, vaccine design
Sequence-specific Labeling of Nucleic Acids and Proteins with Methyltransferases and Cofactor Analogues
Institutions: RWTH Aachen University.
S-Adenosyl-l-methionine (AdoMet or SAM)-dependent methyltransferases (MTases) catalyze the transfer of the activated methyl group from AdoMet to specific positions in DNA, RNA, proteins and small biomolecules. This natural methylation reaction can be expanded to a wide variety of alkylation reactions using synthetic cofactor analogues. Replacement of the reactive sulfonium center of AdoMet with an aziridine ring leads to cofactors which can be coupled with DNA by various DNA MTases. These aziridine cofactors can be equipped with reporter groups at different positions of the adenine moiety and used for Sequence-specific Methyltransferase-Induced Labeling of DNA (SMILing DNA). As a typical example, we give a protocol for biotinylation of pBR322 plasmid DNA at the 5'-ATCGAT-3' sequence with the DNA MTase M.BseCI and the aziridine cofactor 6BAz in one step. Extension of the activated methyl group with unsaturated alkyl groups results in another class of AdoMet analogues, which are used for methyltransferase-directed Transfer of Activated Groups (mTAG). Since the extended side chains are activated by the sulfonium center and the unsaturated bond, these cofactors are called double-activated AdoMet analogues. These analogues not only function as cofactors for DNA MTases, like the aziridine cofactors, but also for RNA, protein and small molecule MTases. They are typically used for enzymatic modification of MTase substrates with unique functional groups, which are labeled with reporter groups in a second chemical step. This is exemplified in a protocol for fluorescence labeling of histone H3 protein. A small propargyl group is transferred from the cofactor analogue SeAdoYn to the protein by the histone H3 lysine 4 (H3K4) MTase Set7/9, followed by click labeling of the alkynylated histone H3 with TAMRA azide. MTase-mediated labeling with cofactor analogues is an enabling technology for many exciting applications including the identification and functional study of MTase substrates as well as DNA genotyping and methylation detection.
Biochemistry, Issue 93, S-adenosyl-l-methionine, AdoMet, SAM, aziridine cofactor, double activated cofactor, methyltransferase, DNA methylation, protein methylation, biotin labeling, fluorescence labeling, SMILing, mTAG
The ITS2 Database
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was long confined to low-level phylogenetics. However, combining the ITS2 sequence with its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum energy based fold12 (direct fold) results in a correct, four-helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by direct folding.
The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure.
In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
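The Neighbor Joining step at the end of the pipeline repeatedly merges the pair of taxa that minimizes the standard Q-criterion computed from a pairwise distance matrix. A minimal sketch of that selection step is shown below; the distance matrix is invented for illustration (the database derives distances from the sequence-structure alignment):

```python
import itertools

def nj_pair(d):
    """Return the pair (i, j) that Neighbor Joining merges first:
    the pair minimizing Q(i, j) = (n - 2) * d[i][j] - r_i - r_j,
    where r_i is the sum of distances from taxon i to all others."""
    n = len(d)
    r = [sum(row) for row in d]
    best, best_q = None, float("inf")
    for i, j in itertools.combinations(range(n), 2):
        q = (n - 2) * d[i][j] - r[i] - r[j]
        if q < best_q:
            best, best_q = (i, j), q
    return best

# Invented symmetric 5-taxon distance matrix; taxa 0 and 1 form the
# tightest cherry, so they are joined first.
d = [
    [0, 2, 6, 8, 7],
    [2, 0, 6, 8, 7],
    [6, 6, 0, 5, 7],
    [8, 8, 5, 0, 9],
    [7, 7, 7, 9, 0],
]
print(nj_pair(d))  # (0, 1)
```

A full Neighbor Joining run then replaces the merged pair with a new internal node, recomputes distances, and repeats until the tree is resolved; the sketch isolates only the pair-selection criterion.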
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles.
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of the technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
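A recurring optimization step is matching the melting temperatures (Tm) of the two primers. As a minimal illustration (not part of the protocol itself), the Wallace rule, Tm = 2(A+T) + 4(G+C), gives a quick first approximation for short primers of roughly 14 nt or fewer; more accurate nearest-neighbor models are used by primer design software:

```python
def wallace_tm(primer: str) -> int:
    """Estimate the melting temperature (degrees C) of a short primer
    with the Wallace rule: Tm = 2*(A+T) + 4*(G+C)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content(primer: str) -> float:
    """Fraction of G/C bases; primers are commonly designed to fall
    between roughly 0.4 and 0.6."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

# Example 12-mer: 6 A/T and 6 G/C bases -> 2*6 + 4*6 = 36 C.
print(wallace_tm("ATGCATGCATGC"))  # 36
```

Comparing the two primers' estimates (and keeping them within a few degrees of each other) is the practical use of such a rule when choosing an annealing temperature.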
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
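A DoE workflow starts from the grid of all factor-level combinations, from which the software then selects an optimal subset of runs. A minimal sketch of generating such a candidate grid is shown below; the factor names and levels are invented examples for a transient-expression setting, not the parameters used in the study:

```python
import itertools

# Hypothetical two-level factors; names and values are illustrative only.
factors = {
    "promoter": ["35S", "nos"],
    "incubation_temp_C": [22, 25],
    "plant_age_days": [35, 42],
}

def full_factorial(factors):
    """All combinations of factor levels (a 2^3 = 8-run design here).
    DoE software typically starts from such a grid and then picks an
    optimal subset (e.g. a D-optimal design) to reduce the run count."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

runs = full_factorial(factors)
print(len(runs))  # 8
```

Step-wise design augmentation, as described above, corresponds to adding further rows to this run table after the first batch of results narrows down which factors matter.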
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
RNA Secondary Structure Prediction Using High-throughput SHAPE
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables that are themselves converted into pseudo-energy constraints used in the RNAStructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico
RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
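The conversion of SHAPE reactivities into pseudo-energy constraints commonly follows a log-linear form (the default slope and intercept shown below are the widely cited values; confirm them against the RNAstructure documentation for the version in use):

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Convert a SHAPE reactivity into a pseudo-free-energy term
    (kcal/mol) applied when the nucleotide is base paired, using the
    common log-linear form dG = m * ln(reactivity + 1) + b.
    Negative reactivities conventionally mark missing data and
    contribute no term."""
    if reactivity < 0:
        return 0.0
    return m * math.log(reactivity + 1.0) + b

# Reactive (flexible) nucleotides incur a pairing penalty,
# while unreactive nucleotides receive a slight pairing bonus.
print(round(shape_pseudo_energy(1.5), 2))
print(round(shape_pseudo_energy(0.0), 2))  # -0.8
```

Because the penalty grows with reactivity, highly modified (single-stranded) positions are steered away from helices during the free-energy minimization, which is what makes the combined SHAPE + prediction approach more accurate than either method alone.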
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
Structure and Coordination Determination of Peptide-metal Complexes Using 1D and 2D 1H NMR
Institutions: The Hebrew University of Jerusalem, The Hebrew University of Jerusalem.
Copper(I) binding by metallochaperone transport proteins prevents copper oxidation and release of the toxic ions that may participate in harmful redox reactions. The Cu(I) complex of a peptide model of a Cu(I)-binding metallochaperone protein, which includes the sequence MTCSGCSRPG (containing the conserved binding motif), was determined in solution under inert conditions by NMR spectroscopy.
NMR is a widely accepted technique for the determination of solution structures of proteins and peptides. Because such peptides are often difficult to crystallize into single crystals suitable for X-ray crystallography, the NMR technique is extremely valuable, especially as it provides information on the solution state rather than the solid state. Herein we describe all the steps required for a full three-dimensional structure determination by NMR. The protocol includes sample preparation in an NMR tube, 1D and 2D data collection and processing, peak assignment and integration, molecular mechanics calculations, and structure analysis. Importantly, the analysis was first conducted without any preset metal-ligand bonds, to ensure a reliable structure determination in an unbiased manner.
Chemistry, Issue 82, solution structure determination, NMR, peptide models, copper-binding proteins, copper complexes
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to the topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that could be incorporated into a classroom or teaching lab.
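A quantity that underlies the distance-based end of this pipeline is the p-distance: the fraction of aligned sites at which two sequences differ, ignoring gapped positions. The toy alignment below is invented for illustration; real pipelines would compute such matrices (or model-corrected distances) from the multiple sequence alignment produced earlier in the workflow:

```python
# Toy alignment (sequences are illustrative only). The p-distance
# matrix computed here is the kind of input used by simple
# distance-based tree methods such as neighbor joining.
aln = {
    "taxonA": "ATGCTACGTA",
    "taxonB": "ATGTTACGTA",
    "taxonC": "ATGCTACGGA",
}

def p_distance(s1, s2):
    # Compare only sites where neither sequence has a gap character.
    pairs = [(a, b) for a, b in zip(s1, s2) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs)

names = sorted(aln)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(x, y, p_distance(aln[x], aln[y]))
```

Maximum likelihood and Bayesian methods replace this simple count with an explicit model of evolution, which is why the model-selection step precedes tree reconstruction in the protocol.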
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and its function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One possibility to study the conformational changes a protein undergoes while progressing through its functional cycle is hydrogen/deuterium exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by, e.g.,
crystallography. It is used to study protein folding and unfolding, binding of small molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe as an example how to reveal the interaction interface of two proteins in a complex.
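The core readout of an HX-MS experiment is the deuterium uptake of each peptide, derived from centroid mass shifts; a common normalization uses undeuterated and fully deuterated controls to correct for back-exchange. The masses below are hypothetical and the function is a minimal sketch of that calculation, not the pipeline used in the protocol:

```python
def relative_uptake(m_t, m_0, m_full):
    """Relative deuterium uptake of a peptide from centroid masses:
    m_0    = undeuterated control,
    m_t    = after labeling time t,
    m_full = fully deuterated control (corrects for back-exchange
             during sample handling)."""
    return (m_t - m_0) / (m_full - m_0)

# Hypothetical centroid masses (Da) for one peptide at one time point:
print(round(relative_uptake(m_t=1002.8, m_0=1000.0, m_full=1004.0), 2))  # 0.7
```

In an interface-mapping experiment like the one described, peptides whose uptake drops upon complex formation point to regions protected by the binding partner.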
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Analyzing and Building Nucleic Acid Structures with 3DNA
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamics (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server, located at https://w3dna.rutgers.edu, provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, that consists of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained based on the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. The second component extracts additional ICP-related features, such as texture information and blood amount from the CT scans, and incorporates other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
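The study built its model in RapidMiner; as a rough analogue only, the same feature-selection-plus-SVM pattern can be sketched with scikit-learn. The data here are synthetic and every parameter is illustrative, standing in for the CT-derived features (texture, blood amount) and recorded features (age, injury severity score) described above:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the feature matrix: each row is a patient,
# each column a CT-derived or demographic feature; y is a binary
# elevated-ICP label. None of this reproduces the study's data.
X, y = make_classification(n_samples=100, n_features=8,
                           n_informative=3, random_state=0)

# Feature selection followed by an SVM classifier, mirroring the
# two machine-learning stages mentioned in the abstract.
model = make_pipeline(SelectKBest(f_classif, k=3), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))  # training accuracy of the fitted pipeline
```

In practice the model would be assessed with cross-validation on held-out patients rather than training accuracy, since a pre-screening tool must generalize to unseen scans.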
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Contrast Enhanced Vessel Imaging using MicroCT
Institutions: University of Texas Health Science Center at San Antonio , University of Texas Health Science Center at San Antonio , University of Texas Health Science Center at San Antonio , University of Texas Health Science Center at San Antonio .
Microscopic computed tomography (microCT) offers high-resolution volumetric imaging of the anatomy of living small animals. However, the contrast between different soft tissues and body fluids is inherently poor in microCT images [1]. Under these circumstances, visualization of blood vessels becomes a nearly impossible task. To overcome this and to improve the visualization of blood vessels, exogenous contrast agents can be used. Herein, we present a methodology for visualizing the vascular network in a rodent model. Using a long-acting aqueous colloidal polydisperse iodinated blood-pool contrast agent, eXIA 160XL, we optimized image acquisition parameters and volume-rendering techniques for delineating blood vessels in live animals. Our findings suggest that, to achieve superior contrast between bone, soft tissue, and vessels, acquisition of multiple frames (at least 5-8 frames per view) and 360-720 views (for a full 360° rotation) was necessary. We also demonstrate the use of a two-dimensional transfer function (in which voxel color and opacity are assigned in proportion to CT value and gradient magnitude) for visualizing the anatomy and highlighting the structure of interest, the blood vessel network. This promising work lays a foundation for the qualitative and quantitative assessment of preclinical anti-angiogenesis studies using transgenic or xenograft tumor-bearing mice.
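The idea behind a two-dimensional transfer function can be sketched numerically: opacity depends jointly on the CT value (selecting contrast-enhanced blood) and on the gradient magnitude (emphasizing material boundaries). All window centers, widths, and gradient scales below are invented for illustration, not the rendering parameters used in the study:

```python
import numpy as np

def opacity_2d(ct, grad_mag, ct_center, ct_width, g_max):
    """Toy 2D transfer function: opacity rises for voxels whose CT
    value lies inside a window AND whose gradient magnitude is high.
    All parameter values passed in are illustrative only."""
    # Triangular window over CT value, peaking at ct_center.
    in_window = np.clip(1.0 - np.abs(ct - ct_center) / (ct_width / 2), 0, 1)
    # Linear ramp over gradient magnitude, saturating at g_max.
    edge = np.clip(grad_mag / g_max, 0, 1)
    return in_window * edge

ct = np.array([300.0, 300.0, 50.0])    # hypothetical CT values
grad = np.array([200.0, 10.0, 200.0])  # hypothetical gradient magnitudes
print(opacity_2d(ct, grad, ct_center=300, ct_width=100, g_max=200))
```

Only the first voxel, which is both in the contrast-agent intensity window and on a strong boundary, becomes fully opaque; homogeneous interiors and out-of-window tissue fade out, which is what isolates the vessel network in the rendering.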
Medicine, Issue 47, vessel imaging, eXIA 160XL, microCT, advanced visualization, 2DTF