JoVE Visualize
Pubmed Article
Rationalisation of the differences between APOBEC3G structures from crystallography and NMR studies by molecular dynamics simulations.
PUBLISHED: 03-29-2010
The human APOBEC3G (A3G) protein is a cellular polynucleotide cytidine deaminase that acts as a host restriction factor of retroviruses, including HIV-1 and various transposable elements. Recently, three NMR and two crystal structures of the catalytic deaminase domain of A3G have been reported, but these are in disagreement over the conformation of a terminal beta-strand, beta2, as well as the identification of a putative DNA binding site. We here report molecular dynamics simulations with all of the solved A3G catalytic domain structures, taking into account solubility enhancing mutations that were introduced during derivation of three out of the five structures. In the course of these simulations, we observed a general trend towards increased definition of the beta2 strand for those structures that have a distorted starting conformation of beta2. Solvent density maps around the protein as calculated from MD simulations indicated that this distortion is dependent on preferential hydration of residues within the beta2 strand. We also demonstrate that the identification of a pre-defined DNA binding site is prevented by the inherent flexibility of loops that determine access to the deaminase catalytic core. We discuss the implications of our analyses for the as yet unresolved structure of the full-length A3G protein and its biological functions with regard to hypermutation of DNA.
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Published: 11-03-2011
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
23 Related JoVE Articles!
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Assessing Somatic Hypermutation in Ramos B Cells after Overexpression or Knockdown of Specific Genes
Authors: Dana C. Upton, Shyam Unniraman.
Institutions: Duke University .
B cells start their life with low affinity antibodies generated by V(D)J recombination. However, upon detecting a pathogen, the variable (V) region of an immunoglobulin (Ig) gene is mutated approximately 100,000-fold more than the rest of the genome through somatic hypermutation (SHM), resulting in high affinity antibodies1,2. In addition, class switch recombination (CSR) produces antibodies with different effector functions depending on the kind of immune response that is needed for a particular pathogen. Both CSR and SHM are initiated by activation-induced cytidine deaminase (AID), which deaminates cytosine residues in DNA to produce uracils. These uracils are processed by error-prone forms of repair pathways, eventually leading to mutations and recombination1-3. Our current understanding of the molecular details of SHM and CSR comes from a combination of studies in mice, primary cells, cell lines, and cell-free experiments. Mouse models remain the gold standard, with genetic knockouts showing critical roles for many repair factors (e.g. Ung, Msh2, Msh6, Exo1, and polymerase η)4-10. However, not all genes are amenable to knockout studies. For example, knockouts of several double-strand break repair proteins are embryonically lethal or impair B-cell development11-14. Moreover, sometimes the specific function of a protein in SHM or CSR may be masked by more global defects caused by the knockout. In addition, since experiments in mice can be lengthy, altering expression of individual genes in cell lines has become an increasingly popular first step to identifying and characterizing candidate genes15-18. Ramos – a Burkitt lymphoma cell line that constitutively undergoes SHM – has been a popular cell-line model to study SHM18-24. One advantage of Ramos cells is that they have a built-in, convenient, semi-quantitative measure of SHM. Wild type cells express IgM and, as they pick up mutations, some of the mutations knock out IgM expression. Therefore, assaying IgM loss by fluorescence-activated cell sorting (FACS) provides a quick read-out for the level of SHM. A more quantitative measurement of SHM can be obtained by directly sequencing the antibody genes. Since Ramos cells are difficult to transfect, we produce stable derivatives that have increased or lowered expression of an individual gene by infecting cells with retroviral or lentiviral constructs that contain either an overexpression cassette or a short hairpin RNA (shRNA), respectively. Here, we describe how we infect Ramos cells and then use these cells to investigate the role of specific genes in SHM (Figure 1).
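The two SHM read-outs described above reduce to simple arithmetic. The sketch below is illustrative only (not part of the published protocol) and uses hypothetical numbers: the IgM-loss fraction from FACS event counts, and the mutation frequency from sequenced V-region clones.

```python
# Illustrative sketch of the two SHM read-outs; all numbers are hypothetical.

def igm_loss_fraction(igm_negative_events, total_events):
    """Semi-quantitative SHM read-out: fraction of cells that lost IgM expression."""
    return igm_negative_events / total_events

def mutation_frequency(total_mutations, clones_sequenced, region_length_bp):
    """Quantitative SHM read-out: mutations per base pair sequenced."""
    return total_mutations / (clones_sequenced * region_length_bp)

# Example with made-up values: 1,200 IgM-negative events out of 50,000 cells,
# and 46 mutations found in 96 clones of a 450 bp V-region amplicon.
print(f"IgM loss: {igm_loss_fraction(1200, 50000):.2%}")
print(f"Mutation frequency: {mutation_frequency(46, 96, 450):.2e} mutations/bp")
```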
Immunology, Issue 57, activation-induced cytidine deaminase, lentiviral infection, retroviral infection, Ramos, shRNA, somatic hypermutation
In vitro Reconstitution of the Active T. castaneum Telomerase
Authors: Anthony P. Schuller, Michael J. Harkisheimer, Emmanuel Skordalakes.
Institutions: University of Pennsylvania.
Efforts to isolate the catalytic subunit of telomerase, TERT, in sufficient quantities for structural studies, have been met with limited success for more than a decade. Here, we present methods for the isolation of the recombinant Tribolium castaneum TERT (TcTERT) and the reconstitution of the active T. castaneum telomerase ribonucleoprotein (RNP) complex in vitro. Telomerase is a specialized reverse transcriptase1 that adds short DNA repeats, called telomeres, to the 3' end of linear chromosomes2 that serve to protect them from end-to-end fusion and degradation. Following DNA replication, a short segment is lost at the end of the chromosome3 and without telomerase, cells continue dividing until eventually reaching their Hayflick Limit4. Additionally, telomerase is dormant in most somatic cells5 in adults, but is active in cancer cells6 where it promotes cell immortality7. The minimal telomerase enzyme consists of two core components: the protein subunit (TERT), which comprises the catalytic subunit of the enzyme and an integral RNA component (TER), which contains the template TERT uses to synthesize telomeres8,9. Prior to 2008, only structures for individual telomerase domains had been solved10,11. A major breakthrough in this field came from the determination of the crystal structure of the active12, catalytic subunit of T. castaneum telomerase, TcTERT1. Here, we present methods for producing large quantities of the active, soluble TcTERT for structural and biochemical studies, and the reconstitution of the telomerase RNP complex in vitro for telomerase activity assays. An overview of the experimental methods used is shown in Figure 1.
Molecular Biology, Issue 53, Telomerase, protein expression, purification, chromatography, RNA isolation, TRAP
Methods to Identify the NMR Resonances of the 13C-Dimethyl N-terminal Amine on Reductively Methylated Proteins
Authors: Kevin J. Roberson, Pamlea N. Brady, Michelle M. Sweeney, Megan A. Macnaughtan.
Institutions: Louisiana State University.
Nuclear magnetic resonance (NMR) spectroscopy is a proven technique for protein structure and dynamic studies. To study proteins with NMR, stable magnetic isotopes are typically incorporated metabolically to improve the sensitivity and allow for sequential resonance assignment. Reductive 13C-methylation is an alternative labeling method for proteins that are not amenable to bacterial host over-expression, the most common method of isotope incorporation. Reductive 13C-methylation is a chemical reaction performed under mild conditions that modifies a protein's primary amino groups (lysine ε-amino groups and the N-terminal α-amino group) to 13C-dimethylamino groups. The structure and function of most proteins are not altered by the modification, making it a viable alternative to metabolic labeling. Because reductive 13C-methylation adds sparse, isotopic labels, traditional methods of assigning the NMR signals are not applicable. An alternative assignment method using mass spectrometry (MS) to aid in the assignment of protein 13C-dimethylamine NMR signals has been developed. The method relies on partial and different amounts of 13C-labeling at each primary amino group. One limitation of the method arises when the protein's N-terminal residue is a lysine because the α- and ε-dimethylamino groups of Lys1 cannot be individually measured with MS. To circumvent this limitation, two methods are described to identify the NMR resonance of the 13C-dimethylamines associated with both the N-terminal α-amine and the side chain ε-amine. The NMR signals of the N-terminal α-dimethylamine and the side chain ε-dimethylamine of hen egg white lysozyme, Lys1, are identified in 1H-13C heteronuclear single-quantum coherence spectra.
Chemistry, Issue 82, Boranes, Formaldehyde, Dimethylamines, Tandem Mass Spectrometry, nuclear magnetic resonance, MALDI-TOF, Reductive methylation, lysozyme, dimethyllysine, mass spectrometry, NMR
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Authors: Kwyn A. Meagher, Benjamin N. Doblack, Mercedes Ramirez, Lilian P. Davila.
Institutions: University of California, Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications.  For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately.  To study the effect of local structure on the properties of these complex geometries one must develop realistic models.  To date, software packages are rather limited in creating atomistic helical models.  This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented.  The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix.  With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions.  The second method involves a more robust code which allows flexibility in modeling nanohelical structures.  This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models.  Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created.  An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material.  In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures.  One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
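The carving idea described above can be summarized with the parametric equations of a helix, x = R cos(t), y = R sin(t), z = (pitch/2π)·t: atoms of the bulk model are kept only if they lie within a chosen wire radius of that curve. The sketch below is a minimal Python illustration of this selection step, not the published AWK or C++ codes; the bulk coordinates and dimensions are hypothetical.

```python
# Minimal sketch: carve a nanospring from a bulk atomistic block by keeping atoms
# within `wire_radius` of a sampled parametric helix. Coordinates are hypothetical.
import numpy as np

def carve_nanospring(coords, helix_radius, pitch, wire_radius, n_turns, samples=500):
    """Return the subset of `coords` lying within `wire_radius` of the helix path."""
    t = np.linspace(0.0, 2 * np.pi * n_turns, samples)
    helix = np.column_stack((helix_radius * np.cos(t),
                             helix_radius * np.sin(t),
                             pitch * t / (2 * np.pi)))
    # Distance from every atom to every sampled helix point; keep the minimum.
    d = np.linalg.norm(coords[:, None, :] - helix[None, :, :], axis=-1)
    return coords[d.min(axis=1) <= wire_radius]

# Example: a random "bulk" block of 5,000 atom positions (angstroms).
bulk = np.random.uniform(low=[-30, -30, 0], high=[30, 30, 100], size=(5000, 3))
spring = carve_nanospring(bulk, helix_radius=20.0, pitch=30.0, wire_radius=5.0, n_turns=3)
print(f"{len(spring)} of {len(bulk)} atoms retained")
```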
Physics, Issue 93, Helical atomistic models; open-source coding; graphical user interface; visualization software; molecular dynamics simulations; graphical processing unit accelerated simulations.
Synthesis and Characterization of Functionalized Metal-organic Frameworks
Authors: Olga Karagiaridi, Wojciech Bury, Amy A. Sarjeant, Joseph T. Hupp, Omar K. Farha.
Institutions: Northwestern University, Warsaw University of Technology, King Abdulaziz University.
Metal-organic frameworks (MOFs) have attracted extraordinary research attention, as they are promising candidates for numerous industrial and technological applications. Their signature property is their ultrahigh porosity, which, however, imparts a series of challenges when it comes to both constructing them and working with them. Securing desired MOF chemical and physical functionality by linker/node assembly into a highly porous framework of choice can pose difficulties, as less porous and more thermodynamically stable congeners (e.g., other crystalline polymorphs, catenated analogues) are often preferentially obtained by conventional synthesis methods. Once the desired product is obtained, its characterization often requires specialized techniques that address complications potentially arising from, for example, guest-molecule loss or preferential orientation of microcrystallites. Finally, accessing the large voids inside the MOFs for use in applications that involve gases can be problematic, as frameworks may be subject to collapse during removal of solvent molecules (remnants of solvothermal synthesis). In this paper, we describe synthesis and characterization methods routinely utilized in our lab either to solve or circumvent these issues. The methods include solvent-assisted linker exchange, powder X-ray diffraction in capillaries, and materials activation (cavity evacuation) by supercritical CO2 drying. Finally, we provide a protocol for determining a suitable pressure region for applying the Brunauer-Emmett-Teller (BET) analysis to nitrogen isotherms, so as to estimate the surface area of MOFs with good accuracy.
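The BET step mentioned above amounts to a linear fit of the transformed isotherm over a chosen relative-pressure window. The sketch below is a hedged illustration of that algebra only: the isotherm points are hypothetical and the choice of window is exactly what the protocol addresses, so the values here stand in for user input.

```python
# Hedged sketch: fit the linearized BET equation over a chosen p/p0 window.
# 1/[n((p0/p)-1)] = (C-1)/(n_m C) * (p/p0) + 1/(n_m C); S_BET = n_m * N_A * sigma.
import numpy as np

N_A = 6.022e23          # Avogadro's number, 1/mol
SIGMA_N2 = 0.162e-18    # cross-sectional area of N2, m^2

def bet_surface_area(p_rel, n_adsorbed, window=(0.05, 0.30)):
    """Return S_BET (m^2/g) from relative pressures and adsorbed amounts (mol/g)."""
    mask = (p_rel >= window[0]) & (p_rel <= window[1])
    x = p_rel[mask]
    y = 1.0 / (n_adsorbed[mask] * (1.0 / x - 1.0))
    slope, intercept = np.polyfit(x, y, 1)
    n_m = 1.0 / (slope + intercept)       # monolayer capacity, mol/g
    return n_m * N_A * SIGMA_N2           # specific surface area, m^2/g

# Hypothetical isotherm points (p/p0, mol N2 adsorbed per g of MOF).
p = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
n = np.array([0.0105, 0.0118, 0.0127, 0.0135, 0.0142, 0.0149])
print(f"S_BET ~ {bet_surface_area(p, n):.0f} m2/g")
```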
Chemistry, Issue 91, Metal-organic frameworks, porous coordination polymers, supercritical CO2 activation, crystallography, solvothermal, sorption, solvent-assisted linker exchange
Metabolomic Analysis of Rat Brain by High Resolution Nuclear Magnetic Resonance Spectroscopy of Tissue Extracts
Authors: Norbert W. Lutz, Evelyne Béraud, Patrick J. Cozzone.
Institutions: Aix-Marseille Université.
Studies of gene expression on the RNA and protein levels have long been used to explore biological processes underlying disease. More recently, genomics and proteomics have been complemented by comprehensive quantitative analysis of the metabolite pool present in biological systems. This strategy, termed metabolomics, strives to provide a global characterization of the small-molecule complement involved in metabolism. While the genome and the proteome define the tasks cells can perform, the metabolome is part of the actual phenotype. Among the methods currently used in metabolomics, spectroscopic techniques are of special interest because they allow one to simultaneously analyze a large number of metabolites without prior selection for specific biochemical pathways, thus enabling a broad unbiased approach. Here, an optimized experimental protocol for metabolomic analysis by high-resolution NMR spectroscopy is presented, which is the method of choice for efficient quantification of tissue metabolites. Important strengths of this method are (i) the use of crude extracts, without the need to purify the sample and/or separate metabolites; (ii) the intrinsically quantitative nature of NMR, permitting quantitation of all metabolites represented by an NMR spectrum with one reference compound only; and (iii) the nondestructive nature of NMR enabling repeated use of the same sample for multiple measurements. The dynamic range of metabolite concentrations that can be covered is considerable due to the linear response of NMR signals, although metabolites occurring at extremely low concentrations may be difficult to detect. For the least abundant compounds, the highly sensitive mass spectrometry method may be advantageous although this technique requires more intricate sample preparation and quantification procedures than NMR spectroscopy. We present here an NMR protocol adjusted to rat brain analysis; however, the same protocol can be applied to other tissues with minor modifications.
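Point (ii) above, quantitation of all metabolites against a single reference compound, follows from the proportionality of NMR signal integrals to proton counts and concentrations. The short sketch below illustrates that ratio calculation under stated assumptions; the peak areas, proton counts, and reference concentration are hypothetical examples, not values from the protocol.

```python
# Minimal sketch of single-reference NMR quantitation; all values are hypothetical.

def metabolite_concentration(area_met, n_protons_met, area_ref, n_protons_ref, conc_ref_mM):
    """Metabolite concentration from peak integrals, in the same units as conc_ref_mM."""
    return conc_ref_mM * (area_met / area_ref) * (n_protons_ref / n_protons_met)

# Example: a 3-proton metabolite resonance vs. a 0.5 mM reference with 9 equivalent protons.
print(metabolite_concentration(area_met=4.2, n_protons_met=3,
                               area_ref=1.0, n_protons_ref=9,
                               conc_ref_mM=0.5))  # ~6.3 mM
```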
Neuroscience, Issue 91, metabolomics, brain tissue, rodents, neurochemistry, tissue extracts, NMR spectroscopy, quantitative metabolite analysis, cerebral metabolism, metabolic profile
Optimized Negative Staining: a High-throughput Protocol for Examining Small and Asymmetric Protein Structure by Electron Microscopy
Authors: Matthew Rames, Yadong Yu, Gang Ren.
Institutions: The Molecular Foundry.
Structural determination of proteins is rather challenging for proteins with molecular masses between 40 - 200 kDa. Considering that more than half of natural proteins have a molecular mass between 40 - 200 kDa1,2, a robust and high-throughput method with nanometer resolution capability is needed. Negative staining (NS) electron microscopy (EM) is an easy, rapid, and qualitative approach which has frequently been used in research laboratories to examine protein structure and protein-protein interactions. Unfortunately, conventional NS protocols often generate structural artifacts on proteins, especially with lipoproteins, which usually present rouleaux artifacts. By using images of lipoproteins from cryo-electron microscopy (cryo-EM) as a standard, the key parameters in NS specimen preparation conditions were recently screened and reported as the optimized NS protocol (OpNS), a modified conventional NS protocol3. Artifacts like rouleaux can be greatly limited by OpNS, which additionally provides high contrast along with reasonably high-resolution (near 1 nm) images of small and asymmetric proteins. These high-resolution and high-contrast images are even favorable for 3D reconstruction of an individual protein (a single object, with no averaging), such as a 160 kDa antibody, through the method of electron tomography4,5. Moreover, OpNS can be a high-throughput tool to examine hundreds of samples of small proteins. For example, the previously published mechanism of the 53 kDa cholesteryl ester transfer protein (CETP) involved the screening and imaging of hundreds of samples6. Considering that cryo-EM rarely succeeds in imaging proteins of less than 200 kDa and has yet to be used in a published study involving the screening of over one hundred sample conditions, it is fair to call OpNS a high-throughput method for studying small proteins. Hopefully the OpNS protocol presented here can be a useful tool to push the boundaries of EM and accelerate EM studies into small protein structure, dynamics and mechanisms.
Environmental Sciences, Issue 90, small and asymmetric protein structure, electron microscopy, optimized negative staining
Identifying Protein-protein Interaction Sites Using Peptide Arrays
Authors: Hadar Amartely, Anat Iosub-Amir, Assaf Friedler.
Institutions: The Hebrew University of Jerusalem.
Protein-protein interactions mediate most of the processes in the living cell and control homeostasis of the organism. Impaired protein interactions may result in disease, making protein interactions important drug targets. It is thus highly important to understand these interactions at the molecular level. Protein interactions are studied using a variety of techniques ranging from cellular and biochemical assays to quantitative biophysical assays, and these may be performed either with full-length proteins, with protein domains or with peptides. Peptides serve as excellent tools to study protein interactions since peptides can be easily synthesized and allow focusing on specific interaction sites. Peptide arrays enable the identification of the interaction sites between two proteins as well as screening for peptides that bind the target protein for therapeutic purposes. They also allow high-throughput structure-activity relationship (SAR) studies. For identification of binding sites, a typical peptide array usually contains partly overlapping 10-20 residue peptides derived from the full sequences of one or more partner proteins of the desired target protein. Screening the array for binding the target protein reveals the binding peptides, corresponding to the binding sites in the partner proteins, in an easy and fast method using only a small amount of protein. In this article we describe a protocol for screening peptide arrays for mapping the interaction sites between a target protein and its partners. The peptide array is designed based on the sequences of the partner proteins, taking into account their secondary structures. The arrays used in this protocol were Celluspots arrays prepared by INTAVIS Bioanalytical Instruments. The array is blocked to prevent unspecific binding and then incubated with the studied protein. Detection using an antibody reveals the binding peptides corresponding to the specific interaction sites between the proteins.
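The array-design step described above, tiling a partner protein's sequence into partly overlapping peptides, is easy to illustrate. The sketch below is a generic example rather than the authors' design software; the peptide length, offset, and demo sequence are arbitrary choices.

```python
# Illustrative sketch: tile a protein sequence into overlapping peptides for an array.

def tile_peptides(sequence, length=15, offset=5):
    """Return (1-based start position, peptide) tuples of `length` residues every `offset` residues."""
    peptides = []
    for start in range(0, len(sequence) - length + 1, offset):
        peptides.append((start + 1, sequence[start:start + length]))
    return peptides

# Made-up partner-protein sequence for demonstration only.
demo_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
for pos, pep in tile_peptides(demo_seq, length=15, offset=5):
    print(f"{pos:>3}  {pep}")
```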
Molecular Biology, Issue 93, peptides, peptide arrays, protein-protein interactions, binding sites, peptide synthesis, micro-arrays
Assessment of Immunologically Relevant Dynamic Tertiary Structural Features of the HIV-1 V3 Loop Crown R2 Sequence by ab initio Folding
Authors: David Almond, Timothy Cardozo.
Institutions: School of Medicine, New York University.
The antigenic diversity of HIV-1 has long been an obstacle to vaccine design, and this variability is especially pronounced in the V3 loop of the virus' surface envelope glycoprotein. We previously proposed that the crown of the V3 loop, although dynamic and sequence variable, is constrained throughout the population of HIV-1 viruses to an immunologically relevant β-hairpin tertiary structure. Importantly, there are thousands of different V3 loop crown sequences in circulating HIV-1 viruses, making 3D structural characterization of trends across the diversity of viruses difficult or impossible by crystallography or NMR. Our previous successful studies with folding of the V3 crown1, 2 used the ab initio algorithm 3 accessible in the ICM-Pro molecular modeling software package (Molsoft LLC, La Jolla, CA) and suggested that the crown of the V3 loop, specifically from positions 10 to 22, benefits sufficiently from the flexibility and length of its flanking stems to behave to a large degree as if it were an unconstrained peptide freely folding in solution. As such, rapid ab initio folding of just this portion of the V3 loop of any individual strain of the 60,000+ circulating HIV-1 strains can be informative. Here, we folded the V3 loop of the R2 strain to gain insight into the structural basis of its unique properties. R2 bears a rare V3 loop sequence thought to be responsible for the exquisite sensitivity of this strain to neutralization by patient sera and monoclonal antibodies4, 5. The strain mediates CD4-independent infection and appears to elicit broadly neutralizing antibodies. We demonstrate how evaluation of the results of the folding can be informative for associating observed structures in the folding with the immunological activities observed for R2.
Infection, Issue 43, HIV-1, structure-activity relationships, ab initio simulations, antibody-mediated neutralization, vaccine design
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify tract-based metrics as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by applying a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
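The central voxelwise metric named above, fractional anisotropy, is a fixed function of the diffusion tensor's eigenvalues. The sketch below shows that standard formula on hypothetical eigenvalues; it is a generic illustration, not the authors' processing pipeline.

```python
# Sketch: fractional anisotropy (FA) from a voxel's diffusion-tensor eigenvalues.
# FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
import numpy as np

def fractional_anisotropy(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

# Hypothetical eigenvalues (in 10^-3 mm^2/s) for two kinds of voxels.
print(fractional_anisotropy([1.7, 0.3, 0.3]))   # ~0.8, highly anisotropic (white-matter-like)
print(fractional_anisotropy([0.9, 0.8, 0.8]))   # ~0.07, nearly isotropic
```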
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Structure and Coordination Determination of Peptide-metal Complexes Using 1D and 2D 1H NMR
Authors: Michal S. Shoshan, Edit Y. Tshuva, Deborah E. Shalev.
Institutions: The Hebrew University of Jerusalem.
Copper(I) binding by metallochaperone transport proteins prevents copper oxidation and release of the toxic ions that may participate in harmful redox reactions. The Cu(I) complex of a peptide model of a Cu(I)-binding metallochaperone protein, which includes the sequence MTCSGCSRPG (the underlined residues are conserved), was determined in solution under inert conditions by NMR spectroscopy. NMR is a widely accepted technique for the determination of solution structures of proteins and peptides. Because it is often difficult to obtain single crystals suitable for X-ray crystallography, the NMR technique is extremely valuable, especially as it provides information on the solution state rather than the solid state. Herein we describe all steps that are required for full three-dimensional structure determination by NMR. The protocol includes sample preparation in an NMR tube, 1D and 2D data collection and processing, peak assignment and integration, molecular mechanics calculations, and structure analysis. Importantly, the analysis was first conducted without any preset metal-ligand bonds, to assure a reliable structure determination in an unbiased manner.
Chemistry, Issue 82, solution structure determination, NMR, peptide models, copper-binding proteins, copper complexes
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Authors: Nikolai Hentze, Matthias P. Mayer.
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or more commonly into an ensemble of interconverting conformations. Investigating the connection between protein conformation and its function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One possibility to study conformational changes a protein undergoes while progressing through its functional cycle is hydrogen-1H/2H-exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by e.g. crystallography. It is used to study protein folding and unfolding, binding of small molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe as an example how to reveal the interaction interface of two proteins in a complex.   
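The basic HX-MS read-out, deuterium uptake per peptide, is derived from centroid masses and is commonly corrected for back-exchange with fully protonated and fully deuterated controls. The sketch below shows that widely used correction on hypothetical masses; it is an illustration of the general calculation, not code from this protocol.

```python
# Minimal sketch: back-exchange-corrected deuterium uptake of a peptic peptide.
# D = (m_t - m_0%) / (m_100% - m_0%) * number of exchangeable backbone amides

def deuterium_uptake(m_t, m_0, m_100, exchangeable_amides):
    """Corrected number of deuterons incorporated at labeling time t."""
    return (m_t - m_0) / (m_100 - m_0) * exchangeable_amides

# Hypothetical example: a 12-residue peptide with 10 exchangeable backbone amides
# (excluding the N-terminal residue and one proline).
print(deuterium_uptake(m_t=1305.4, m_0=1302.6, m_100=1310.2, exchangeable_amides=10))  # ~3.7 D
```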
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction (a simple primer Tm sketch follows the list below). By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
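As a small aid to the primer-design and melting-temperature considerations above, the sketch below implements two common rule-of-thumb Tm estimates. These are generic approximations, not the protocol's recommended method; nearest-neighbor thermodynamic calculations are more accurate.

```python
# Rule-of-thumb primer Tm estimates (illustrative only).

def tm_wallace(primer):
    """Wallace rule, reasonable for short oligos: Tm = 2*(A+T) + 4*(G+C)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer):
    """GC-content rule for longer primers: Tm = 64.9 + 41*(G+C-16.4)/N."""
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"   # example 24-mer
print(f"Wallace rule: {tm_wallace(primer)} C, GC rule: {tm_gc(primer):.1f} C")
```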
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Genetically-encoded Molecular Probes to Study G Protein-coupled Receptors
Authors: Saranga Naganathan, Amy Grunbeck, He Tian, Thomas Huber, Thomas P. Sakmar.
Institutions: The Rockefeller University.
To facilitate structural and dynamic studies of G protein-coupled receptor (GPCR) signaling complexes, new approaches are required to introduce informative probes or labels into expressed receptors that do not perturb receptor function. We used amber codon suppression technology to genetically-encode the unnatural amino acid, p-azido-L-phenylalanine (azF) at various targeted positions in GPCRs heterologously expressed in mammalian cells. The versatility of the azido group is illustrated here in different applications to study GPCRs in their native cellular environment or under detergent solubilized conditions. First, we demonstrate a cell-based targeted photocrosslinking technology to identify the residues in the ligand-binding pocket of GPCR where a tritium-labeled small-molecule ligand is crosslinked to a genetically-encoded azido amino acid. We then demonstrate site-specific modification of GPCRs by the bioorthogonal Staudinger-Bertozzi ligation reaction that targets the azido group using phosphine derivatives. We discuss a general strategy for targeted peptide-epitope tagging of expressed membrane proteins in-culture and its detection using a whole-cell-based ELISA approach. Finally, we show that azF-GPCRs can be selectively tagged with fluorescent probes. The methodologies discussed are general, in that they can in principle be applied to any amino acid position in any expressed GPCR to interrogate active signaling complexes.
Genetics, Issue 79, Receptors, G-Protein-Coupled, Protein Engineering, Signal Transduction, Biochemistry, Unnatural amino acid, site-directed mutagenesis, G protein-coupled receptor, targeted photocrosslinking, bioorthogonal labeling, targeted epitope tagging
Designing a Bio-responsive Robot from DNA Origami
Authors: Eldad Ben-Ishay, Almogit Abu-Horowitz, Ido Bachelet.
Institutions: Bar-Ilan University.
Nucleic acids are astonishingly versatile. In addition to their natural role as a storage medium for biological information1, they can be utilized in parallel computing2,3, recognize and bind molecular or cellular targets4,5, catalyze chemical reactions6,7, and generate calculated responses in a biological system8,9. Importantly, nucleic acids can be programmed to self-assemble into 2D and 3D structures10-12, enabling the integration of all these remarkable features in a single robot linking the sensing of biological cues to a preset response in order to exert a desired effect. Creating shapes from nucleic acids was first proposed by Seeman13, and several variations on this theme have since been realized using various techniques11,12,14,15. However, the most significant is perhaps the one proposed by Rothemund, termed scaffolded DNA origami16. In this technique, the folding of a long (>7,000 bases) single-stranded DNA 'scaffold' is directed to a desired shape by hundreds of short complementary strands termed 'staples'. Folding is carried out by a temperature annealing ramp. This technique was successfully demonstrated in the creation of a diverse array of 2D shapes with remarkable precision and robustness. DNA origami was later extended to 3D as well17,18. The current paper focuses on the caDNAno 2.0 software19 developed by Douglas and colleagues. caDNAno is a robust, user-friendly CAD tool enabling the design of 2D and 3D DNA origami shapes with versatile features. The design process relies on a systematic and accurate abstraction scheme for DNA structures, making it relatively straightforward and efficient. In this paper we demonstrate the design of a DNA origami nanorobot that has been recently described20. This robot is 'robotic' in the sense that it links sensing to actuation in order to perform a task. We explain how various sensing schemes can be integrated into the structure, and how this can be relayed to a desired effect. Finally, we use Cando21 to simulate the mechanical properties of the designed shape. The concept we discuss can be adapted to multiple tasks and settings.
Bioengineering, Issue 77, Genetics, Biomedical Engineering, Molecular Biology, Medicine, Genomics, Nanotechnology, Nanomedicine, DNA origami, nanorobot, caDNAno, DNA, DNA Origami, nucleic acids, DNA structures, CAD, sequencing
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Steady-state, Pre-steady-state, and Single-turnover Kinetic Measurement for DNA Glycosylase Activity
Authors: Akira Sassa, William A. Beard, David D. Shock, Samuel H. Wilson.
Institutions: NIEHS, National Institutes of Health.
Human 8-oxoguanine DNA glycosylase (OGG1) excises the mutagenic oxidative DNA lesion 8-oxo-7,8-dihydroguanine (8-oxoG) from DNA. Kinetic characterization of OGG1 is undertaken to measure the rates of 8-oxoG excision and product release. When the OGG1 concentration is lower than substrate DNA, time courses of product formation are biphasic; a rapid exponential phase (i.e. burst) of product formation is followed by a linear steady-state phase. The initial burst of product formation corresponds to the concentration of enzyme properly engaged on the substrate, and the burst amplitude depends on the concentration of enzyme. The first-order rate constant of the burst corresponds to the intrinsic rate of 8-oxoG excision and the slower steady-state rate measures the rate of product release (product DNA dissociation rate constant, koff). Here, we describe steady-state, pre-steady-state, and single-turnover approaches to isolate and measure specific steps during OGG1 catalytic cycling. A fluorescent labeled lesion-containing oligonucleotide and purified OGG1 are used to facilitate precise kinetic measurements. Since low enzyme concentrations are used to make steady-state measurements, manual mixing of reagents and quenching of the reaction can be performed to ascertain the steady-state rate (koff). Additionally, extrapolation of the steady-state rate to a point on the ordinate at zero time indicates that a burst of product formation occurred during the first turnover (i.e. y-intercept is positive). The first-order rate constant of the exponential burst phase can be measured using a rapid mixing and quenching technique that examines the amount of product formed at short time intervals (<1 sec) before the steady-state phase and corresponds to the rate of 8-oxoG excision (i.e. chemistry). The chemical step can also be measured using a single-turnover approach where catalytic cycling is prevented by saturating substrate DNA with enzyme (E>S). These approaches can measure elementary rate constants that influence the efficiency of removal of a DNA lesion.
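The biphasic time courses described above are conventionally fit to a burst equation, P(t) = A·(1 − e^(−k_burst·t)) + v_ss·t, where the amplitude A reflects the concentration of productively engaged enzyme, k_burst the intrinsic excision rate, and the steady-state slope the product-release step. The sketch below fits synthetic data to that equation; it is a generic illustration of the analysis, not the authors' fitting scripts, and all data points are made up.

```python
# Sketch: fit a burst equation to a synthetic product-formation time course.
import numpy as np
from scipy.optimize import curve_fit

def burst(t, amplitude, k_burst, v_ss):
    """P(t) = A*(1 - exp(-k_burst*t)) + v_ss*t"""
    return amplitude * (1.0 - np.exp(-k_burst * t)) + v_ss * t

# Synthetic data (time in s, product in nM) with a fast burst and slow linear phase.
t = np.array([0.1, 0.25, 0.5, 1, 2, 5, 10, 20, 40, 60])
p = np.array([3.1, 6.4, 9.2, 10.8, 11.6, 12.6, 14.1, 17.0, 23.2, 29.0])

popt, _ = curve_fit(burst, t, p, p0=[10.0, 5.0, 0.3])
amplitude, k_burst, v_ss = popt
print(f"burst amplitude ~ {amplitude:.1f} nM (engaged enzyme)")
print(f"k_burst ~ {k_burst:.1f} s^-1 (excision), k_off ~ {v_ss/amplitude:.3f} s^-1 (product release)")
```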
Chemistry, Issue 78, Biochemistry, Genetics, Molecular Biology, Microbiology, Structural Biology, Chemical Biology, Eukaryota, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Enzymes and Coenzymes, Life Sciences (General), enzymology, rapid quench-flow, active site titration, steady-state, pre-steady-state, single-turnover, kinetics, base excision repair, DNA glycosylase, 8-oxo-7,8-dihydroguanine, 8-oxoG, sequencing
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Authors: Katharina L. Dürr, Neslihan N. Tavraz, Susan Spiller, Thomas Friedrich.
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ or Li+ transport by Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as expression system, we determine the amount of Rb+ (Li+) transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed us, for example, to quantitatively determine the 3Na+/2K+ transport stoichiometry of the Na+,K+-ATPase, and enabled us for the first time to investigate the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes.
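Converting the AAS read-out into a per-oocyte flux is a short unit conversion. The back-of-the-envelope sketch below is purely illustrative and not taken from the published protocol; the concentration, homogenate volume, background, and flux time are all hypothetical.

```python
# Illustrative conversion: AAS-measured Rb+ concentration in a single-oocyte
# homogenate -> pmol Rb+ taken up per oocyte. All numbers are hypothetical.

RB_ATOMIC_MASS = 85.47  # g/mol

def rb_uptake_pmol(conc_ug_per_L, homogenate_volume_uL, background_pmol=0.0):
    """Rb+ taken up by one oocyte (pmol), after subtracting unspecific background."""
    mass_ug = conc_ug_per_L * homogenate_volume_uL * 1e-6   # ug Rb+ in the sample
    return mass_ug / RB_ATOMIC_MASS * 1e6 - background_pmol # ug/(g/mol) = umol -> pmol

# Example: 12 ug/L Rb+ in 1 mL of homogenate after a 10 min flux, with 5 pmol
# unspecific uptake measured in inhibitor-treated controls.
uptake = rb_uptake_pmol(conc_ug_per_L=12.0, homogenate_volume_uL=1000.0, background_pmol=5.0)
print(f"{uptake:.0f} pmol Rb+ per oocyte, ~{uptake/10:.1f} pmol/min")
```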
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Actin Co-Sedimentation Assay; for the Analysis of Protein Binding to F-Actin
Authors: Jyoti Srivastava, Diane Barber.
Institutions: University of California, San Francisco - UCSF.
The actin cytoskeleton within the cell is a network of actin filaments that allows the movement of cells and cellular processes, and that generates tension and helps maintain cellular shape. Although the actin cytoskeleton provides structural rigidity, it is dynamic and constantly remodeling. A number of proteins can bind to the actin cytoskeleton. The binding of a particular protein to F-actin is often examined to support cell biological observations or to further understand dynamic processes driven by remodeling of the actin cytoskeleton. The actin co-sedimentation assay is an in vitro assay routinely used to analyze the binding of specific proteins or protein domains to F-actin. The basic principle of the assay involves incubation of the protein of interest (full length or a domain thereof) with F-actin, an ultracentrifugation step to pellet F-actin, and analysis of the protein that co-sediments with F-actin. Actin co-sedimentation assays can also be designed to measure actin-binding affinities and for use in competition assays.
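When the assay is used to estimate a binding affinity, the bound fraction (pellet vs. supernatant, typically from gel densitometry) measured over a range of F-actin concentrations can be fit to a one-site binding isotherm. The sketch below shows that generic fit on synthetic data; it is one common analysis, not a step prescribed by this protocol.

```python
# Sketch: fit co-sedimentation bound fractions to a one-site binding isotherm.
import numpy as np
from scipy.optimize import curve_fit

def bound_fraction(f_actin_uM, kd_uM, b_max):
    """Simple hyperbolic binding curve."""
    return b_max * f_actin_uM / (kd_uM + f_actin_uM)

# Synthetic data: fraction of the protein found in the pellet at each [F-actin].
f_actin = np.array([0.5, 1, 2, 4, 8, 16, 32])                    # uM
fraction = np.array([0.09, 0.17, 0.28, 0.45, 0.62, 0.76, 0.86])  # pellet/(pellet+supernatant)

(kd, b_max), _ = curve_fit(bound_fraction, f_actin, fraction, p0=[5.0, 1.0])
print(f"apparent Kd ~ {kd:.1f} uM, Bmax ~ {b_max:.2f}")
```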
Biochemistry, Issue 13, F-actin, protein, in vitro binding, ultracentrifugation
Molecular Evolution of the Tre Recombinase
Authors: Frank Buchholz.
Institutions: Max Planck Institute of Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells. We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. Because the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. Recombination activity was shown with these subsets, which acted as evolutionary intermediates. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase -- Tre -- had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1 infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution
Interview: HIV-1 Proviral DNA Excision Using an Evolved Recombinase
Authors: Joachim Hauber.
Institutions: Heinrich-Pette-Institute for Experimental Virology and Immunology, University of Hamburg.
HIV-1 integrates into the host chromosome of infected cells and persists as a provirus flanked by long terminal repeats. Current treatment strategies primarily target virus enzymes or virus-cell fusion, suppressing the viral life cycle without eradicating the infection. Since the integrated provirus is not targeted by these approaches, new resistant strains of HIV-1 may emerge. Here, we report that the engineered recombinase Tre (see "Molecular Evolution of the Tre Recombinase", Buchholz, F., Max Planck Institute of Molecular Cell Biology and Genetics, Dresden) efficiently excises integrated HIV-1 proviral DNA from the genome of infected cells. We produced loxLTR-containing viral pseudotypes and infected HeLa cells to examine whether Tre recombinase can excise the provirus from the genome of HIV-1 infected human cells. A virus particle-releasing cell line was cloned and transfected with a plasmid expressing Tre or with a parental control vector. Recombinase activity and virus production were monitored. All assays demonstrated the efficient deletion of the provirus from infected cells without visible cytotoxic effects. These results serve as proof of principle that it is possible to evolve a recombinase to specifically target an HIV-1 LTR and that this recombinase is capable of excising the HIV-1 provirus from the genome of HIV-1-infected human cells. Before an engineered recombinase could enter the therapeutic arena, however, significant obstacles need to be overcome. Among the most critical issues that we face are an efficient and safe delivery to targeted cells and the absence of side effects.
Medicine, Issue 16, HIV, Cell Biology, Recombinase, provirus, HeLa Cells
Analyzing and Building Nucleic Acid Structures with 3DNA
Authors: Andrew V. Colasanti, Xiang-Jun Lu, Wilma K. Olson.
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure, and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamics (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
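For readers curious how abstract-to-video matching of this kind can be implemented in general, the sketch below uses TF-IDF vectors and cosine similarity over text. It is purely illustrative and is not a description of JoVE's actual algorithm; the abstract and video descriptions are made-up examples.

```python
# Illustrative text-matching sketch (not JoVE's algorithm): rank video descriptions
# by TF-IDF cosine similarity to a PubMed abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = "molecular dynamics simulations of protein structure and hydration"
video_descriptions = [
    "protein structure prediction with the I-TASSER server",
    "scalable nanohelices for molecular dynamics simulations",
    "polymerase chain reaction basic protocol and optimization",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([abstract] + video_descriptions)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank videos by similarity; a score cutoff would yield the list of related videos.
for score, desc in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {desc}")
```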