Lysine methylation is an emerging post-translational modification that has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins; however, to date only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools for characterizing the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membranes using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, employing peptide arrays, we showed that NSD1 methylates K44 of H4 instead of the reported H4K20, and that H1.5K168 is its highly preferred substrate over the previously known H3K36. Hence, peptide arrays are powerful tools for biochemically characterizing PKMTs.
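In practice, the spot signals from such an array can be reduced to a relative specificity profile. The sketch below (peptide labels taken from the text, intensity values entirely hypothetical) normalizes spot intensities to the strongest signal and ranks candidate substrates:

```python
# Normalize hypothetical methylation-array spot intensities to the
# strongest signal and rank candidate PKMT substrates.
spot_intensities = {
    "H4K44": 9800.0,     # hypothetical raw spot signals
    "H1.5K168": 9100.0,
    "H3K36": 2300.0,
    "H4K20": 150.0,
}
max_signal = max(spot_intensities.values())
relative = {peptide: signal / max_signal
            for peptide, signal in spot_intensities.items()}
ranked = sorted(relative.items(), key=lambda kv: kv[1], reverse=True)
for peptide, frac in ranked:
    print(f"{peptide}: {frac:.2f}")
```

With real data the intensities would come from quantifying the radioactive or fluorescent readout of each spot.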
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All predictions are tagged with a confidence score which estimates how accurate they are in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Fold specificity, Binding affinity, sequencing
Monitoring the Assembly of a Secreted Bacterial Virulence Factor Using Site-specific Crosslinking
Institutions: National Institutes of Health.
This article describes a method to detect and analyze dynamic interactions between a protein of interest and other factors in vivo. Our method is based on the amber suppression technology that was originally developed by Peter Schultz and colleagues1. An amber mutation is first introduced at a specific codon of the gene encoding the protein of interest. The amber mutant is then expressed in E. coli together with genes encoding an amber suppressor tRNA and an aminoacyl-tRNA synthetase derived from Methanococcus jannaschii. Using this system, the photoactivatable amino acid analog p-benzoylphenylalanine (Bpa) is incorporated at the amber codon. Cells are then irradiated with ultraviolet light to covalently link the Bpa residue to proteins that are located within 3-8 Å. Photocrosslinking is performed in combination with pulse-chase labeling and immunoprecipitation of the protein of interest in order to monitor changes in protein-protein interactions that occur over a time scale of seconds to minutes. We optimized the procedure to study the assembly of a bacterial virulence factor that consists of two independent domains, one that is integrated into the outer membrane and one that is translocated into the extracellular space, but the method can be used to study many different assembly processes and biological pathways in both prokaryotic and eukaryotic cells. In principle, interacting factors, and even the specific residues of interacting factors that bind to a protein of interest, can be identified by mass spectrometry.
Immunology, Issue 82, Autotransporters, Bam complex, Molecular chaperones, protein-protein interactions, Site-specific photocrosslinking
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions are regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the purity and amount appropriate for subsequent in-depth analysis by mass spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants, and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setup of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it to be a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation, histone variants, chromatome, hPTMs cross-talks
A High Throughput MHC II Binding Assay for Quantitative Analysis of Peptide Epitopes
Institutions: Dartmouth College, University of Rhode Island, Dartmouth College.
Biochemical assays with recombinant human MHC II molecules can provide rapid, quantitative insights into immunogenic epitope identification, deletion, or design1,2. Here, a peptide-MHC II binding assay is scaled to 384-well format. The scaled-down protocol reduces reagent costs by 75% and is higher throughput than previously described 96-well protocols1,3-5. Specifically, the experimental design permits robust and reproducible analysis of up to 15 peptides against one MHC II allele per 384-well ELISA plate. Using a single liquid handling robot, this method allows one researcher to analyze approximately ninety test peptides in triplicate over a range of eight concentrations and four MHC II allele types in less than 48 hr. Others working in the fields of protein deimmunization or vaccine design and development may find the protocol useful in facilitating their own work. In particular, the step-by-step instructions and the visual format of JoVE should allow other users to quickly and easily establish this methodology in their own labs.
Biochemistry, Issue 85, Immunoassay, Protein Immunogenicity, MHC II, T cell epitope, High Throughput Screen, Deimmunization, Vaccine Design
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Non-chromatographic Purification of Recombinant Elastin-like Polypeptides and their Fusions with Peptides and Proteins from Escherichia coli
Institutions: Duke University, Duke University.
Elastin-like polypeptides are repetitive biopolymers that exhibit lower critical solution temperature phase transition behavior, existing as soluble unimers below a characteristic transition temperature and aggregating into micron-scale coacervates above it. The design of elastin-like polypeptides at the genetic level permits precise control of their sequence and length, which dictates their thermal properties. Elastin-like polypeptides are used in a variety of applications including biosensing, tissue engineering, and drug delivery, where the transition temperature and biopolymer architecture of the ELP can be tuned for the specific application of interest. Furthermore, the lower critical solution temperature phase transition behavior of elastin-like polypeptides allows purification by their thermal response, such that their selective coacervation and resolubilization allows the removal of both soluble and insoluble contaminants following expression in Escherichia coli. This approach can be used for the purification of elastin-like polypeptides alone or as a purification tool for peptide or protein fusions, where recombinant peptides or proteins genetically appended to elastin-like polypeptide tags can be purified without chromatography. This protocol describes the purification of elastin-like polypeptides and their peptide or protein fusions and discusses basic characterization techniques to assess the thermal behavior of pure elastin-like polypeptide products.
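One common characterization readout is the transition temperature Tt, often taken as the temperature of steepest turbidity increase during a heating ramp. The sketch below illustrates that readout on a synthetic sigmoidal turbidity curve (the curve shape and midpoint are assumptions, not measured data):

```python
import numpy as np

# Toy turbidity ramp: OD350 rises sigmoidally around a hypothetical
# Tt of 32 °C as the ELP coacervates.
temps = np.arange(25.0, 40.0, 0.1)                      # °C heating ramp
od350 = 1.0 / (1.0 + np.exp(-(temps - 32.0) / 0.5))     # synthetic OD350

# Tt = temperature at the maximum of d(OD350)/dT.
tt = temps[np.argmax(np.gradient(od350, temps))]
print(f"Tt ≈ {tt:.1f} °C")
```

On instrument data the same derivative criterion is applied after smoothing the measured OD-versus-temperature trace.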
Molecular Biology, Issue 88, elastin-like polypeptides, lower critical solution temperature, phase separation, inverse transition cycling, protein purification, batch purification
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding which segmentation approach to take.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
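As a toy illustration of the semi-automated route — a global intensity threshold followed by connected-component labeling — consider the following sketch on a tiny synthetic 2D "slice" (the array values and threshold are assumptions, stand-ins for real tomogram voxels):

```python
import numpy as np
from scipy import ndimage

# Toy "EM slice": two bright blobs on a dark background.
image = np.zeros((8, 8))
image[1:3, 1:3] = 1.0    # feature 1
image[5:7, 4:7] = 0.8    # feature 2

# Global threshold, then label connected components so each
# segmented feature gets its own integer label.
mask = image > 0.5
labels, n_features = ndimage.label(mask)
print(n_features)  # two separated features
```

The labeled volume can then be passed to surface rendering or per-feature quantification, exactly as in approaches (3) and (4) above.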
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Identifying Protein-protein Interaction Sites Using Peptide Arrays
Institutions: The Hebrew University of Jerusalem.
Protein-protein interactions mediate most of the processes in the living cell and control homeostasis of the organism. Impaired protein interactions may result in disease, making protein interactions important drug targets. It is thus highly important to understand these interactions at the molecular level. Protein interactions are studied using a variety of techniques ranging from cellular and biochemical assays to quantitative biophysical assays, and these may be performed either with full-length proteins, with protein domains or with peptides. Peptides serve as excellent tools to study protein interactions since peptides can be easily synthesized and allow focusing on specific interaction sites. Peptide arrays enable the identification of the interaction sites between two proteins as well as screening for peptides that bind the target protein for therapeutic purposes. They also allow high-throughput structure-activity relationship (SAR) studies. For identification of binding sites, a typical peptide array usually contains partly overlapping peptides of 10-20 residues derived from the full sequences of one or more partner proteins of the desired target protein. Screening the array for binding of the target protein reveals the binding peptides, corresponding to the binding sites in the partner proteins, in an easy and fast method using only a small amount of protein.
In this article we describe a protocol for screening peptide arrays for mapping the interaction sites between a target protein and its partners. The peptide array is designed based on the sequences of the partner proteins taking into account their secondary structures. The arrays used in this protocol were Celluspots arrays prepared by INTAVIS Bioanalytical Instruments. The array is blocked to prevent unspecific binding and then incubated with the studied protein. Detection using an antibody reveals the binding peptides corresponding to the specific interaction sites between the proteins.
Molecular Biology, Issue 93, peptides, peptide arrays, protein-protein interactions, binding sites, peptide synthesis, micro-arrays
Inhibitory Synapse Formation in a Co-culture Model Incorporating GABAergic Medium Spiny Neurons and HEK293 Cells Stably Expressing GABAA Receptors
Institutions: University College London.
Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to influx of chloride, resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials.
During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses which incorporate specific GABAAR subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other.
To elucidate the underlying molecular mechanisms, a novel in vitro co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogeneous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAAR subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAAR subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro model system can be used to reproduce, at least in part, the in vivo conditions required for the recognition of the appropriate synaptic partners and the formation of specific synapses. Here, the protocols for culturing the medium spiny neurons and generating HEK293 cell lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.
Neuroscience, Issue 93, Developmental neuroscience, synaptogenesis, synaptic inhibition, co-culture, stable cell lines, GABAergic, medium spiny neurons, HEK 293 cell line
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
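The starting point of such a DoE workflow is an explicit enumeration of the factor space before a design package prunes it to an efficient run set. The sketch below builds a full-factorial run list for three illustrative factors; the factor names and levels are hypothetical stand-ins, not the study's actual design table:

```python
from itertools import product

# Hypothetical factors echoing the case study: regulatory element,
# plant age at infiltration, and incubation temperature.
factors = {
    "promoter": ["35S", "nos"],
    "leaf_age_d": [21, 28, 35],
    "incubation_C": [22, 25],
}
names = list(factors)

# Full-factorial enumeration: one dict per experimental run.
runs = [dict(zip(names, combo)) for combo in product(*factors.values())]
print(len(runs))  # 2 * 3 * 2 = 12 runs
```

A DoE tool would typically select a fractional, D-optimal subset of these runs and augment it step-wise, as described above.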
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Thermodynamics of Membrane Protein Folding Measured by Fluorescence Spectroscopy
Institutions: University of California San Diego - UCSD.
Membrane protein folding is an emerging topic with both fundamental and health-related significance. The abundance of membrane proteins in cells underlies the need for comprehensive study of the folding of this ubiquitous family of proteins. Additionally, advances in our ability to characterize diseases associated with misfolded proteins have motivated significant experimental and theoretical efforts in the field of protein folding. Rapid progress in this important field is unfortunately hindered by the inherent challenges associated with membrane proteins and the complexity of the folding mechanism. Here, we outline an experimental procedure for measuring the thermodynamic property of the Gibbs free energy of unfolding in the absence of denaturant, ΔG°H2O, for a representative integral membrane protein from E. coli. This protocol focuses on the application of fluorescence spectroscopy to determine equilibrium populations of folded and unfolded states as a function of denaturant concentration. Experimental considerations for the preparation of synthetic lipid vesicles as well as key steps in the data analysis procedure are highlighted. This technique is versatile and may be pursued with different types of denaturant, including temperature and pH, as well as in various folding environments of lipids and micelles. The current protocol is one that can be generalized to any membrane or soluble protein that meets the set of criteria discussed below.
Bioengineering, Issue 50, tryptophan, peptides, Gibbs free energy, protein stability, vesicles
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
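One concrete piece of the primer-design step above is estimating a primer's melting temperature. A quick first-pass estimate for short primers is the Wallace rule, Tm = 2(A+T) + 4(G+C) °C; note this is a rough guide for oligos under ~14 nt, and nearest-neighbor models are preferred for real designs:

```python
# Wallace-rule melting temperature estimate for a short primer:
# Tm = 2(A+T) + 4(G+C) in °C.
def wallace_tm(primer: str) -> int:
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGC"))  # 36 °C for this 12-mer
```

Comparing the Tm of the forward and reverse primers this way helps choose an annealing temperature a few degrees below the lower of the two.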
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Determination of the Gas-phase Acidities of Oligopeptides
Institutions: University of the Pacific.
Amino acid residues located at different positions in folded proteins often exhibit different degrees of acidity. For example, a cysteine residue located at or near the N-terminus of a helix is often more acidic than one at or near the C-terminus1-6. Although extensive experimental studies on the acid-base properties of peptides have been carried out in the condensed phase, in particular in aqueous solutions6-8, the results are often complicated by solvent effects7. In fact, most of the active sites in proteins are located near the interior region, where solvent effects have been minimized9,10. In order to understand the intrinsic acid-base properties of peptides and proteins, it is important to perform the studies in a solvent-free environment.
We present a method to measure the acidities of oligopeptides in the gas-phase. We use a cysteine-containing oligopeptide, Ala3
CH), as the model compound. The measurements are based on the well-established extended Cooks kinetic method (Figure 1
. The experiments are carried out using a triple-quadrupole mass spectrometer interfaced with an electrospray ionization (ESI) ion source (Figure 2
). For each peptide sample, several reference acids are selected. The reference acids are structurally similar organic compounds with known gas-phase acidities. A solution of the mixture of the peptide and a reference acid is introduced into the mass spectrometer, and a gas-phase proton-bound anionic cluster of peptide-reference acid is formed. The proton-bound cluster is mass isolated and subsequently fragmented via collision-induced dissociation (CID) experiments. The resulting fragment ion abundances are analyzed using a relationship between the acidities and the cluster ion dissociation kinetics. The gas-phase acidity of the peptide is then obtained by linear regression of the thermo-kinetic plots 17,18
The method can be applied to a variety of molecular systems, including organic compounds, amino acids and their derivatives, oligonucleotides, and oligopeptides. By comparing the gas-phase acidities measured experimentally with those values calculated for different conformers, conformational effects on the acidities can be evaluated.
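The regression step at the heart of the kinetic method lends itself to a short sketch. In its simplest (single effective temperature) form, the log of the fragment-ion abundance ratio from CID of the proton-bound cluster varies linearly with the reference acid's acidity; the slope gives the effective temperature and the x-intercept the apparent acidity of the peptide. All numbers below are invented for illustration, not measured values:

```python
# Hypothetical example data: gas-phase acidities (kJ/mol) of four reference
# acids, and the measured CID abundance ratios ln([ref-H]- / [Pep-H]-)
# from the corresponding proton-bound clusters at one collision energy.
ref_acidity = [1390.0, 1400.0, 1410.0, 1420.0]
ln_ratio    = [-1.8, -0.6, 0.7, 1.9]

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

slope, intercept = linear_fit(ref_acidity, ln_ratio)

R = 8.314e-3                   # gas constant, kJ/(mol*K)
T_eff = 1.0 / (R * slope)      # effective temperature of the dissociating clusters
GA_peptide = -intercept / slope  # x-intercept: apparent acidity of the peptide

print(f"T_eff ~ {T_eff:.0f} K, apparent acidity ~ {GA_peptide:.1f} kJ/mol")
```

The full extended method repeats this at several collision energies and regresses the per-energy slopes and intercepts a second time to separate enthalpy and entropy contributions; the sketch above shows only the single-plot step.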
Chemistry, Issue 76, Biochemistry, Molecular Biology, Oligopeptide, gas-phase acidity, kinetic method, collision-induced dissociation, triple-quadrupole mass spectrometry, oligopeptides, peptides, mass spectrometry, MS
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and its function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One possibility to study conformational changes a protein undergoes while progressing through its functional cycle is hydrogen-1H/2H exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by, e.g., crystallography. It is used to study protein folding and unfolding, binding of small molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe as an example how to reveal the interaction interface of two proteins in a complex.
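The core HX-MS readout, deuterium uptake as the centroid-mass shift of a peptide's isotope cluster between labeled and unlabeled runs, can be sketched in a few lines. The peak lists and charge state below are hypothetical, and real workflows include a back-exchange correction (hinted at in the comment):

```python
# Illustrative sketch with invented numbers: deuterium uptake of one peptic
# peptide is the intensity-weighted centroid shift between the deuterated
# and undeuterated isotope clusters, scaled by the charge state.

def centroid(peaks):
    """Intensity-weighted mean m/z of an isotope cluster [(mz, intensity), ...]."""
    total = sum(i for _, i in peaks)
    return sum(mz * i for mz, i in peaks) / total

undeuterated = [(500.25, 100.0), (500.75, 80.0), (501.25, 30.0)]
deuterated   = [(501.25, 40.0), (501.75, 90.0), (502.25, 70.0)]

charge = 2  # charge state of the peptide ion
uptake = (centroid(deuterated) - centroid(undeuterated)) * charge  # in Da

# A fully deuterated control would normalize out back-exchange, e.g.:
# fraction = uptake / ((centroid(full_D) - centroid(undeuterated)) * charge)

print(f"Deuterium uptake ~ {uptake:.2f} Da")
```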
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Designing Silk-silk Protein Alloy Materials for Biomedical Applications
Institutions: Rowan University, Rowan University, Cooper Medical School of Rowan University, Rowan University.
Fibrous proteins display different sequences and structures that have been used for various applications in biomedical fields such as biosensors, nanomedicine, tissue regeneration, and drug delivery. Designing materials based on the molecular-scale interactions between these proteins will help generate new multifunctional protein alloy biomaterials with tunable properties. Such alloy material systems also provide advantages over traditional synthetic polymers because of the materials' biodegradability, biocompatibility, and tunability in the body. This article uses protein blends of wild tussah silk (Antheraea pernyi) and domestic mulberry silk (Bombyx mori) as an example to provide useful protocols regarding these topics, including how to predict protein-protein interactions by computational methods, how to produce protein alloy solutions, how to verify alloy systems by thermal analysis, and how to fabricate variable alloy materials, including optical materials with diffraction gratings, electric materials with circuit coatings, and pharmaceutical materials for drug release and delivery. These methods can provide important information for designing the next generation of multifunctional biomaterials based on different protein alloys.
Bioengineering, Issue 90, protein alloys, biomaterials, biomedical, silk blends, computational simulation, implantable electronic devices
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research questions. However, many of them are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
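The pipeline above relies on dedicated software at each stage. As a self-contained illustration of the basic idea of tree building, here is a toy distance-based (UPGMA) reconstruction; the taxa and distances are invented, and real studies would use the maximum likelihood or Bayesian programs described in the article rather than this simple clustering:

```python
# Toy UPGMA: repeatedly merge the two closest clusters, using size-weighted
# average linkage for the new distances, and emit the tree in Newick format.

def upgma(dist):
    """dist: {(a, b): distance} over taxon names. Returns a Newick string."""
    d = {frozenset(k): v for k, v in dist.items()}
    size = {name: 1 for pair in dist for name in pair}
    live = set(size)
    while len(live) > 1:
        # find the closest pair of live clusters
        a, b = min(((x, y) for x in live for y in live if x != y),
                   key=lambda p: d[frozenset(p)])
        new = f"({a},{b})"
        size[new] = size[a] + size[b]
        # average-linkage distance from the merged cluster to the rest
        for c in live - {a, b}:
            d[frozenset((new, c))] = (d[frozenset((a, c))] * size[a] +
                                      d[frozenset((b, c))] * size[b]) / size[new]
        live -= {a, b}
        live.add(new)
    return live.pop() + ";"

# Invented pairwise distances between four taxa
taxa_dist = {("A", "B"): 2.0, ("A", "C"): 6.0, ("A", "D"): 7.0,
             ("B", "C"): 6.0, ("B", "D"): 7.0, ("C", "D"): 5.0}
tree = upgma(taxa_dist)
print(tree)  # groups A with B, and C with D
```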
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide to identify the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans, and other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
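The symmetry-based ideal-midline idea can be caricatured in a few lines: pick the vertical axis that best mirrors a binary skull mask left-right, then measure how far the ventricle centroid sits from it. Everything here (the tiny mask, the ventricle pixels, the 0.5 mm pixel size) is invented for illustration; the actual pipeline uses anatomical landmarks, ventricle segmentation, and shape matching rather than raw pixel symmetry:

```python
# Toy sketch of symmetry-based midline-shift estimation on a binary mask.
skull = [
    "0011111100",
    "0110000110",
    "0100000010",
    "0110000110",
    "0011111100",
]
ventricle_pixels = [(1, 6), (2, 6), (2, 7), (3, 6)]  # (row, col), pushed right

def symmetry_axis(mask):
    """Column (possibly fractional) about which the mask is most left-right symmetric."""
    ncols = len(mask[0])
    def asymmetry(axis2):  # axis2 = 2*axis, so half-pixel axes are allowed
        bad = 0
        for row in mask:
            for c, v in enumerate(row):
                m = axis2 - c                      # mirror column of c about the axis
                mv = row[m] if 0 <= m < ncols else "0"
                bad += (v != mv)                   # count mismatched pixels
        return bad
    return min(range(2 * ncols - 1), key=asymmetry) / 2

ideal = symmetry_axis(skull)                       # ideal midline column
actual = sum(c for _, c in ventricle_pixels) / len(ventricle_pixels)
pixel_mm = 0.5                                     # assumed in-plane resolution
shift_mm = (actual - ideal) * pixel_mm

print(f"ideal midline at col {ideal}, estimated shift = {shift_mm:.2f} mm")
```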
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT); PRISM10, which finds degenerate motifs (ASCGWT); and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well.
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif and that, when added to the original set, improve the motif score, whether through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the remaining genes regulated by the given transcription factor.
Output from SCOPE shows candidate motifs, their significance, and other information, both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and, optionally, by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
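As a rough sketch of what over-representation means in this setting (my simplification, not SCOPE's actual statistic), one can score a motif by the binomial tail probability of seeing at least the observed number of motif-containing genes, given a background rate for the motif. The promoter sequences and background probability below are invented:

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def motif_significance(promoters, motif, background_p):
    """Count genes whose promoter contains the motif and score the excess."""
    hits = sum(motif in seq for seq in promoters)
    return hits, binom_tail(len(promoters), hits, background_p)

# Invented promoter set: three of four sequences contain the motif ACCGGT
promoters = ["TTACCGGTAA", "GGACCGGTCC", "ACCGGTTTTT", "GGGGCCCCAA"]
hits, pval = motif_significance(promoters, "ACCGGT", background_p=0.1)
print(f"{hits}/{len(promoters)} genes contain the motif, tail probability = {pval:.4f}")
```

A small tail probability means the motif appears in more genes than the background rate would predict, which is the over-representation signal; SCOPE additionally weighs positional preference, which this sketch ignores.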
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
MALDI Sample Preparation: the Ultra Thin Layer Method
Institutions: Rockefeller University.
This video demonstrates the preparation of an ultra-thin matrix/analyte layer for analyzing peptides and proteins by Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI-MS)1,2. The ultra-thin layer method involves the production of a substrate layer of matrix crystals (alpha-cyano-4-hydroxycinnamic acid) on the sample plate, which serves as a seeding ground for subsequent crystallization of a matrix/analyte mixture. Advantages of the ultra-thin layer method over other sample deposition approaches (e.g. the dried droplet method) are that it provides (i) greater tolerance to impurities such as salts and detergents, (ii) better resolution, and (iii) higher spatial uniformity. This method is especially useful for the accurate mass determination of proteins. The protocol was initially developed and optimized for the analysis of membrane proteins and used to successfully analyze ion channels, metabolite transporters, and receptors containing between 2 and 12 transmembrane domains2. Since the original publication, it has also proven equally useful for the analysis of soluble proteins. Indeed, we have used it for a large number of proteins with a wide range of properties, including molecular masses as high as 380 kDa3. It is currently our method of choice for the molecular mass analysis of all proteins. The described procedure consistently produces high-quality spectra, and it is sensitive, robust, and easy to implement.
Cellular Biology, Issue 3, mass-spectrometry, ultra-thin layer, MALDI, MS, proteins