 
PubMed Article
Prediction of thermostability from amino acid attributes by combination of clustering with attribute weighting: a new vista in engineering enzymes.
PLoS ONE
PUBLISHED: 04-13-2011
The engineering of thermostable enzymes is receiving increased attention. The paper, detergent, and biofuel industries, in particular, seek to use environmentally friendly enzymes instead of toxic chlorine chemicals. Enzymes typically function at temperatures below 60°C and denature if exposed to higher temperatures. In contrast, a small portion of enzymes can withstand higher temperatures as a result of various structural adaptations. Understanding the protein attributes that are involved in this adaptation is the first step toward engineering thermostable enzymes. We employed various supervised and unsupervised machine learning algorithms as well as attribute weighting approaches to find amino acid composition attributes that contribute to enzyme thermostability. Specifically, we compared two groups of enzymes: mesostable and thermostable enzymes. Furthermore, a combination of attribute weighting with supervised and unsupervised clustering algorithms was used for prediction and modelling of protein thermostability from amino acid composition properties. Mining a large number of protein sequences (2090) through a variety of machine learning algorithms, which were based on the analysis of more than 800 amino acid attributes, increased the accuracy of this study. Moreover, these models were successful in predicting thermostability from the primary structure of proteins. The results showed that expectation maximization clustering in combination with uncertainty and correlation attribute weighting algorithms can effectively (100%) classify thermostable and mesostable proteins. Seventy per cent of the weighting methods selected Gln content and frequency of hydrophilic residues as the most important protein attributes. On the dipeptide level, the frequency of Asn-Glu was the key factor in distinguishing mesostable from thermostable enzymes. This study demonstrates the feasibility of predicting thermostability irrespective of sequence similarity and will serve as a basis for engineering thermostable enzymes in the laboratory.
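The core idea is mechanical: represent each protein as a vector of amino acid composition attributes, then cluster. The following is a minimal sketch of that idea, not the authors' exact pipeline; it uses scikit-learn's EM-based Gaussian mixture as a stand-in for the expectation maximization clustering step, and the sequences are invented toys standing in for the 2,090 mined proteins.

```python
# Hedged sketch: amino acid composition vectors clustered with an EM-fitted
# Gaussian mixture. Sequences and cluster count are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 standard amino acids in a sequence."""
    seq = seq.upper()
    return np.array([seq.count(aa) / len(seq) for aa in AMINO_ACIDS])

sequences = [
    "MKQLEDKVEELLSKNYHLENEVARLKKLVGER",   # toy stand-ins for the
    "MQNNSLLLIVGLLLSGQAQAQQPQQQGQQNQW",   # mesostable/thermostable
    "MSTNPKPQRKTKRNTNRRPQDVKFPGGGQIVG",   # sequence sets
    "MGSSHHHHHHSSGLVPRGSHMASMTGGQQMGR",
]
X = np.vstack([composition(s) for s in sequences])

# Two components, by analogy with the mesostable/thermostable split.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print(labels)
```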
Authors: Ryan D. Heselpoth, Daniel C. Nelson.
Published: 11-07-2012
ABSTRACT
Directed evolution is a method that harnesses natural selection to engineer proteins with properties they do not possess in nature. The literature provides numerous examples of directed evolution being used to successfully alter molecular specificity and catalysis1. The primary advantage of directed evolution over more rational approaches to molecular engineering is the volume and diversity of variants that can be screened2. One possible application of directed evolution involves improving the structural stability of bacteriolytic enzymes, such as endolysins. Bacteriophages encode and express endolysins to hydrolyze a critical covalent bond in the peptidoglycan (i.e. cell wall) of bacteria, resulting in host cell lysis and liberation of progeny virions. Notably, these enzymes can extrinsically induce lysis of susceptible bacteria in the absence of phage and have been validated both in vitro and in vivo for their therapeutic potential3-5. The subject of our directed evolution study is the PlyC endolysin, which is composed of PlyCA and PlyCB subunits6. When purified and added extrinsically, the PlyC holoenzyme lyses group A streptococci (GAS) as well as other streptococcal groups in a matter of seconds, and it has been validated in vivo against GAS7. Significantly, monitoring residual enzyme kinetics after incubation at elevated temperature provides distinct evidence that PlyC loses lytic activity abruptly at 45 °C, suggesting a short therapeutic shelf life, which may limit further development of this enzyme. Further studies reveal that the lack of thermal stability is observed only for the PlyCA subunit, whereas the PlyCB subunit is stable up to ~90 °C (unpublished observation). In addition to PlyC, several examples in the literature describe the thermolabile nature of endolysins. For example, the Staphylococcus aureus endolysin LysK and the Streptococcus pneumoniae endolysins Cpl-1 and Pal spontaneously lose activity at 42 °C, 43.5 °C and 50.2 °C, respectively8-10. According to the Arrhenius equation, which relates the rate of a chemical reaction to the temperature of the system, an increase in thermostability will correlate with an increase in shelf life expectancy11. Toward this end, directed evolution has been shown to be a useful tool for altering the thermal activity of various molecules, but this technology has never been successfully exploited for the study of bacteriolytic enzymes. Likewise, successful accounts of improving the structural stability of this class of antimicrobials are nonexistent. In this video, we employ a novel methodology that uses an error-prone DNA polymerase followed by an optimized screening process in a 96 well microtiter plate format to identify mutations in the PlyCA subunit of the PlyC streptococcal endolysin that correlate with an increase in enzyme kinetic stability (Figure 1). Results after just one round of random mutagenesis suggest the methodology is generating PlyC variants that retain more than twice the residual activity of wild-type (WT) PlyC after elevated-temperature treatment.
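The screening readout reduces to a simple ratio: activity remaining after heat stress relative to activity before, compared between variant and wild type. A minimal sketch of that arithmetic, assuming (as is common for endolysin assays, though not stated here) that lytic activity is scored as the rate of decline in turbidity of a bacterial suspension; all rate values below are hypothetical.

```python
# Hedged sketch of the hit-calling arithmetic; numbers are invented.
def residual_activity(rate_before: float, rate_after: float) -> float:
    """Fraction of lytic activity retained after elevated-temperature incubation."""
    return rate_after / rate_before

wt = residual_activity(rate_before=0.30, rate_after=0.06)       # 20% retained
variant = residual_activity(rate_before=0.28, rate_after=0.13)  # ~46% retained

# A variant retaining > 2x the WT residual activity would count as a hit.
print(f"variant/WT residual activity ratio: {variant / wt:.1f}x")
```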
21 Related JoVE Articles!
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Hydrophobic Salt-modified Nafion for Enzyme Immobilization and Stabilization
Authors: Shannon Meredith, Shuai Xu, Matthew T. Meredith, Shelley D. Minteer.
Institutions: University of Utah .
Over the last decade, there has been a wealth of applications for immobilized and stabilized enzymes, including biocatalysis, biosensors, and biofuel cells.1-3 In most bioelectrochemical applications, enzymes or organelles are immobilized onto an electrode surface with the use of some type of polymer matrix. This polymer scaffold should keep the enzymes stable and allow for the facile diffusion of molecules and ions in and out of the matrix. Most polymers used for this type of immobilization are based on polyamines or polyalcohols - polymers that mimic the natural environment of the enzymes that they encapsulate and stabilize the enzyme through hydrogen or ionic bonding. Another method for stabilizing enzymes involves the use of micelles, which contain hydrophobic regions that can encapsulate and stabilize enzymes.4,5 In particular, the Minteer group has developed a micellar polymer based on commercially available Nafion.6,7 Nafion itself is a micellar polymer that allows for the channel-assisted diffusion of protons and other small cations, but the micelles and channels are extremely small, and the polymer is very acidic due to its sulfonic acid side chains, which is unfavorable for enzyme immobilization. However, when Nafion is mixed with an excess of hydrophobic alkyl ammonium salts such as tetrabutylammonium bromide (TBAB), the quaternary ammonium cations replace the protons and become the counter ions to the sulfonate groups on the polymer side chains (Figure 1). This results in larger micelles and channels within the polymer that allow for the diffusion of the large substrates and ions that are necessary for enzymatic function, such as nicotinamide adenine dinucleotide (NAD). This modified Nafion polymer has been used to immobilize many different types of enzymes as well as mitochondria for use in biosensors and biofuel cells.8-12 This paper describes a novel procedure for making this micellar polymer enzyme immobilization membrane that can stabilize enzymes. The synthesis of the micellar enzyme immobilization membrane, the procedure for immobilizing enzymes within the membrane, and the assays for studying the specific activity of the immobilized enzyme are detailed below.
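Specific activity assays of this kind typically track product appearance spectrophotometrically. As an illustration only (the protocol's actual assay conditions are not given here), a sketch of the standard Beer-Lambert calculation for an NAD-dependent dehydrogenase, converting an absorbance slope at 340 nm into units per mg of immobilized enzyme; all input values are hypothetical.

```python
# Hedged sketch of a specific-activity calculation; numbers are invented.
EXT_NADH = 6.22  # mM^-1 cm^-1, molar absorptivity of NADH at 340 nm
PATH_CM = 1.0    # cuvette path length in cm

def specific_activity(dA_per_min: float, volume_ml: float, protein_mg: float) -> float:
    """Units (umol product/min) per mg immobilized enzyme."""
    rate_mM_per_min = dA_per_min / (EXT_NADH * PATH_CM)  # Beer-Lambert
    umol_per_min = rate_mM_per_min * volume_ml           # mM * mL = umol
    return umol_per_min / protein_mg                     # U/mg

print(f"{specific_activity(dA_per_min=0.15, volume_ml=1.0, protein_mg=0.02):.2f} U/mg")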
Bioengineering, Issue 65, Materials Science, Chemical Engineering, enzyme immobilization, polymer modification, Nafion, enzyme stabilization, enzyme activity assays
High Throughput Screening of Fungal Endoglucanase Activity in Escherichia coli
Authors: Mary F. Farrow, Frances H. Arnold.
Institutions: California Institute of Technology, California Institute of Technology.
Cellulase enzymes (endoglucanases, cellobiohydrolases, and β-glucosidases) hydrolyze cellulose into component sugars, which in turn can be converted into fuel alcohols1. The potential for enzymatic hydrolysis of cellulosic biomass to provide renewable energy has intensified efforts to engineer cellulases for economical fuel production2. Of particular interest are fungal cellulases3-8, which are already being used industrially for foods and textiles processing. Identifying active variants among a library of mutant cellulases is critical to the engineering process; active mutants can be further tested for improved properties and/or subjected to additional mutagenesis. Efficient engineering of fungal cellulases has been hampered by a lack of genetic tools for native organisms and by difficulties in expressing the enzymes in heterologous hosts. Recently, Morikawa and coworkers developed a method for expressing in E. coli the catalytic domains of endoglucanases from H. jecorina3,9, an important industrial fungus with the capacity to secrete cellulases in large quantities. Functional E. coli expression has also been reported for cellulases from other fungi, including Macrophomina phaseolina10 and Phanerochaete chrysosporium11-12. We present a method for high throughput screening of fungal endoglucanase activity in E. coli (Figure 1). This method uses the common microbial dye Congo Red (CR) to visualize enzymatic degradation of carboxymethyl cellulose (CMC) by cells growing on solid medium. The activity assay requires inexpensive reagents, minimal manipulation, and gives unambiguous results as zones of degradation (“halos”) at the colony site. Although a quantitative measure of enzymatic activity cannot be determined by this method, we have found that halo size correlates with total enzymatic activity in the cell. Further characterization of individual positive clones will determine relative protein fitness. Traditional bacterial whole cell CMC/CR activity assays13 involve pouring agar containing CMC onto colonies, which is subject to cross-contamination, or incubating cultures in CMC agar wells, which is less amenable to large-scale experimentation. Here we report an improved protocol that modifies existing wash methods14 for cellulase activity: cells grown on CMC agar plates are removed prior to CR staining. Our protocol significantly reduces cross-contamination and is highly scalable, allowing the rapid screening of thousands of clones. In addition to H. jecorina enzymes, we have expressed and screened endoglucanase variants from Thermoascus aurantiacus and Penicillium decumbens (shown in Figure 2), suggesting that this protocol is applicable to enzymes from a range of organisms.
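The protocol scores halos by eye, but since halo size correlates with activity, a plate photograph can also be quantified. Purely as a hypothetical companion (not part of the published method), a sketch using scikit-image to segment bright halos from the Congo Red-stained background and report their areas; the file name and area threshold are assumptions.

```python
# Hypothetical image-analysis sketch: estimate halo areas from a grayscale
# plate photo in which cleared halos are brighter than the stained agar.
from skimage import io, filters, measure

img = io.imread("plate.png", as_gray=True)    # hypothetical file name
mask = img > filters.threshold_otsu(img)      # halos = bright regions
labels = measure.label(mask)

for region in measure.regionprops(labels):
    if region.area > 500:                     # ignore small specks (assumed cutoff)
        print(f"halo at {region.centroid}, area {region.area} px")
```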
Molecular Biology, Issue 54, cellulase, endoglucanase, CMC, Congo Red
GENPLAT: an Automated Platform for Biomass Enzyme Discovery and Cocktail Optimization
Authors: Jonathan Walton, Goutami Banerjee, Suzana Car.
Institutions: Michigan State University, Michigan State University.
The high cost of enzymes for biomass deconstruction is a major impediment to the economic conversion of lignocellulosic feedstocks to liquid transportation fuels such as ethanol. We have developed an integrated high throughput platform, called GENPLAT, for the discovery and development of novel enzymes and enzyme cocktails for the release of sugars from diverse pretreatment/biomass combinations. GENPLAT comprises four elements: individual pure enzymes, statistical design of experiments, robotic pipetting of biomass slurries and enzymes, and automated colorimetric determination of released Glc and Xyl. Individual enzymes are produced by expression in Pichia pastoris or Trichoderma reesei, or by chromatographic purification from commercial cocktails or from extracts of novel microorganisms. Simplex lattice (fractional factorial) mixture models are designed using commercial Design of Experiment statistical software. Enzyme mixtures of high complexity are constructed using robotic pipetting into a 96-well format. The measurement of released Glc and Xyl is automated using enzyme-linked colorimetric assays. Optimized enzyme mixtures containing as many as 16 components have been tested on a variety of feedstock and pretreatment combinations. GENPLAT is adaptable to mixtures of pure enzymes, mixtures of commercial products (e.g., Accellerase 1000 and Novozyme 188), extracts of novel microbes, or combinations thereof. To make and test mixtures of ~10 pure enzymes requires less than 100 μg of each protein and fewer than 100 total reactions, when operated at a final total loading of 15 mg protein/g glucan. We use enzymes from several sources. Enzymes can be purified from natural sources such as fungal cultures (e.g., Aspergillus niger, Cochliobolus carbonum, and Galerina marginata), or they can be made by expression of the encoding genes (obtained from the increasing number of microbial genome sequences) in hosts such as E. coli, Pichia pastoris, or a filamentous fungus such as T. reesei. Proteins can also be purified from commercial enzyme cocktails (e.g., Multifect Xylanase, Novozyme 188). An increasing number of pure enzymes, including glycosyl hydrolases, cell wall-active esterases, proteases, and lyases, are available from commercial sources, e.g., Megazyme, Inc. (www.megazyme.com), NZYTech (www.nzytech.com), and PROZOMIX (www.prozomix.com). Design-Expert software (Stat-Ease, Inc.) is used to create simplex-lattice designs and to analyze responses (in this case, Glc and Xyl release). Mixtures contain 4-20 components, which can vary in proportion between 0 and 100%. Assay points typically include the extreme vertices with a sufficient number of intervening points to generate a valid model. In the terminology of experimental design, most of our studies are "mixture" experiments, meaning that the sum of all components adds to a total fixed protein loading (expressed as mg/g glucan). The number of mixtures in the simplex-lattice depends on both the number of components in the mixture and the degree of polynomial (quadratic or cubic). For example, a 6-component experiment will entail 63 separate reactions with an augmented special cubic model, which can detect three-way interactions, whereas only 23 individual reactions are necessary with an augmented quadratic model. For mixtures containing more than eight components, a quadratic experimental design is more practical, and in our experience such models are usually statistically valid.
All enzyme loadings are expressed as a percentage of the final total loading (which for our experiments is typically 15 mg protein/g glucan). For "core" enzymes, the lower percentage limit is set to 5%. This limit was derived from our experience in which yields of Glc and/or Xyl were very low if any core enzyme was present at 0%. Poor models result from too many samples showing very low Glc or Xyl yields. Setting a lower limit in turn determines an upper limit. That is, for a six-component experiment, if the lower limit for each single component is set to 5%, then the upper limit of each single component will be 75%. The lower limits of all other enzymes considered as "accessory" are set to 0%. "Core" and "accessory" are somewhat arbitrary designations and will differ depending on the substrate, but in our studies the core enzymes for release of Glc from corn stover comprise the following enzymes from T. reesei: CBH1 (also known as Cel7A), CBH2 (Cel6A), EG1 (Cel7B), BG (β-glucosidase), EX3 (endo-β-1,4-xylanase, GH10), and BX (β-xylosidase).
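The constrained design geometry can be made concrete with a small enumeration. The sketch below generates the basic {6, 2} simplex lattice and maps it into the constrained region with the standard pseudo-component transformation x_i = L_i + (1 - sum(L)) * z_i; with a 5% floor on each of six components, each component then ranges from 5% to 75%, matching the text. Design-Expert performs this internally and additionally augments the lattice (e.g., with centroid and axial points, giving the 23- and 63-run designs cited above); this stand-alone version is for illustration only.

```python
# Hedged sketch: {q, m} simplex-lattice points with a per-component lower bound.
from itertools import combinations_with_replacement

def simplex_lattice(q: int, m: int):
    """All q-component mixtures whose proportions are multiples of 1/m."""
    points = set()
    for combo in combinations_with_replacement(range(q), m):
        points.add(tuple(combo.count(i) / m for i in range(q)))
    return sorted(points)

q, m = 6, 2                  # six components, quadratic lattice
lower = [0.05] * q           # 5% floor on each core enzyme
slack = 1 - sum(lower)       # 0.70 left to distribute among components

points = simplex_lattice(q, m)
mixtures = [[l + slack * z for l, z in zip(lower, pt)] for pt in points]

lo = min(min(mix) for mix in mixtures)
hi = max(max(mix) for mix in mixtures)
print(f"{len(mixtures)} blends; each component ranges {lo:.2f} to {hi:.2f}")
# -> 21 blends; each component ranges 0.05 to 0.75
```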
Bioengineering, Issue 56, cellulase, cellobiohydrolase, glucanase, xylanase, hemicellulase, experimental design, biomass, bioenergy, corn stover, glycosyl hydrolase
Application of MassSQUIRM for Quantitative Measurements of Lysine Demethylase Activity
Authors: Lauren P. Blair, Nathan L. Avaritt, Alan J. Tackett.
Institutions: University of Arkansas for Medical Sciences .
Recently, epigenetic regulators have been discovered as key players in many different diseases1-3. As a result, these enzymes are prime targets for small molecule studies and drug development4. Many epigenetic regulators have only recently been discovered and are still in the process of being classified. Among these enzymes are lysine demethylases, which remove methyl groups from lysines on histones and other proteins. Due to the novel nature of this class of enzymes, few assays have been developed to study their activity. This has been a roadblock to both the classification and high throughput study of histone demethylases. Currently, very few demethylase assays exist. Those that do exist tend to be qualitative in nature and cannot simultaneously distinguish among the different lysine methylation states (un-, mono-, di- and tri-). Mass spectrometry is commonly used to determine demethylase activity, but current mass spectrometric assays do not address whether differentially methylated peptides ionize differently. Differential ionization of methylated peptides makes comparing methylation states difficult and certainly not quantitative (Figure 1A). Thus available assays are not optimized for the comprehensive analysis of demethylase activity. Here we describe a method called MassSQUIRM (mass spectrometric quantitation using isotopic reductive methylation) that is based on reductive methylation of amine groups with deuterated formaldehyde to force all lysines to be di-methylated, thus making them essentially the same chemical species so that they ionize identically (Figure 1B). The only chemical difference following the reductive methylation is hydrogen versus deuterium, which does not affect MALDI ionization efficiency. The MassSQUIRM assay is specific for demethylase reaction products with un-, mono- or di-methylated lysines. The assay is also applicable to lysine methyltransferases giving the same reaction products. Here, we use a combination of reductive methylation chemistry and MALDI mass spectrometry to measure the activity of LSD1, a lysine demethylase capable of removing di- and mono-methyl groups, on a synthetic peptide substrate5. This assay is simple and easily amenable to any lab with access to a MALDI mass spectrometer in lab or through a proteomics facility. The assay has ~8-fold dynamic range and is readily scalable to plate format5.
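Once every species ionizes identically, quantitation collapses to peak-area ratios: the deuterium/hydrogen mass shifts keep the originally un-, mono- and di-methylated forms distinguishable by mass, so their relative abundance can be read directly from the spectrum. A minimal sketch of that final step, with hypothetical peak areas.

```python
# Hedged sketch of the MassSQUIRM readout; peak areas are invented.
# After reductive methylation all species ionize alike, so relative
# abundance of each original methylation state = its share of total area.
peak_areas = {"unmethylated": 1.8e5, "monomethyl": 3.1e5, "dimethyl": 1.1e5}

total = sum(peak_areas.values())
for state, area in peak_areas.items():
    print(f"{state}: {100 * area / total:.1f}% of substrate pool")
```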
Molecular Biology, Issue 61, LSD1, lysine demethylase, mass spectrometry, reductive methylation, demethylase quantification
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
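The DoE logic the authors apply can be illustrated with a toy factorial design: run a structured set of condition combinations, then estimate each factor's effect on the response. The sketch below uses a two-level full factorial with invented factors and yield values; the study itself relies on dedicated DoE software, knowledge-based parameter selection, and stepwise design augmentation rather than this bare-bones version.

```python
# Illustrative 2^3 factorial with main-effect estimation; data are invented.
from itertools import product
import numpy as np

factors = ["promoter", "leaf_age", "incubation_temp"]   # hypothetical factors
runs = list(product([-1, 1], repeat=3))                  # 8 coded runs
yields = np.array([1.0, 1.4, 0.9, 1.5, 1.2, 1.9, 1.0, 1.8])  # hypothetical response

for j, name in enumerate(factors):
    hi = yields[[i for i, r in enumerate(runs) if r[j] == 1]].mean()
    lo = yields[[i for i, r in enumerate(runs) if r[j] == -1]].mean()
    print(f"main effect of {name}: {hi - lo:+.2f}")
```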
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology
Authors: Sophie Imbeault, Nicolas Valenzuela, Stephen Fai, Steffany A.L. Bennett.
Institutions: University of Ottawa, Carleton University.
The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to recreate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic editing and 3D modelling software, rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere. This methodology enables visualization and analysis of the cellular position of target proteins and cells throughout the entire 3D culture topography and will facilitate a more detailed analysis of the spatial relationships between cells over the course of neurogenesis and gliogenesis in vitro. Both Imbeault and Valenzuela contributed equally and should be considered joint first authors.
Neuroscience, Issue 46, neural stem cell, hippocampus, cryosectioning, 3D modelling, neurosphere, Maya, compositing
Models and Methods to Evaluate Transport of Drug Delivery Systems Across Cellular Barriers
Authors: Rasa Ghaffarian, Silvia Muro.
Institutions: University of Maryland, University of Maryland.
Sub-micrometer carriers (nanocarriers; NCs) enhance efficacy of drugs by improving solubility, stability, circulation time, targeting, and release. Additionally, traversing cellular barriers in the body is crucial for both oral delivery of therapeutic NCs into the circulation and transport from the blood into tissues, where intervention is needed. NC transport across cellular barriers is achieved by: (i) the paracellular route, via transient disruption of the junctions that interlock adjacent cells, or (ii) the transcellular route, where materials are internalized by endocytosis, transported across the cell body, and secreted at the opposite cell surface (transcytosis). Delivery across cellular barriers can be facilitated by coupling therapeutics or their carriers with targeting agents that bind specifically to cell-surface markers involved in transport. Here, we provide methods to measure the extent and mechanism of NC transport across a model cell barrier, which consists of a monolayer of gastrointestinal (GI) epithelial cells grown on a porous membrane located in a transwell insert. Formation of a permeability barrier is confirmed by measuring transepithelial electrical resistance (TEER), transepithelial transport of a control substance, and immunostaining of tight junctions. As an example, ~200 nm polymer NCs are used, which carry a therapeutic cargo and are coated with an antibody that targets a cell-surface determinant. The antibody or therapeutic cargo is labeled with 125I for radioisotope tracing and labeled NCs are added to the upper chamber over the cell monolayer for varying periods of time. NCs associated with the cells and/or transported to the underlying chamber can be detected. Measurement of free 125I allows subtraction of the degraded fraction. The paracellular route is assessed by determining potential changes caused by NC transport to the barrier parameters described above. Transcellular transport is determined by addressing the effect of modulating endocytosis and transcytosis pathways.
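Transwell transport data of this kind are conventionally summarized as an apparent permeability coefficient, Papp = (dQ/dt) / (A * C0). The abstract does not state that this metric is used here, so treat the sketch below as a standard companion calculation with hypothetical numbers, assuming the 125I counts have already been converted into amounts.

```python
# Hedged sketch of the standard transwell permeability calculation.
def apparent_permeability(dQ_dt_ug_per_s: float, area_cm2: float,
                          c0_ug_per_ml: float) -> float:
    """Papp in cm/s: basolateral appearance rate / (insert area * apical conc.).
    Units check: (ug/s) / (cm2 * ug/cm3) = cm/s, since 1 ml = 1 cm3."""
    return dQ_dt_ug_per_s / (area_cm2 * c0_ug_per_ml)

# Hypothetical: 0.002 ug/s crossing a 1.12 cm2 insert, 100 ug/ml apical dose.
print(f"Papp = {apparent_permeability(0.002, 1.12, 100):.2e} cm/s")
```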
Bioengineering, Issue 80, Antigens, Enzymes, Biological Therapy, bioengineering (general), Pharmaceutical Preparations, Macromolecular Substances, Therapeutics, Digestive System and Oral Physiological Phenomena, Biological Phenomena, Cell Physiological Phenomena, drug delivery systems, targeted nanocarriers, transcellular transport, epithelial cells, tight junctions, transepithelial electrical resistance, endocytosis, transcytosis, radioisotope tracing, immunostaining
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
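The FA metric compared voxelwise above has a closed form: it is computed from the eigenvalues of the 3x3 diffusion tensor at each voxel. A minimal sketch with an invented example tensor (a full pipeline would of course fit the tensor from diffusion-weighted images first):

```python
# Fractional anisotropy from diffusion-tensor eigenvalues; tensor is invented.
import numpy as np

D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.3]]) * 1e-3   # example tensor, mm^2/s

lam = np.linalg.eigvalsh(D)              # three eigenvalues
# FA = sqrt(3/2) * ||lam - mean(lam)|| / ||lam||
fa = np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
print(f"FA = {fa:.2f}")                  # ~0.77 here; 0 = isotropic, 1 = fully anisotropic
```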
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Specificity Analysis of Protein Lysine Methyltransferases Using SPOT Peptide Arrays
Authors: Srikanth Kudithipudi, Denis Kusevic, Sara Weirich, Albert Jeltsch.
Institutions: Stuttgart University.
Lysine methylation is an emerging post-translational modification and it has been identified on several histone and non-histone proteins, where it plays crucial roles in cell development and many diseases. Approximately 5,000 lysine methylation sites have been identified on different proteins, which are set by a few dozen protein lysine methyltransferases (PKMTs). This suggests that each PKMT methylates multiple proteins; however, until now only one or two substrates have been identified for several of these enzymes. To approach this problem, we have introduced peptide array based substrate specificity analyses of PKMTs. Peptide arrays are powerful tools to characterize the specificity of PKMTs because methylation of several substrates with different sequences can be tested on one array. We synthesized peptide arrays on cellulose membrane using an Intavis SPOT synthesizer and analyzed the specificity of various PKMTs. Based on the results, novel substrates could be identified for several of these enzymes. For example, by employing peptide arrays we showed that NSD1 methylates H4K44 instead of the reported H4K20 and, in addition, that H1.5K168 is the highly preferred substrate over the previously known H3K36. Hence, peptide arrays are powerful tools to biochemically characterize the PKMTs.
Biochemistry, Issue 93, Peptide arrays, solid phase peptide synthesis, SPOT synthesis, protein lysine methyltransferases, substrate specificity profile analysis, lysine methylation
Microwave-assisted Functionalization of Poly(ethylene glycol) and On-resin Peptides for Use in Chain Polymerizations and Hydrogel Formation
Authors: Amy H. Van Hove, Brandon D. Wilson, Danielle S. W. Benoit.
Institutions: University of Rochester, University of Rochester, University of Rochester Medical Center.
One of the main benefits to using poly(ethylene glycol) (PEG) macromers in hydrogel formation is synthetic versatility. The ability to draw from a large variety of PEG molecular weights and configurations (arm number, arm length, and branching pattern) affords researchers tight control over resulting hydrogel structures and properties, including Young’s modulus and mesh size. This video will illustrate a rapid, efficient, solvent-free, microwave-assisted method for methacrylation of PEG precursors to form poly(ethylene glycol) dimethacrylate (PEGDM). This synthetic method provides much-needed starting materials for applications in drug delivery and regenerative medicine. The demonstrated method is superior to traditional methacrylation methods as it is significantly faster and simpler, as well as more economical and environmentally friendly, using smaller amounts of reagents and solvents. We will also demonstrate an adaptation of this technique for on-resin methacrylamide functionalization of peptides. This on-resin method allows the N-terminus of peptides to be functionalized with methacrylamide groups prior to deprotection and cleavage from resin. This allows for selective addition of methacrylamide groups to the N-termini of the peptides while amino acids with reactive side groups (e.g. primary amine of lysine, primary alcohol of serine, secondary alcohols of threonine, and phenol of tyrosine) remain protected, preventing functionalization at multiple sites. This article will detail common analytical methods (proton nuclear magnetic resonance spectroscopy (1H-NMR) and matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-ToF)) to assess the efficiency of the functionalizations. Common pitfalls and suggested troubleshooting methods will be addressed, as will modifications of the technique which can be used to further tune macromer functionality and resulting hydrogel physical and chemical properties. Use of synthesized products for the formation of hydrogels for drug delivery and cell-material interaction studies will be demonstrated, with particular attention paid to modifying hydrogel composition to affect mesh size, controlling hydrogel stiffness and drug release.
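Functionalization efficiency by 1H-NMR boils down to comparing the vinyl end-group signal with the PEG backbone signal. The sketch below shows one common way to set up that ratio for a linear (2-arm) PEG; the specific peak assignments (~5.6/6.1 ppm vinyl, ~3.6 ppm backbone), integral values, and helper name are assumptions for illustration, not the article's exact procedure.

```python
# Hedged sketch: percent methacrylation of linear PEG from NMR integrals.
def percent_functionalization(vinyl_integral: float, backbone_integral: float,
                              repeat_units: int, arms: int = 2) -> float:
    """Observed vinyl signal vs. the value expected at 100% end-group
    conversion: 2 vinyl H per methacrylate end, 4 backbone H per CH2CH2O."""
    expected_vinyl = backbone_integral * (2 * arms) / (4 * repeat_units)
    return 100 * vinyl_integral / expected_vinyl

# e.g. PEG 10 kDa (~227 repeat units of 44 g/mol), backbone integral set to 100:
print(f"{percent_functionalization(0.20, 100, 227):.0f}% functionalized")  # ~45%
```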
Chemistry, Issue 80, Poly(ethylene glycol), peptides, polymerization, polymers, methacrylation, peptide functionalization, 1H-NMR, MALDI-ToF, hydrogels, macromer synthesis
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically 50 mM sodium acetate or 50 mM Tris) is chosen so that its acid dissociation constant (pKa) best matches the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Note also that this method assesses only potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
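Converting the fluorescence increase into an activity rate requires a standard curve for the free fluorophore (e.g., MUB) measured in the same soil slurry to correct for quenching. A minimal sketch of that final calculation, with hypothetical numbers; units follow the common convention of nmol substrate cleaved per gram dry soil per hour.

```python
# Hedged sketch of the activity calculation; all values are invented.
def enzyme_activity(net_fluorescence: float, slope_fluor_per_nmol: float,
                    hours: float, soil_g: float) -> float:
    """Potential activity in nmol g^-1 dry soil h^-1, using a fluorophore
    standard curve (slope in fluorescence units per nmol) run in soil slurry."""
    nmol_cleaved = net_fluorescence / slope_fluor_per_nmol
    return nmol_cleaved / (soil_g * hours)

# e.g. 12,000 net fluorescence units, slope 800 units/nmol, 3 h incubation,
# 1.0 g dry-weight-equivalent soil in the assayed slurry aliquot:
print(f"{enzyme_activity(12000, 800, 3, 1.0):.1f} nmol g^-1 h^-1")  # 5.0
```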
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction, producing spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
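Primer design and the melting temperature (Tm) are central to the optimization strategies mentioned above. As a companion sketch (the protocol itself may recommend specific calculators), here are two textbook Tm estimates: the Wallace rule, reasonable for short primers, and a simple GC-content formula with a length correction for longer ones.

```python
# Two quick Tm estimates; real-world primer design often uses
# nearest-neighbor thermodynamic models instead.
def tm_wallace(primer: str) -> int:
    """Wallace rule, Tm = 2(A+T) + 4(G+C); best for primers under ~14 nt."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer: str) -> float:
    """Length-corrected GC formula: Tm = 64.9 + 41*(GC - 16.4)/N."""
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41 * (gc - 16.4) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"  # example 24-mer
print(f"Wallace: {tm_wallace(primer)} C, GC-based: {tm_gc(primer):.1f} C")
```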
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C-infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
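The over-representation idea behind SCOPE's scoring can be illustrated in miniature: count how many promoters in a gene set contain a candidate motif and compare that with a background expectation. The toy below uses a binomial tail probability with invented sequences, a non-degenerate motif, and an assumed background rate; SCOPE's actual statistics are considerably more sophisticated.

```python
# Toy over-representation test; sequences, motif, and background are invented.
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

promoters = ["TTACCGGTAA", "GGACCGGTTT", "ACCGGTACGT", "TTTTGGGGCC"]
motif = "ACCGGT"                       # a BEAM-style non-degenerate motif
hits = sum(motif in s for s in promoters)

p_bg = 0.05  # assumed per-promoter chance of containing the motif by accident
print(f"{hits}/{len(promoters)} promoters contain {motif}; "
      f"P(>= {hits} by chance) = {binom_sf(hits, len(promoters), p_bg):.2e}")
```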
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Molecular Evolution of the Tre Recombinase
Authors: Frank Buchholz.
Institutions: Max Planck Institute of Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed, molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells. We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested for recombination activity. Initially Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. As the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. These subsets acted as intermediates and showed recombination activity. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles individual recombinases were functionally and structurally analyzed. The most active recombinase -- Tre -- had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1-infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution
Detection of Protein Ubiquitination
Authors: Yeun Su Choo, Zhuohua Zhang.
Institutions: The Sanford Burnham Institute for Medical Research.
Ubiquitination, the covalent attachment of the polypeptide ubiquitin to target proteins, is a key posttranslational modification carried out by a set of three enzymes. They include ubiquitin-activating enzyme E1, ubiquitin-conjugating enzyme E2, and ubiquitin ligase E3. Unlike E1 and E2, E3 ubiquitin ligases display substrate specificity. On the other hand, numerous deubiquitylating enzymes have roles in processing polyubiquitinated proteins. Ubiquitination can result in changes in protein stability, cellular localization, and biological activity. Mutations of genes involved in the ubiquitination/deubiquitination pathway or altered ubiquitin system function are associated with many different human diseases, such as various types of cancer, neurodegeneration, and metabolic disorders. The detection of altered or normal ubiquitination of target proteins may provide a better understanding of the pathogenesis of these diseases. Here, we describe protocols to detect protein ubiquitination in cultured cells (in vivo) and in test tubes (in vitro). These protocols are also useful to detect other ubiquitin-like modifications, such as sumoylation and neddylation.
Cell Biology, Biochemistry, Issue 30, ubiquitination, cultured cell, in vitro system, immunoprecipitation, immunoblotting, ubiquitin, posttranslational modification
Interview: HIV-1 Proviral DNA Excision Using an Evolved Recombinase
Authors: Joachim Hauber.
Institutions: Heinrich-Pette-Institute for Experimental Virology and Immunology, University of Hamburg.
HIV-1 integrates into the host chromosome of infected cells and persists as a provirus flanked by long terminal repeats. Current treatment strategies primarily target virus enzymes or virus-cell fusion, suppressing the viral life cycle without eradicating the infection. Since the integrated provirus is not targeted by these approaches, new resistant strains of HIV-1 may emerge. Here, we report that the engineered recombinase Tre (see "Molecular Evolution of the Tre Recombinase", Buchholz, F., Max Planck Institute of Molecular Cell Biology and Genetics, Dresden) efficiently excises integrated HIV-1 proviral DNA from the genome of infected cells. We produced loxLTR containing viral pseudotypes and infected HeLa cells to examine whether Tre recombinase can excise the provirus from the genome of HIV-1 infected human cells. A virus particle-releasing cell line was cloned and transfected with a plasmid expressing Tre or with a parental control vector. Recombinase activity and virus production were monitored. All assays demonstrated the efficient deletion of the provirus from infected cells without visible cytotoxic effects. These results serve as proof of principle that it is possible to evolve a recombinase to specifically target an HIV-1 LTR and that this recombinase is capable of excising the HIV-1 provirus from the genome of HIV-1-infected human cells. Before an engineered recombinase could enter the therapeutic arena, however, significant obstacles need to be overcome. Among the most critical issues we face are efficient and safe delivery to targeted cells and the avoidance of side effects.
Medicine, Issue 16, HIV, Cell Biology, Recombinase, provirus, HeLa Cells

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.