JoVE Visualize
PubMed Article
The optimal exponent base for emPAI is 6.5.
PLoS ONE
Exponentially Modified Protein Abundance Index (emPAI) is an established method of estimating protein abundances from peptide counts in a single LC-MS/MS experiment. emPAI is defined as 10^PAI − 1, where PAI (Protein Abundance Index) denotes the ratio of observed to observable peptides. emPAI was first proposed by Ishihama et al. [1], who found that PAI is approximately proportional to the logarithm of absolute protein concentration. I define emPAI65 = 6.5^PAI − 1 and show that it performs significantly better than emPAI, while it is equally easy to compute. The higher accuracy of emPAI65 is demonstrated by analyzing three data sets, including the one used in the original study [1]. I conclude that emPAI65 ought to be used instead of the original emPAI for protein quantitation.
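As a minimal illustration of the two indices (using made-up peptide counts, not data from the paper), a short Python sketch of the calculation might look like this:

def pai(observed_peptides, observable_peptides):
    """Protein Abundance Index: ratio of observed to observable peptides."""
    return observed_peptides / observable_peptides

def empai(observed, observable, base=10.0):
    """Exponentially modified PAI: base**PAI - 1 (base 10 gives the original emPAI)."""
    return base ** pai(observed, observable) - 1

# Hypothetical peptide counts: {protein: (observed, observable)}
proteins = {"protein_A": (4, 20), "protein_B": (12, 30)}
for name, (obs, obsable) in proteins.items():
    print(name,
          "emPAI =", round(empai(obs, obsable), 3),
          "emPAI65 =", round(empai(obs, obsable, base=6.5), 3))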
Authors: Andrew C. Tolonen, Wilhelm Haas.
Published: 07-01-2014
ABSTRACT
Stable isotope labeling of peptides by reductive dimethylation (ReDi labeling) is a method to accurately quantify protein expression differences between samples using mass spectrometry. ReDi labeling is performed using either regular (light) or deuterated (heavy) forms of formaldehyde and sodium cyanoborohydride to add two methyl groups to each free amine. Here we demonstrate a robust protocol for ReDi labeling and quantitative comparison of complex protein mixtures. Protein samples for comparison are digested into peptides, labeled to carry either light or heavy methyl tags, mixed, and co-analyzed by LC-MS/MS. Relative protein abundances are quantified by comparing the ion chromatogram peak areas of heavy and light labeled versions of the constituent peptide extracted from the full MS spectra. The method described here includes sample preparation by reversed-phase solid phase extraction, on-column ReDi labeling of peptides, peptide fractionation by basic pH reversed-phase (BPRP) chromatography, and StageTip peptide purification. We discuss advantages and limitations of ReDi labeling with respect to other methods for stable isotope incorporation. We highlight novel applications using ReDi labeling as a fast, inexpensive, and accurate method to compare protein abundances in nearly any type of sample.
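As a rough sketch of the quantitation step described above (the peak areas and the use of the median as the protein-level summary are illustrative assumptions, not prescriptions from the protocol), in Python:

import statistics

# Hypothetical extracted-ion-chromatogram peak areas for light/heavy versions
# of three peptides from one protein (illustrative numbers only).
peptide_peak_areas = [
    {"light": 1.8e6, "heavy": 9.2e5},
    {"light": 2.1e6, "heavy": 1.0e6},
    {"light": 9.5e5, "heavy": 5.1e5},
]

# Peptide-level relative abundance: heavy/light peak-area ratio.
peptide_ratios = [p["heavy"] / p["light"] for p in peptide_peak_areas]

# Summarize at the protein level; the median is one common, outlier-robust choice.
protein_ratio = statistics.median(peptide_ratios)
print("peptide H/L ratios:", [round(r, 2) for r in peptide_ratios])
print("protein H/L ratio (median):", round(protein_ratio, 2))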
27 Related JoVE Articles
Determination of the Gas-phase Acidities of Oligopeptides
Authors: Jianhua Ren, Ashish Sawhney, Yuan Tian, Bhupinder Padda, Patrick Batoon.
Institutions: University of the Pacific.
Amino acid residues located at different positions in folded proteins often exhibit different degrees of acidities. For example, a cysteine residue located at or near the N-terminus of a helix is often more acidic than that at or near the C-terminus 1-6. Although extensive experimental studies on the acid-base properties of peptides have been carried out in the condensed phase, in particular in aqueous solutions 6-8, the results are often complicated by solvent effects 7. In fact, most of the active sites in proteins are located near the interior region where solvent effects have been minimized 9,10. In order to understand intrinsic acid-base properties of peptides and proteins, it is important to perform the studies in a solvent-free environment. We present a method to measure the acidities of oligopeptides in the gas-phase. We use a cysteine-containing oligopeptide, Ala3CysNH2 (A3CH), as the model compound. The measurements are based on the well-established extended Cooks kinetic method (Figure 1) 11-16. The experiments are carried out using a triple-quadrupole mass spectrometer interfaced with an electrospray ionization (ESI) ion source (Figure 2). For each peptide sample, several reference acids are selected. The reference acids are structurally similar organic compounds with known gas-phase acidities. A solution of the mixture of the peptide and a reference acid is introduced into the mass spectrometer, and a gas-phase proton-bound anionic cluster of peptide-reference acid is formed. The proton-bound cluster is mass isolated and subsequently fragmented via collision-induced dissociation (CID) experiments. The resulting fragment ion abundances are analyzed using a relationship between the acidities and the cluster ion dissociation kinetics. The gas-phase acidity of the peptide is then obtained by linear regression of the thermo-kinetic plots 17,18. The method can be applied to a variety of molecular systems, including organic compounds, amino acids and their derivatives, oligonucleotides, and oligopeptides. By comparing the gas-phase acidities measured experimentally with those values calculated for different conformers, conformational effects on the acidities can be evaluated.
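For orientation, the following Python sketch shows the regression logic of the simple (single effective temperature) kinetic method; the extended method repeats this analysis at several collision energies, and all reference acidities and ion ratios below are invented for illustration:

import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)

# Known gas-phase acidities (Delta_acid_H, kJ/mol) of the reference acids and the
# measured CID fragment-ion ratios ln([A-]/[Ref-]) for each proton-bound cluster.
ref_acidity = np.array([1410.0, 1420.0, 1430.0, 1440.0])
ln_ratio = np.array([-1.9, -0.6, 0.7, 2.0])

# Simple kinetic method: ln([A-]/[Ref-]) ~ (DeltaH_ref - DeltaH_peptide) / (R * T_eff)
slope, intercept = np.polyfit(ref_acidity, ln_ratio, 1)
T_eff = 1.0 / (R * slope)              # effective temperature of the clusters
peptide_acidity = -intercept / slope   # x-intercept gives the peptide acidity
print(f"T_eff ~ {T_eff:.0f} K, peptide Delta_acid_H ~ {peptide_acidity:.1f} kJ/mol")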
Chemistry, Issue 76, Biochemistry, Molecular Biology, Oligopeptide, gas-phase acidity, kinetic method, collision-induced dissociation, triple-quadrupole mass spectrometry, oligopeptides, peptides, mass spectrometry, MS
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction and produce spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
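As a small aside on primer design, the following Python sketch estimates primer melting temperature with two common rules of thumb (the Wallace rule and a GC-content formula); these are generic approximations and the example primer is hypothetical, not a recommendation from this article:

def tm_wallace(primer):
    """Wallace rule: 2 degC per A/T plus 4 degC per G/C; suited to short primers."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer):
    """GC-content rule: 64.9 + 41*(GC - 16.4)/length, for primers longer than ~13 nt."""
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"  # hypothetical example primer
print(f"Wallace Tm ~ {tm_wallace(primer)} degC, GC-rule Tm ~ {tm_gc(primer):.1f} degC")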
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
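To make the idea of a designed experiment concrete, here is a minimal Python sketch that enumerates a full-factorial design for three hypothetical factors; the actual study used software-guided optimal (reduced) designs, so this only illustrates the starting point:

from itertools import product

# Three hypothetical factors and their levels (names and values are assumptions).
factors = {
    "promoter": ["35S", "nos"],          # regulatory element in the expression construct
    "plant_age_days": [35, 42, 49],      # growth/development parameter
    "incubation_temp_C": [22, 25],       # condition during transient expression
}

# Enumerate the full factorial; DoE software would typically reduce this to an
# optimal subset of runs before augmenting the design step-wise.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(design)} runs in the full factorial; first runs:")
for run in design[:3]:
    print(run)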
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Authors: Andreas Ender, Albert Mehl.
Institutions: University of Zürich.
Reference scanners are used in dental medicine to verify many procedures. The main interest is in verifying impression methods, as these serve as the basis for dental restorations. The current limitation of many reference scanners is their lack of accuracy when scanning large objects such as full dental arches, or their limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to the highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions, and different model materials were verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences across many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes ranging from single teeth or restorations up to the full arch.
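As an illustration of how trueness and precision can be summarized (treating trueness as the mean deviation of test scans from a reference scan and precision as the mean deviation between repeated scans, with invented numbers), a Python sketch:

import numpy as np

# Mean surface deviation (micrometers) of each test scan from the reference scan
# ("trueness") and pairwise deviations between repeated test scans ("precision").
# All values are invented for illustration.
test_vs_reference = np.array([5.9, 4.8, 5.5, 5.0])
test_vs_test = np.array([1.4, 1.7, 1.5, 1.9, 1.6])

trueness = test_vs_reference.mean()
precision = test_vs_test.mean()
print(f"trueness  ~ {trueness:.1f} +/- {test_vs_reference.std(ddof=1):.1f} um")
print(f"precision ~ {precision:.1f} +/- {test_vs_test.std(ddof=1):.1f} um")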
Medicine, Issue 86, Laboratories, Dental, Calibration, Technology, Dental impression, Accuracy, Trueness, Precision, Full arch scan, Abrasion
Isolation of Fidelity Variants of RNA Viruses and Characterization of Virus Mutation Frequency
Authors: Stéphanie Beaucourt, Antonio V. Bordería, Lark L. Coffey, Nina F. Gnädig, Marta Sanz-Ramos, Yasnee Beeharry, Marco Vignuzzi.
Institutions: Institut Pasteur .
RNA viruses use RNA dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measure of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
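A back-of-the-envelope version of the mutation-frequency calculation, with hypothetical clone counts and read lengths, might look like the following Python sketch:

def mutation_frequency(total_mutations, clones_sequenced, nt_per_clone):
    """Mutations per 10,000 nucleotides sequenced."""
    total_nt = clones_sequenced * nt_per_clone
    return 1e4 * total_mutations / total_nt

# Hypothetical counts for a wild-type virus and a candidate fidelity variant.
wt = mutation_frequency(total_mutations=38, clones_sequenced=96, nt_per_clone=800)
variant = mutation_frequency(total_mutations=21, clones_sequenced=96, nt_per_clone=800)
print(f"wild type:                  {wt:.1f} mutations per 10^4 nt")
print(f"candidate fidelity variant: {variant:.1f} mutations per 10^4 nt")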
Immunology, Issue 52, Polymerase fidelity, RNA virus, mutation frequency, mutagen, RNA polymerase, viral evolution
Generation of Enterobacter sp. YSU Auxotrophs Using Transposon Mutagenesis
Authors: Jonathan James Caguiat.
Institutions: Youngstown State University.
Prototrophic bacteria grow on M-9 minimal salts medium supplemented with glucose (M-9 medium), which is used as a carbon and energy source. Auxotrophs can be generated using a transposome. The commercially available, Tn5-derived transposome used in this protocol consists of a linear segment of DNA containing an R6Kγ replication origin, a gene for kanamycin resistance and two mosaic sequence ends, which serve as transposase binding sites. The transposome, provided as a DNA/transposase protein complex, is introduced by electroporation into the prototrophic strain, Enterobacter sp. YSU, and randomly incorporates itself into this host’s genome. Transformants are replica plated onto Luria-Bertani agar plates containing kanamycin, (LB-kan) and onto M-9 medium agar plates containing kanamycin (M-9-kan). The transformants that grow on LB-kan plates but not on M-9-kan plates are considered to be auxotrophs. Purified genomic DNA from an auxotroph is partially digested, ligated and transformed into a pir+ Escherichia coli (E. coli) strain. The R6Kγ replication origin allows the plasmid to replicate in pir+ E. coli strains, and the kanamycin resistance marker allows for plasmid selection. Each transformant possesses a new plasmid containing the transposon flanked by the interrupted chromosomal region. Sanger sequencing and the Basic Local Alignment Search Tool (BLAST) suggest a putative identity of the interrupted gene. There are three advantages to using this transposome mutagenesis strategy. First, it does not rely on the expression of a transposase gene by the host. Second, the transposome is introduced into the target host by electroporation, rather than by conjugation or by transduction and therefore is more efficient. Third, the R6Kγ replication origin makes it easy to identify the mutated gene which is partially recovered in a recombinant plasmid. This technique can be used to investigate the genes involved in other characteristics of Enterobacter sp. YSU or of a wider variety of bacterial strains.
Microbiology, Issue 92, Auxotroph, transposome, transposon, mutagenesis, replica plating, glucose minimal medium, complex medium, Enterobacter
A Guided Materials Screening Approach for Developing Quantitative Sol-gel Derived Protein Microarrays
Authors: Blake-Joseph Helka, John D. Brennan.
Institutions: McMaster University .
Microarrays have found use in the development of high-throughput assays for new materials and discovery of small-molecule drug leads. Herein we describe a guided material screening approach to identify sol-gel based materials that are suitable for producing three-dimensional protein microarrays. The approach first identifies materials that can be printed as microarrays, narrows down the number of materials by identifying those that are compatible with a given enzyme assay, and then hones in on optimal materials based on retention of maximum enzyme activity. This approach is applied to develop microarrays suitable for two different enzyme assays, one using acetylcholinesterase and the other using a set of four key kinases involved in cancer. In each case, it was possible to produce microarrays that could be used for quantitative small-molecule screening assays and production of dose-dependent inhibitor response curves. Importantly, the ability to screen many materials produced information on the types of materials that best suited both microarray production and retention of enzyme activity. The materials data provide insight into basic material requirements necessary for tailoring optimal, high-density sol-gel derived microarrays.
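As a sketch of how dose-dependent inhibitor response curves from such arrays can be fit (a generic four-parameter logistic model with invented data points, not the exact fitting procedure used in the study), in Python:

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Invented residual enzyme activity (%) at increasing inhibitor concentrations (uM).
inhibitor_uM = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
activity_pct = np.array([98.0, 95.0, 85.0, 62.0, 35.0, 15.0, 6.0])

params, _ = curve_fit(four_pl, inhibitor_uM, activity_pct,
                      p0=[0.0, 100.0, 0.5, 1.0],
                      bounds=([-10.0, 50.0, 1e-3, 0.1], [20.0, 150.0, 100.0, 5.0]))
print(f"fitted IC50 ~ {params[2]:.2f} uM, Hill slope ~ {params[3]:.2f}")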
Chemistry, Issue 78, Biochemistry, Chemical Engineering, Molecular Biology, Genetics, Bioengineering, Biomedical Engineering, Chemical Biology, Biocompatible Materials, Siloxanes, Enzymes, Immobilized, chemical analysis techniques, chemistry (general), materials (general), spectroscopic analysis (chemistry), polymer matrix composites, testing of materials (composite materials), Sol-gel, microarray, high-throughput screening, acetylcholinesterase, kinase, drug discovery, assay
Best Current Practice for Obtaining High Quality EEG Data During Simultaneous fMRI
Authors: Karen J. Mullinger, Pierluigi Castellone, Richard Bowtell.
Institutions: University of Nottingham , Brain Products GmbH.
Simultaneous EEG-fMRI allows the excellent temporal resolution of EEG to be combined with the high spatial accuracy of fMRI. The data from these two modalities can be combined in a number of ways, but all rely on the acquisition of high quality EEG and fMRI data. EEG data acquired during simultaneous fMRI are affected by several artifacts, including the gradient artefact (due to the changing magnetic field gradients required for fMRI), the pulse artefact (linked to the cardiac cycle) and movement artifacts (resulting from movements in the strong magnetic field of the scanner, and muscle activity). Post-processing methods for successfully correcting the gradient and pulse artifacts require a number of criteria to be satisfied during data acquisition. Minimizing head motion during EEG-fMRI is also imperative for limiting the generation of artifacts. Interactions between the radio frequency (RF) pulses required for MRI and the EEG hardware may occur and can cause heating. This is only a significant risk if safety guidelines are not satisfied. Hardware design and set-up, as well as careful selection of which MR sequences are run with the EEG hardware present must therefore be considered. The above issues highlight the importance of the choice of the experimental protocol employed when performing a simultaneous EEG-fMRI experiment. Based on previous research we describe an optimal experimental set-up. This provides high quality EEG data during simultaneous fMRI when using commercial EEG and fMRI systems, with safety risks to the subject minimized. We demonstrate this set-up in an EEG-fMRI experiment using a simple visual stimulus. However, much more complex stimuli can be used. Here we show the EEG-fMRI set-up using a Brain Products GmbH (Gilching, Germany) MRplus, 32 channel EEG system in conjunction with a Philips Achieva (Best, Netherlands) 3T MR scanner, although many of the techniques are transferable to other systems.
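For context, one widely used offline approach to the gradient artifact is average artifact subtraction; the Python sketch below illustrates the idea on a synthetic trace (the sampling parameters and the artifact itself are assumed values, and real pipelines add slice-timing alignment, filtering, and pulse-artifact correction):

import numpy as np

tr_samples = 10000  # assumed samples per fMRI volume (TR)
n_volumes = 20

# Build a synthetic trace: a repeating gradient-like artifact plus small "EEG" noise.
rng = np.random.default_rng(0)
artifact = 200.0 * np.sin(np.linspace(0, 40 * np.pi, tr_samples))
eeg = np.concatenate([artifact + rng.normal(0, 5, tr_samples) for _ in range(n_volumes)])

# 1. Epoch the recording on the volume (or slice) trigger positions.
epochs = eeg.reshape(n_volumes, tr_samples)
# 2. Average the epochs to form an artifact template.
template = epochs.mean(axis=0)
# 3. Subtract the template from every epoch and stitch the trace back together.
cleaned = (epochs - template).ravel()

print("RMS before:", round(float(np.sqrt(np.mean(eeg ** 2))), 1),
      "uV; after:", round(float(np.sqrt(np.mean(cleaned ** 2))), 1), "uV")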
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Biophysics, Medicine, Neuroimaging, Functional Neuroimaging, Investigative Techniques, neurosciences, EEG, functional magnetic resonance imaging, fMRI, magnetic resonance imaging, MRI, simultaneous, recording, imaging, clinical techniques
RNA-seq Analysis of Transcriptomes in Thrombin-treated and Control Human Pulmonary Microvascular Endothelial Cells
Authors: Dilyara Cheranova, Margaret Gibson, Suman Chaudhary, Li Qin Zhang, Daniel P. Heruth, Dmitry N. Grigoryev, Shui Qing Ye.
Institutions: Children's Mercy Hospital and Clinics, School of Medicine, University of Missouri-Kansas City.
The characterization of gene expression in cells via measurement of mRNA levels is a useful tool in determining how the transcriptional machinery of the cell is affected by external signals (e.g. drug treatment), or how cells differ between a healthy state and a diseased state. With the advent and continuous refinement of next-generation DNA sequencing technology, RNA-sequencing (RNA-seq) has become an increasingly popular method of transcriptome analysis to catalog all species of transcripts, to determine the transcriptional structure of all expressed genes and to quantify the changing expression levels of the total set of transcripts in a given cell, tissue or organism1,2 . RNA-seq is gradually replacing DNA microarrays as a preferred method for transcriptome analysis because it has the advantages of profiling a complete transcriptome, providing a digital type datum (copy number of any transcript) and not relying on any known genomic sequence3. Here, we present a complete and detailed protocol to apply RNA-seq to profile transcriptomes in human pulmonary microvascular endothelial cells with or without thrombin treatment. This protocol is based on our recent published study entitled "RNA-seq Reveals Novel Transcriptome of Genes and Their Isoforms in Human Pulmonary Microvascular Endothelial Cells Treated with Thrombin,"4 in which we successfully performed the first complete transcriptome analysis of human pulmonary microvascular endothelial cells treated with thrombin using RNA-seq. It yielded unprecedented resources for further experimentation to gain insights into molecular mechanisms underlying thrombin-mediated endothelial dysfunction in the pathogenesis of inflammatory conditions, cancer, diabetes, and coronary heart disease, and provides potential new leads for therapeutic targets to those diseases. The descriptive text of this protocol is divided into four parts. The first part describes the treatment of human pulmonary microvascular endothelial cells with thrombin and RNA isolation, quality analysis and quantification. The second part describes library construction and sequencing. The third part describes the data analysis. The fourth part describes an RT-PCR validation assay. Representative results of several key steps are displayed. Useful tips or precautions to boost success in key steps are provided in the Discussion section. Although this protocol uses human pulmonary microvascular endothelial cells treated with thrombin, it can be generalized to profile transcriptomes in both mammalian and non-mammalian cells and in tissues treated with different stimuli or inhibitors, or to compare transcriptomes in cells or tissues between a healthy state and a disease state.
Genetics, Issue 72, Molecular Biology, Immunology, Medicine, Genomics, Proteins, RNA-seq, Next Generation DNA Sequencing, Transcriptome, Transcription, Thrombin, Endothelial cells, high-throughput, DNA, genomic DNA, RT-PCR, PCR
Nanomanipulation of Single RNA Molecules by Optical Tweezers
Authors: William Stephenson, Gorby Wan, Scott A. Tenenbaum, Pan T. X. Li.
Institutions: University at Albany, State University of New York.
A large portion of the human genome is transcribed but not translated. In this post-genomic era, regulatory functions of RNA have been shown to be increasingly important. As RNA function often depends on its ability to adopt alternative structures, it is difficult to predict RNA three-dimensional structures directly from sequence. Single-molecule approaches show potential to solve the problem of RNA structural polymorphism by monitoring molecular structures one molecule at a time. This work presents a method to precisely manipulate the folding and structure of single RNA molecules using optical tweezers. First, methods to synthesize molecules suitable for single-molecule mechanical work are described. Next, various calibration procedures to ensure proper operation of the optical tweezers are discussed, followed by an explanation of the experiments themselves. To demonstrate the utility of the technique, results of mechanically unfolding RNA hairpins and a single RNA kissing complex are presented. In these examples, the nanomanipulation technique was used to study folding of each structural domain, including secondary and tertiary, independently. Lastly, the limitations and future applications of the method are discussed.
Bioengineering, Issue 90, RNA folding, single-molecule, optical tweezers, nanomanipulation, RNA secondary structure, RNA tertiary structure
Proteomic Sample Preparation from Formalin Fixed and Paraffin Embedded Tissue
Authors: Jacek R. Wiśniewski.
Institutions: Max Planck Institute of Biochemistry.
Preserved clinical material is a unique source for proteomic investigation of human disorders. Here we describe an optimized protocol allowing large-scale quantitative analysis of formalin fixed and paraffin embedded (FFPE) tissue. The procedure comprises four distinct steps. The first is the preparation of sections from the FFPE material and microdissection of the cells of interest. In the second step the isolated cells are lysed and processed using the 'filter aided sample preparation' (FASP) technique. In this step, proteins are depleted of the reagents used for sample lysis and are digested in two steps using endoproteinase LysC and trypsin. After each digestion, the peptides are collected in separate fractions and their content is determined using a highly sensitive fluorescence measurement. Finally, the peptides are fractionated on 'pipette-tip' microcolumns. The LysC peptides are separated into 4 fractions whereas the tryptic peptides are separated into 2 fractions. Samples prepared in this way allow analysis of proteomes from minute amounts of material to a depth of 10,000 proteins. Thus, the described workflow is a powerful technique for studying diseases in a system-wide fashion as well as for identifying potential biomarkers and drug targets.
Chemistry, Issue 79, Clinical Chemistry Tests, Proteomics, analytical chemistry, Formalin fixed and paraffin embedded (FFPE), sample preparation, filter aided sample preparation (FASP), clinical proteomics, microdissection, SAX-fractionation
Glycopeptide Capture for Cell Surface Proteomics
Authors: M. C. Gilbert Lee, Bingyun Sun.
Institutions: Simon Fraser University.
Cell surface proteins, including extracellular matrix proteins, participate in all major cellular processes and functions, such as growth, differentiation, and proliferation. A comprehensive characterization of these proteins provides rich information for biomarker discovery, cell-type identification, and drug-target selection, as well as helping to advance our understanding of cellular biology and physiology. Surface proteins, however, pose significant analytical challenges, because of their inherently low abundance, high hydrophobicity, and heavy post-translational modifications. Taking advantage of the prevalent glycosylation on surface proteins, we introduce here a high-throughput glycopeptide-capture approach that integrates the advantages of several existing N-glycoproteomics means. Our method can enrich the glycopeptides derived from surface proteins and remove their glycans for facile proteomics using LC-MS. The resolved N-glycoproteome comprises the information of protein identity and quantity as well as their sites of glycosylation. This method has been applied to a series of studies in areas including cancer, stem cells, and drug toxicity. The limitation of the method lies in the low abundance of surface membrane proteins, such that a relatively large quantity of samples is required for this analysis compared to studies centered on cytosolic proteins.
Molecular Biology, Issue 87, membrane protein, N-linked glycoprotein, post-translational modification, mass spectrometry, HPLC, hydrazide chemistry, N-glycoproteomics, glycopeptide capture
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Authors: Francesco Vallania, Enrique Ramos, Sharon Cresci, Robi D. Mitra, Todd E. Druley.
Institutions: Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remains prohibitive for many investigators. To address this need, we have developed a pooled sequencing approach1,9 and a novel software package1 for highly accurate rare variant detection from the resulting data. The ability to pool genomes from entire populations of affected individuals and survey the degree of genetic variation at multiple targeted regions in a single sequencing library provides excellent cost and time savings to traditional single-sample sequencing methodology. With a mean sequencing coverage per allele of 25-fold, our custom algorithm, SPLINTER, uses an internal variant calling control strategy to call insertions, deletions and substitutions up to four base pairs in length with high sensitivity and specificity from pools of up to 1 mutant allele in 500 individuals. Here we describe the method for preparing the pooled sequencing library followed by step-by-step instructions on how to use the SPLINTER package for pooled sequencing analysis (http://www.ibridgenetwork.org/wustl/splinter). We show a comparison between pooled sequencing of 947 individuals, all of whom also underwent genome-wide array, at over 20 kb of sequencing per person. Concordance between genotyping of tagged and novel variants called in the pooled sample was excellent. This method can be easily scaled up to any number of genomic loci and any number of individuals. By incorporating the internal positive and negative amplicon controls at ratios that mimic the population under study, the algorithm can be calibrated for optimal performance. This strategy can also be modified for use with hybridization capture or individual-specific barcodes and can be applied to the sequencing of naturally heterogeneous samples, such as tumor DNA.
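A quick back-of-the-envelope check of the detection limits quoted above (one mutant allele in 500 diploid individuals at 25-fold coverage per allele) can be written in a few lines of Python; the Poisson estimate is an illustrative assumption, not part of the SPLINTER algorithm:

from math import exp

individuals = 500
alleles = 2 * individuals                 # diploid pool
minor_allele_freq = 1 / alleles           # a single mutant allele in the pool
coverage_per_allele = 25
total_reads_per_site = coverage_per_allele * alleles

expected_mutant_reads = minor_allele_freq * total_reads_per_site
p_zero_mutant_reads = exp(-expected_mutant_reads)  # Poisson approximation
print(f"minor allele frequency: {minor_allele_freq:.4f}")
print(f"expected mutant reads per site: {expected_mutant_reads:.0f}")
print(f"chance of seeing no mutant read at that site: {p_zero_mutant_reads:.1e}")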
Genetics, Issue 64, Genomics, Cancer Biology, Bioinformatics, Pooled DNA sequencing, SPLINTER, rare genetic variants, genetic screening, phenotype, high throughput, computational analysis, DNA, PCR, primers
Metabolic Labeling and Membrane Fractionation for Comparative Proteomic Analysis of Arabidopsis thaliana Suspension Cell Cultures
Authors: Witold G. Szymanski, Sylwia Kierszniowska, Waltraud X. Schulze.
Institutions: Max Planck Institute of Molecular Plant Physiology, University of Hohenheim.
Plasma membrane microdomains are features based on the physical properties of the lipid and sterol environment and have particular roles in signaling processes. Extracting sterol-enriched membrane microdomains from plant cells for proteomic analysis is a difficult task mainly due to multiple preparation steps and sources of contamination from other cellular compartments. The plasma membrane constitutes only about 5-20% of all the membranes in a plant cell, and therefore isolation of a highly purified plasma membrane fraction is challenging. A frequently used method involves aqueous two-phase partitioning in polyethylene glycol and dextran, which yields plasma membrane vesicles with a purity of 95% 1. Sterol-rich membrane microdomains within the plasma membrane are insoluble upon treatment with cold nonionic detergents at alkaline pH. This detergent-resistant membrane fraction can be separated from the bulk plasma membrane by ultracentrifugation in a sucrose gradient 2. Subsequently, proteins can be extracted from the low density band of the sucrose gradient by methanol/chloroform precipitation. Extracted protein will then be trypsin digested, desalted and finally analyzed by LC-MS/MS. Our extraction protocol for sterol-rich microdomains is optimized for the preparation of clean detergent-resistant membrane fractions from Arabidopsis thaliana cell cultures. We use full metabolic labeling of Arabidopsis thaliana suspension cell cultures with K15NO3 as the only nitrogen source for quantitative comparative proteomic studies following a biological treatment of interest 3. By mixing equal ratios of labeled and unlabeled cell cultures for joint protein extraction, the influence of preparation steps on the final quantitative result is kept to a minimum. Also, loss of material during extraction will affect both control and treatment samples in the same way, and therefore the ratio of light to heavy peptides will remain constant. In the proposed method either the labeled or the unlabeled cell culture undergoes a biological treatment, while the other serves as the control 4.
Empty Value, Issue 79, Cellular Structures, Plants, Genetically Modified, Arabidopsis, Membrane Lipids, Intracellular Signaling Peptides and Proteins, Membrane Proteins, Isotope Labeling, Proteomics, plants, Arabidopsis thaliana, metabolic labeling, stable isotope labeling, suspension cell cultures, plasma membrane fractionation, two phase system, detergent resistant membranes (DRM), mass spectrometry, membrane microdomains, quantitative proteomics
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Authors: Monica Soldi, Tiziana Bonaldi.
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions is regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has been so far hampered by the lack of efficient protocols to enrich such domains at the appropriate purity and amount for the subsequent in-depth analysis by Mass Spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants and co-associated non-histonic proteins, are analyzed by MS. We illustrate here the setting up of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it as a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation, histone variants, chromatome, hPTMs cross-talks
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry
Authors: Kerstin Trompelt, Janina Steinbeck, Mia Terashima, Michael Hippler.
Institutions: University of Münster, Carnegie Institution for Science.
The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane, by revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass-spectrometry based on equal volume. This approach allows the investigation of the composition within the gradient fractions and moreover to analyze the migration behavior of different proteins, especially focusing on ANR1, CAS, and PGRL1. Furthermore, this method is demonstrated by confirming the results with immunoblotting and additionally by supporting the findings from previous studies (the identification and PSI-dependent migration of proteins that were previously described to be part of the CEF-supercomplex such as PGRL1, FNR, and cyt f). Notably, this approach is applicable to address a broad range of questions for which this protocol can be adopted and e.g. used for comparative analyses of multiprotein complex composition isolated from distinct environmental conditions.
Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Authors: Nikolai Hentze, Matthias P. Mayer.
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or more commonly into an ensemble of interconverting conformations. Investigating the connection between protein conformation and its function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One possibility to study conformational changes a protein undergoes while progressing through its functional cycle is hydrogen-1H/2H-exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by e.g. crystallography. It is used to study protein folding and unfolding, binding of small molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe as an example how to reveal the interaction interface of two proteins in a complex.   
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Quantitative Analysis of Chromatin Proteomes in Disease
Authors: Emma Monte, Haodong Chen, Maria Kolmakova, Michelle Parvatiyar, Thomas M. Vondriska, Sarah Franklin.
Institutions: David Geffen School of Medicine at UCLA, Nora Eccles Harrison Cardiovascular Research and Training Institute, University of Utah.
In the nucleus reside the proteomes whose functions are most intimately linked with gene regulation. Adult mammalian cardiomyocyte nuclei are unique due to the high percentage of binucleated cells,1 the predominantly heterochromatic state of the DNA, and the non-dividing nature of the cardiomyocyte which renders adult nuclei in a permanent state of interphase.2 Transcriptional regulation during development and disease have been well studied in this organ,3-5 but what remains relatively unexplored is the role played by the nuclear proteins responsible for DNA packaging and expression, and how these proteins control changes in transcriptional programs that occur during disease.6 In the developed world, heart disease is the number one cause of mortality for both men and women.7 Insight on how nuclear proteins cooperate to regulate the progression of this disease is critical for advancing the current treatment options. Mass spectrometry is the ideal tool for addressing these questions as it allows for an unbiased annotation of the nuclear proteome and relative quantification for how the abundance of these proteins changes with disease. While there have been several proteomic studies for mammalian nuclear protein complexes,8-13 until recently14 there has been only one study examining the cardiac nuclear proteome, and it considered the entire nucleus, rather than exploring the proteome at the level of nuclear sub compartments.15 In large part, this shortage of work is due to the difficulty of isolating cardiac nuclei. Cardiac nuclei occur within a rigid and dense actin-myosin apparatus to which they are connected via multiple extensions from the endoplasmic reticulum, to the extent that myocyte contraction alters their overall shape.16 Additionally, cardiomyocytes are 40% mitochondria by volume17 which necessitates enrichment of the nucleus apart from the other organelles. Here we describe a protocol for cardiac nuclear enrichment and further fractionation into biologically-relevant compartments. Furthermore, we detail methods for label-free quantitative mass spectrometric dissection of these fractions-techniques amenable to in vivo experimentation in various animal models and organ systems where metabolic labeling is not feasible.
Medicine, Issue 70, Molecular Biology, Immunology, Genetics, Genomics, Physiology, Protein, DNA, Chromatin, cardiovascular disease, proteomics, mass spectrometry
High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry
Authors: Subarna Bhattacharya, Paul W. Burridge, Erin M. Kropp, Sandra L. Chuppa, Wai-Meng Kwok, Joseph C. Wu, Kenneth R. Boheler, Rebekah L. Gundry.
Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Hong Kong University, Johns Hopkins University School of Medicine.
There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle “in a dish” for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible. In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4 and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v) and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level. For protein analysis, flow cytometry is a powerful analytical tool for assessing quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described.
Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MCL2v, MLC2a
Multi-step Preparation Technique to Recover Multiple Metabolite Compound Classes for In-depth and Informative Metabolomic Analysis
Authors: Charmion Cruickshank-Quinn, Kevin D. Quinn, Roger Powell, Yanhui Yang, Michael Armstrong, Spencer Mahaffey, Richard Reisdorph, Nichole Reisdorph.
Institutions: National Jewish Health, University of Colorado Denver.
Metabolomics is an emerging field which enables profiling of samples from living organisms in order to obtain insight into biological processes. A vital aspect of metabolomics is sample preparation whereby inconsistent techniques generate unreliable results. This technique encompasses protein precipitation, liquid-liquid extraction, and solid-phase extraction as a means of fractionating metabolites into four distinct classes. Improved enrichment of low abundance molecules with a resulting increase in sensitivity is obtained, and ultimately results in more confident identification of molecules. This technique has been applied to plasma, bronchoalveolar lavage fluid, and cerebrospinal fluid samples with volumes as low as 50 µl.  Samples can be used for multiple downstream applications; for example, the pellet resulting from protein precipitation can be stored for later analysis. The supernatant from that step undergoes liquid-liquid extraction using water and strong organic solvent to separate the hydrophilic and hydrophobic compounds. Once fractionated, the hydrophilic layer can be processed for later analysis or discarded if not needed. The hydrophobic fraction is further treated with a series of solvents during three solid-phase extraction steps to separate it into fatty acids, neutral lipids, and phospholipids. This allows the technician the flexibility to choose which class of compounds is preferred for analysis. It also aids in more reliable metabolite identification since some knowledge of chemical class exists.
Bioengineering, Issue 89, plasma, chemistry techniques, analytical, solid phase extraction, mass spectrometry, metabolomics, fluids and secretions, profiling, small molecules, lipids, liquid chromatography, liquid-liquid extraction, cerebrospinal fluid, bronchoalveolar lavage fluid
Identification of Protein Interaction Partners in Mammalian Cells Using SILAC-immunoprecipitation Quantitative Proteomics
Authors: Edward Emmott, Ian Goodfellow.
Institutions: University of Cambridge.
Quantitative proteomics combined with immuno-affinity purification (SILAC immunoprecipitation) represents a powerful means for the discovery of novel protein:protein interactions. By allowing accurate relative quantification of protein abundance in both control and test samples, true interactions may be easily distinguished from experimental contaminants. Low-affinity interactions can be preserved through the use of less-stringent buffer conditions and remain readily identifiable. This protocol discusses the labeling of tissue culture cells with stable isotope labeled amino acids, transfection and immunoprecipitation of an affinity-tagged protein of interest, followed by preparation for submission to a mass spectrometry facility. It then discusses how to analyze and interpret the data returned from the mass spectrometer in order to identify cellular partners interacting with a protein of interest. As an example, this technique is applied to identify proteins binding to the eukaryotic translation initiation factors eIF4AI and eIF4AII.
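As a sketch of how the SILAC ratios are typically interpreted (invented ratios and an arbitrary enrichment cutoff, chosen here purely for illustration), in Python:

# Invented heavy/light ratios from a SILAC immunoprecipitation: specific partners
# are enriched with the bait, while contaminants come down equally in control and test.
hl_ratios = {
    "bait_protein": 9.8,
    "partner_1": 6.2,
    "partner_2": 4.1,
    "keratin_contaminant": 1.0,
    "ribosomal_P0": 1.1,
}

enrichment_cutoff = 2.0  # assumed threshold; in practice chosen per experiment/statistics
interactors = {name: r for name, r in hl_ratios.items() if r >= enrichment_cutoff}
print("candidate interaction partners:", sorted(interactors))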
Biochemistry, Issue 89, mass spectrometry, tissue culture techniques, isotope labeling, SILAC, Stable Isotope Labeling of Amino Acids in Cell Culture, proteomics, Interactomics, immunoprecipitation, pulldown, eIF4A, GFP, nanotrap, orbitrap
Quantification of Proteins Using Peptide Immunoaffinity Enrichment Coupled with Mass Spectrometry
Authors: Lei Zhao, Jeffrey R. Whiteaker, Matthew E. Pope, Eric Kuhn, Angela Jackson, N. Leigh Anderson, Terry W. Pearson, Steven A. Carr, Amanda G. Paulovich.
Institutions: Fred Hutchinson Cancer Research Center - FHCRC, University of Victoria, Broad Institute of MIT and Harvard, University of Victoria, Plasma Proteome Institute.
There is a great need for quantitative assays in measuring proteins. Traditional sandwich immunoassays, largely considered the gold standard in quantitation, are associated with a high cost, long lead time, and are fraught with drawbacks (e.g. heterophilic antibodies, autoantibody interference, 'hook-effect').1 An alternative technique is affinity enrichment of peptides coupled with quantitative mass spectrometry, commonly referred to as SISCAPA (Stable Isotope Standards and Capture by Anti-Peptide Antibodies).2 In this technique, affinity enrichment of peptides with stable isotope dilution and detection by selected/multiple reaction monitoring mass spectrometry (SRM/MRM-MS) provides quantitative measurement of peptides as surrogates for their respective proteins. SRM/MRM-MS is well established for accurate quantitation of small molecules 3, 4 and more recently has been adapted to measure the concentrations of proteins in plasma and cell lysates.5-7 To achieve quantitation of proteins, these larger molecules are digested to component peptides using an enzyme such as trypsin. One or more selected peptides whose sequence is unique to the target protein in that species (i.e. "proteotypic" peptides) are then enriched from the sample using anti-peptide antibodies and measured as quantitative stoichiometric surrogates for protein concentration in the sample. Hence, coupled to stable isotope dilution (SID) methods (i.e. a spiked-in stable isotope labeled peptide standard), SRM/MRM can be used to measure concentrations of proteotypic peptides as surrogates for quantification of proteins in complex biological matrices. The assays have several advantages compared to traditional immunoassays. The reagents are relatively less expensive to generate, the specificity for the analyte is excellent, the assays can be highly multiplexed, enrichment can be performed from neat plasma (no depletion required), and the technique is amenable to a wide array of proteins or modifications of interest.8-13 In this video we demonstrate the basic protocol as adapted to a magnetic bead platform.
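The stable isotope dilution arithmetic behind the assay can be sketched in a few lines of Python; the peak areas, spike amount, and sample volume below are hypothetical:

light_area = 3.4e5        # SRM peak area of the endogenous (light) peptide
heavy_area = 1.7e5        # SRM peak area of the spiked stable-isotope (heavy) standard
spiked_heavy_fmol = 50.0  # known amount of heavy standard added to the digest
sample_volume_ul = 10.0

# Endogenous peptide amount follows from the light/heavy area ratio and the spike.
peptide_fmol = (light_area / heavy_area) * spiked_heavy_fmol
concentration = peptide_fmol / sample_volume_ul
print(f"endogenous peptide: {peptide_fmol:.0f} fmol ({concentration:.1f} fmol/uL), "
      "used as a stoichiometric surrogate for the protein")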
Molecular Biology, Issue 53, Mass spectrometry, targeted assay, peptide, MRM, SISCAPA, protein quantitation
Linearization of the Bradford Protein Assay
Authors: Orna Ernst, Tsaffrir Zor.
Institutions: Tel Aviv University.
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
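A minimal Python sketch of the linearized readout (fit the A590/A450 ratio against BSA standards, then back-calculate an unknown; all absorbance values are invented):

import numpy as np

# Invented standard curve: BSA amounts (ug) with A590 and A450 readings per well.
bsa_ug = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
a590 = np.array([0.32, 0.37, 0.42, 0.50, 0.64, 0.80])
a450 = np.array([0.80, 0.78, 0.76, 0.72, 0.64, 0.50])

# The A590/A450 ratio is (approximately) linear in protein amount.
ratio = a590 / a450
slope, intercept = np.polyfit(bsa_ug, ratio, 1)

# Back-calculate an unknown sample from its two absorbance readings.
unknown_ratio = 0.70 / 0.72
unknown_ug = (unknown_ratio - intercept) / slope
print(f"standard curve: ratio = {slope:.3f} * ug + {intercept:.3f}")
print(f"unknown sample ~ {unknown_ug:.2f} ug protein")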
Cellular Biology, Issue 38, Bradford, protein assay, protein quantification, Coomassie brilliant blue
MALDI Sample Preparation: the Ultra Thin Layer Method
Authors: David Fenyo, Qingjun Wang, Jeffrey A. DeGrasse, Julio C. Padovan, Martine Cadene, Brian T. Chait.
Institutions: Rockefeller University.
This video demonstrates the preparation of an ultra-thin matrix/analyte layer for analyzing peptides and proteins by Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI-MS) 1,2. The ultra-thin layer method involves the production of a substrate layer of matrix crystals (alpha-cyano-4-hydroxycinnamic acid) on the sample plate, which serves as a seeding ground for subsequent crystallization of a matrix/analyte mixture. Advantages of the ultra-thin layer method over other sample deposition approaches (e.g. dried droplet) are that it provides (i) greater tolerance to impurities such as salts and detergents, (ii) better resolution, and (iii) higher spatial uniformity. This method is especially useful for the accurate mass determination of proteins. The protocol was initially developed and optimized for the analysis of membrane proteins and used to successfully analyze ion channels, metabolite transporters, and receptors, containing between 2 and 12 transmembrane domains 2. Since the original publication, it has also shown to be equally useful for the analysis of soluble proteins. Indeed, we have used it for a large number of proteins having a wide range of properties, including those with molecular masses as high as 380 kDa 3. It is currently our method of choice for the molecular mass analysis of all proteins. The described procedure consistently produces high-quality spectra, and it is sensitive, robust, and easy to implement.
Cellular Biology, Issue 3, mass-spectrometry, ultra-thin layer, MALDI, MS, proteins
Electrolytic Inferior Vena Cava Model (EIM) of Venous Thrombosis
Authors: Jose A. Diaz, Shirley K. Wrobleski, Angela E. Hawley, Benedict R. Lucchesi, Thomas W. Wakefield, Daniel D. Myers, Jr..
Institutions: University of Michigan.
Animal models serve a vital role in deep venous thrombosis (DVT) research for studying thrombus formation and resolution and for testing potential therapeutic compounds (1). New compounds to be utilized in the treatment and prevention of DVT are currently being developed. The delivery of potential therapeutic antagonist compounds to an affected thrombosed vein has been problematic. In the context of therapeutic applications, a model that uses partial stasis and consistently generates thrombi within a major vein has recently been established. The Electrolytic Inferior vena cava Model (EIM) is a mouse model of DVT that permits thrombus formation in the presence of continuous blood flow. This model allows therapeutic agents to be in contact with the thrombus in a dynamic fashion, and is more sensitive than other models of DVT (1). In addition, this thrombosis model closely simulates clinical situations of thrombus formation and is ideal for studying venous endothelial cell activation, leukocyte migration, and venous thrombogenesis, and for testing therapeutic applications (1). The EIM model is technically simple, easily reproducible, creates consistent thrombus sizes, and allows for a large sample (i.e. thrombus and vein wall), which is required for analytical purposes.
Medicine, Issue 53, Endothelial dysfunction, Thrombosis, Electrolytic injury, Inflammation, Animal model
Quantifying Agonist Activity at G Protein-coupled Receptors
Authors: Frederick J. Ehlert, Hinako Suga, Michael T. Griffin.
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors. Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2). Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist 1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M^-1. Our method of analyzing agonist concentration-response curves 3,4 consists of global nonlinear regression using the operational model 5. We describe a procedure using the software application Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys 3. Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
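As a sketch of the parameter arithmetic described above (the fitted log values are invented, and the layout of the calculation is an illustrative assumption rather than the Prism workflow itself), in Python:

import math

# Hypothetical fitted estimates from the operational-model analysis (log10 units).
log_tau_kobs = {"agonist_A": 7.2, "agonist_B": 6.4}  # log(tau * Kobs) per agonist
log_tau_sys = 1.5  # efficacy parameter of the free, constitutively active receptor

# RAi: tau*Kobs of one agonist divided by that of another (relative measure of Kb).
rai = 10 ** (log_tau_kobs["agonist_A"] - log_tau_kobs["agonist_B"])

# With constitutive activity, Kb = tau*Kobs / tau_sys, in absolute units of M^-1.
kb_A = 10 ** (log_tau_kobs["agonist_A"] - log_tau_sys)
print(f"RAi (A relative to B): {rai:.1f}")
print(f"Kb of agonist A: {kb_A:.2e} M^-1 (log Kb = {math.log10(kb_A):.2f})")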
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.