JoVE Visualize
PubMed Article
Exploiting a reduced set of weighted average features to improve prediction of DNA-binding residues from 3D structures.
Published: 09-22-2011
Predicting DNA-binding residues from a protein three-dimensional structure is a key task of computational structural proteomics. In the present study, based on machine learning technology, we aim to explore a reduced set of weighted average features for improving the prediction of DNA-binding residues on protein surfaces. By constructing the spatial environment around a DNA-binding residue, a novel weighting factor is first proposed to quantify the distance-dependent contribution of each neighboring residue in determining the location of a binding residue. A weighted average scheme is then introduced to represent the surface patch of the residue under consideration. Finally, the classifier is trained on the reduced set of these weighted average features, consisting of the evolutionary profile, interface propensity, betweenness centrality, and solvent-accessible surface area of the side chain. Experimental results from 5-fold cross-validation and independent tests indicate that the new feature set is effective in describing DNA-binding residues and that our approach performs significantly better than two previous methods. Furthermore, a brief case study suggests that the weighted average features are powerful for identifying DNA-binding residues and are promising for further study of the protein structure-function relationship. The source code and datasets are available upon request.
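The abstract does not give the exact form of the weighting factor, but the idea of a distance-dependent weighted average over a surface patch can be sketched as follows; the Gaussian weight, the 8 Å length scale, and the feature layout are illustrative assumptions, not the authors' implementation.

import numpy as np

def weighted_average_feature(distances, neighbor_features, sigma=8.0):
    # Distance-dependent weighting: closer neighbors contribute more to the
    # description of the central surface residue. The Gaussian form and sigma
    # are assumptions; the paper defines its own weighting factor.
    w = np.exp(-np.asarray(distances) ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    # neighbor_features: one row per neighbor, e.g. evolutionary profile,
    # interface propensity, betweenness centrality, side-chain surface area.
    return w @ neighbor_features

# Example: 5 neighboring residues, 4 features each.
patch_feature = weighted_average_feature([3.0, 5.2, 7.1, 9.8, 12.4],
                                         np.random.rand(5, 4))

The resulting patch-level vectors are what the classifier would then be trained on.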
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Published: 07-25-2013
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
20 Related JoVE Articles!
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, whose structures and functions must be determined to improve our understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes a secondary structure prediction, the predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All predictions are tagged with a confidence score indicating how accurate they are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps that interactively steer the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. This structural information can be supplied by users on the basis of experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Streamlined Purification of Plasmid DNA From Prokaryotic Cultures
Authors: Laura Pueschel, Hongshan Li, Matthew Hymes.
Institutions: Pall Life Sciences.
We describe the complete process of using AcroPrep Advance filter plates for 96 parallel plasmid preparations, starting from prokaryotic culture and ending with high-purity DNA. Based on multi-well filtration for bacterial lysate clearance and DNA purification, this method creates a streamlined process for plasmid preparation. Filter plates containing silica-based media can easily be processed by vacuum filtration or centrifugation to yield appreciable quantities of plasmid DNA. Quantitative analyses show that the purified plasmid DNA is consistently of high quality, with average OD260/280 ratios of 1.97. Overall, the plasmid yields provide highly pure DNA for downstream applications such as sequencing and cloning. This streamlined method using AcroPrep Advance filter plates allows for manual, semi-automated, or fully automated processing.
Molecular Biology, Issue 47, Plasmid purification, High-throughput, miniprep, filter plates
Using an EEG-Based Brain-Computer Interface for Virtual Cursor Movement with BCI2000
Authors: J. Adam Wilson, Gerwin Schalk, Léo M. Walton, Justin C. Williams.
Institutions: University of Wisconsin-Madison, New York State Dept. of Health.
A brain-computer interface (BCI) functions by translating a neural signal, such as the electroencephalogram (EEG), into a signal that can be used to control a computer or other device. The amplitude of the EEG signal in selected frequency bins is measured and translated into a device command, in this case the horizontal and vertical velocity of a computer cursor. First, the EEG electrodes are applied to the user's scalp using a cap to record brain activity. Next, a calibration procedure is used to find the EEG electrodes and features that the user will learn to voluntarily modulate to use the BCI. In humans, the power in the mu (8-12 Hz) and beta (18-28 Hz) frequency bands decreases in amplitude during a real or imagined movement. These changes can be detected in the EEG in real time and used to control a BCI ([1],[2]). Therefore, during a screening test, the user is asked to make several different imagined movements with their hands and feet to determine the unique EEG features that change with the imagined movements. The results from this calibration show the best channels to use, which are configured so that amplitude changes in the mu and beta frequency bands move the cursor either horizontally or vertically. In this experiment, the general-purpose BCI system BCI2000 is used to control signal acquisition, signal processing, and feedback to the user [3].
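As a rough illustration of the translation step (BCI2000 implements this internally with adaptive, calibrated parameters), mu- and beta-band power can be estimated per time window and mapped to cursor velocities; the gain and offset below are hypothetical placeholders that the calibration phase would normally determine.

import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    # Welch power spectral density, integrated over the [lo, hi] Hz band.
    f, psd = welch(eeg, fs=fs, nperseg=fs)
    band = (f >= lo) & (f <= hi)
    return np.trapz(psd[band], f[band])

def cursor_velocity(eeg, fs, gain=-1.0, offset=1.0):
    # Mu (8-12 Hz) power mapped to horizontal, beta (18-28 Hz) to vertical
    # velocity. The negative gain reflects that band power *decreases* during
    # real or imagined movement; both values are hypothetical.
    vx = gain * (band_power(eeg, fs, 8, 12) - offset)
    vy = gain * (band_power(eeg, fs, 18, 28) - offset)
    return vx, vy

fs = 256                            # Hz, a typical EEG sampling rate
window = np.random.randn(2 * fs)    # 2 s of one channel (placeholder data)
print(cursor_velocity(window, fs))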
Neuroscience, Issue 29, BCI, EEG, brain-computer interface, BCI2000
Transmembrane Domain Oligomerization Propensity determined by ToxR Assay
Authors: Catherine Joce, Alyssa Wiener, Hang Yin.
Institutions: University of Colorado at Boulder.
The oversimplified view of protein transmembrane domains as merely anchors in phospholipid bilayers has long since been disproven. In many cases membrane-spanning proteins have evolved highly sophisticated mechanisms of action.1-3 One way in which membrane proteins can modulate their structures and functions is by direct and specific contact of hydrophobic helices, forming structured transmembrane oligomers.4,5 Much recent work has focused on the distribution of amino acids preferentially found in the membrane environment in comparison to aqueous solution, and on the different intermolecular forces that drive protein association.6,7 Nevertheless, the study of molecular recognition at the transmembrane domains of proteins still lags behind that of water-soluble regions. A major hurdle remains: despite the remarkable specificity and affinity that transmembrane oligomerization can achieve,8 direct measurement of this association is challenging. Traditional methodologies applied to the study of integral membrane protein function can be hampered by the inherent insolubility of the sequences under examination. Biophysical insights gained from studying synthetic peptides representing transmembrane domains can provide useful structural information. However, the biological relevance of the detergent micelle or liposome systems used in these studies to mimic cellular membranes is often questioned: do peptides adopt a native-like structure under these conditions, and does their functional behaviour truly reflect the mode of action within a native membrane? In order to study the interactions of transmembrane sequences in natural phospholipid bilayers, the Langosch lab developed ToxR transcriptional reporter assays.9 The transmembrane domain of interest is expressed as a chimeric protein with maltose binding protein, which directs the chimera to the periplasm, and ToxR, which reports the level of oligomerization (Figure 1). In the last decade, several other groups (e.g. Engelman, DeGrado, Shai) further optimized and applied this ToxR reporter assay.10-13 The various ToxR assays have become a gold standard for testing protein-protein interactions in cell membranes. We herein demonstrate a typical experimental procedure conducted in our laboratory that primarily follows protocols developed by Langosch. This generally applicable method is useful for the analysis of transmembrane domain self-association in E. coli, where β-galactosidase production is used to assess the TMD oligomerization propensity. Upon TMD-induced dimerization, ToxR binds to the ctx promoter, causing up-regulation of the lacZ gene for β-galactosidase. A colorimetric readout is obtained by addition of ONPG to lysed cells. Hydrolytic cleavage of ONPG by β-galactosidase results in the production of the light-absorbing species o-nitrophenolate (ONP) (Figure 2).
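The abstract stops at the colorimetric readout; quantification of such ONPG assays is conventionally expressed in Miller units. A minimal sketch of that standard calculation (the normalization constants are the textbook Miller values, not taken from this article, and the example readings are invented):

def miller_units(od420, od550, od600, t_min, v_ml):
    # Classic Miller-unit formula for beta-galactosidase/ONPG assays:
    # 1000 * (OD420 - 1.75*OD550) / (time[min] * volume[ml] * OD600).
    # OD420 measures ONP plus cell debris; 1.75*OD550 corrects for debris;
    # OD600 normalizes for culture density.
    return 1000.0 * (od420 - 1.75 * od550) / (t_min * v_ml * od600)

# Stronger TMD self-association -> more ToxR dimerization -> higher activity.
print(miller_units(od420=0.90, od550=0.02, od600=0.5, t_min=20, v_ml=0.1))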
Cellular Biology, Issue 51, Transmembrane domain, oligomerization, transcriptional reporter, ToxR, latent membrane protein-1
Examining the Conformational Dynamics of Membrane Proteins in situ with Site-directed Fluorescence Labeling
Authors: Ryan Richards, Robert E. Dempski.
Institutions: Worcester Polytechnic Institute.
Two-electrode voltage clamp (TEVC) electrophysiology is a powerful tool to investigate the mechanism of ion transport1 for a wide variety of membrane proteins including ion channels2, ion pumps3, and transporters4. Recent developments have combined site-specific fluorophore labeling with TEVC to concurrently examine the conformational dynamics at specific residues and the function of these proteins on the surface of single cells. We describe a method to study the conformational dynamics of membrane proteins by simultaneously monitoring fluorescence and current changes using voltage-clamp fluorometry. This approach can be used to examine the molecular motion of membrane proteins site-specifically following cysteine replacement and site-directed fluorophore labeling5,6. Furthermore, this method provides an approach to determine distance constraints between specific residues7,8. This is achieved by selectively attaching donor and acceptor fluorophores to two mutated cysteine residues of interest. In brief, these experiments are performed following functional expression of the desired protein on the surface of Xenopus laevis oocytes. The large surface area of these oocytes enables facile functional measurements and a robust fluorescence signal5. It is also possible to readily change the extracellular conditions, such as pH, ligands, or cations/anions, which can provide further information on the mechanism of membrane proteins4. Finally, recent developments have also enabled the manipulation of select internal ions following co-expression with a second protein9. Our protocol is described in multiple parts. First, cysteine scanning mutagenesis followed by fluorophore labeling is completed at residues located at the interface of the transmembrane and extracellular domains. Subsequent experiments are designed to identify residues that demonstrate large changes in fluorescence intensity (>5%)3 upon a conformational change of the protein. Second, these changes in fluorescence intensity are compared to the kinetic parameters of the membrane protein in order to correlate the conformational dynamics with the function of the protein10. This enables a rigorous biophysical analysis of the molecular motion of the target protein. Lastly, two residues of the holoenzyme can be labeled with a donor and acceptor fluorophore in order to determine distance constraints using donor photodestruction methods. It is also possible to monitor the relative movement of protein subunits following labeling with a donor and acceptor fluorophore.
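The distance constraints mentioned above rest on Förster theory: transfer efficiency falls off with the sixth power of the donor-acceptor separation. A small sketch, with efficiency taken from donor dequenching after acceptor photodestruction; the Förster radius R0 below is a placeholder, since it depends on the specific dye pair.

def fret_efficiency(donor_with_acceptor, donor_alone):
    # Efficiency from donor quenching, e.g. donor emission before vs. after
    # acceptor photodestruction.
    return 1.0 - donor_with_acceptor / donor_alone

def fret_distance(efficiency, r0=55.0):
    # Invert E = R0^6 / (R0^6 + r^6) for the donor-acceptor distance r
    # (angstroms). R0 = 55 A is a placeholder Foerster radius.
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

E = fret_efficiency(donor_with_acceptor=420.0, donor_alone=700.0)
print(E, fret_distance(E))   # E = 0.4 -> r of roughly 1.07 * R0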
Cellular Biology, Issue 51, membrane protein, two electrode voltage-clamp, biophysics, site-specific fluorophore labeling, microscopy, conformational dynamics
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
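To make the design idea concrete, the sketch below enumerates a two-level full factorial design over three illustrative factors; the factor names echo the abstract, but the levels are invented, and the authors actually used dedicated DoE software with fractional and augmented designs rather than exhaustive enumeration.

import itertools

factors = {
    "expression_construct": ["construct A", "construct B"],  # e.g. different 5'UTRs
    "incubation_temp_C":    [22, 28],                        # hypothetical levels
    "plant_age_days":       [35, 49],                        # hypothetical levels
}

# Full factorial: every combination of factor levels, 2^3 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")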
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
DNA-affinity-purified Chip (DAP-chip) Method to Determine Gene Targets for Bacterial Two-component Regulatory Systems
Authors: Lara Rajeev, Eric G. Luning, Aindrila Mukhopadhyay.
Institutions: Lawrence Berkeley National Laboratory.
In vivo methods such as ChIP-chip are well-established techniques used to determine the global gene targets of transcription factors. However, they are of limited use in exploring bacterial two-component regulatory systems with uncharacterized activation conditions. Such systems regulate transcription only when activated in the presence of unique signals. Since these signals are often unknown, the in vitro microarray-based method described in this video article can be used to determine gene targets and binding sites for response regulators. This DNA-affinity-purified-chip method may be used for any purified regulator in any organism with a sequenced genome. The protocol involves allowing the purified tagged protein to bind to sheared genomic DNA and then affinity-purifying the protein-bound DNA, followed by fluorescent labeling of the DNA and hybridization to a custom tiling array. Preceding steps that may be used to optimize the assay for specific regulators are also described. The peaks generated by the array data analysis are used to predict binding site motifs, which are then experimentally validated. The motif predictions can further be used to determine gene targets of orthologous response regulators in closely related species. We demonstrate the applicability of this method by determining the gene targets and binding site motifs, and thus predicting the function, of the sigma54-dependent response regulator DVU3023 in the environmental bacterium Desulfovibrio vulgaris Hildenborough.
Genetics, Issue 89, DNA-Affinity-Purified-chip, response regulator, transcription factor binding site, two component system, signal transduction, Desulfovibrio, lactate utilization regulator, ChIP-chip
RNA Secondary Structure Prediction Using High-throughput SHAPE
Authors: Sabrina Lusvarghi, Joanna Sztuba-Solinska, Katarzyna J. Purzycka, Jason W. Rausch, Stuart F.J. Le Grice.
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single-nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single-stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single-stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables, which are in turn converted into pseudo-energy constraints for the RNAstructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
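The conversion from SHAPE reactivities to pseudo-energy constraints is commonly done with a Deigan-style logarithmic term applied to paired nucleotides; the slope and intercept below are the widely used defaults, though they can be tuned.

import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    # Pseudo-free-energy change added when a nucleotide is base-paired:
    # dG_SHAPE = m * ln(reactivity + 1) + b, in kcal/mol. High reactivity
    # (flexible, likely single-stranded) penalizes pairing; low reactivity
    # gives a small bonus toward pairing.
    return m * math.log(reactivity + 1.0) + b

for r in (0.0, 0.2, 1.0, 3.0):
    print(f"reactivity {r:.1f} -> {shape_pseudo_energy(r):+.2f} kcal/mol")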
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
Visualization of ATP Synthase Dimers in Mitochondria by Electron Cryo-tomography
Authors: Karen M. Davies, Bertram Daum, Vicki A. M. Gold, Alexander W. Mühleip, Tobias Brandt, Thorsten B. Blum, Deryck J. Mills, Werner Kühlbrandt.
Institutions: Max Planck Institute of Biophysics.
Electron cryo-tomography is a powerful tool in structural biology, capable of visualizing the three-dimensional structure of biological samples, such as cells, organelles, membrane vesicles, or viruses at molecular detail. To achieve this, the aqueous sample is rapidly vitrified in liquid ethane, which preserves it in a close-to-native, frozen-hydrated state. In the electron microscope, tilt series are recorded at liquid nitrogen temperature, from which 3D tomograms are reconstructed. The signal-to-noise ratio of the tomographic volume is inherently low. Recognizable, recurring features are enhanced by subtomogram averaging, by which individual subvolumes are cut out, aligned and averaged to reduce noise. In this way, 3D maps with a resolution of 2 nm or better can be obtained. A fit of available high-resolution structures to the 3D volume then produces atomic models of protein complexes in their native environment. Here we show how we use electron cryo-tomography to study the in situ organization of large membrane protein complexes in mitochondria. We find that ATP synthases are organized in rows of dimers along highly curved apices of the inner membrane cristae, whereas complex I is randomly distributed in the membrane regions on either side of the rows. By subtomogram averaging we obtained a structure of the mitochondrial ATP synthase dimer within the cristae membrane.
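A deliberately stripped-down sketch of the subtomogram-averaging step: cut out subvolumes at picked particle positions and average them to raise the signal-to-noise ratio. Real workflows additionally align each subvolume in 3D and compensate for the missing wedge, which this illustration omits; the volume and coordinates are placeholders.

import numpy as np

def average_subtomograms(tomogram, centers, box=32):
    # Crop a box-sized cube around each picked particle and average.
    # Averaging N aligned copies reduces noise roughly as 1/sqrt(N).
    half = box // 2
    subvolumes = []
    for z, y, x in centers:
        sub = tomogram[z - half:z + half, y - half:y + half, x - half:x + half]
        if sub.shape == (box, box, box):   # skip particles cut off at edges
            subvolumes.append(sub)
    return np.mean(subvolumes, axis=0)

tomo = np.random.randn(128, 128, 128)      # placeholder reconstructed volume
avg = average_subtomograms(tomo, [(40, 40, 40), (80, 64, 64), (96, 30, 100)])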
Structural Biology, Issue 91, electron microscopy, electron cryo-tomography, mitochondria, ultrastructure, membrane structure, membrane protein complexes, ATP synthase, energy conversion, bioenergetics
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Authors: Nikolai Hentze, Matthias P. Mayer.
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One way to study the conformational changes a protein undergoes while progressing through its functional cycle is hydrogen (1H/2H) exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to the structural information obtained by, for example, crystallography. It is used to study protein folding and unfolding, binding of small-molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe, as an example, how to reveal the interaction interface of two proteins in a complex.
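At the level of a single peptide, exchange is often summarized by an exponential uptake curve; a minimal single-exponential model is sketched below (real HX-MS data usually require sums of exponentials, and the rate constants here are invented). Protection, e.g. by a binding partner covering an interface, shows up as a smaller observed rate.

import numpy as np

def deuterium_uptake(t_seconds, n_exchangeable, k_obs):
    # D(t) = N * (1 - exp(-k_obs * t)): deuterons incorporated after labeling
    # time t, for N exchangeable amide hydrogens with observed rate k_obs.
    return n_exchangeable * (1.0 - np.exp(-k_obs * np.asarray(t_seconds)))

t = [10, 30, 100, 300, 1000]                                # labeling times (s)
print(deuterium_uptake(t, n_exchangeable=8, k_obs=0.010))   # free protein
print(deuterium_uptake(t, n_exchangeable=8, k_obs=0.001))   # protected in complex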
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Structure and Coordination Determination of Peptide-metal Complexes Using 1D and 2D 1H NMR
Authors: Michal S. Shoshan, Edit Y. Tshuva, Deborah E. Shalev.
Institutions: The Hebrew University of Jerusalem.
Copper(I) binding by metallochaperone transport proteins prevents copper oxidation and the release of toxic ions that may participate in harmful redox reactions. The Cu(I) complex of a peptide model of a Cu(I)-binding metallochaperone protein, which includes the sequence MTCSGCSRPG (the MTCSGC metal-binding segment is conserved), was determined in solution under inert conditions by NMR spectroscopy. NMR is a widely accepted technique for the determination of solution structures of proteins and peptides. Because it is often difficult to obtain single crystals suitable for X-ray crystallography, the NMR technique is extremely valuable, especially as it provides information on the solution state rather than the solid state. Herein we describe all the steps required for a full three-dimensional structure determination by NMR. The protocol includes sample preparation in an NMR tube, 1D and 2D data collection and processing, peak assignment and integration, molecular mechanics calculations, and structure analysis. Importantly, the analysis was first conducted without any preset metal-ligand bonds, to ensure a reliable structure determination in an unbiased manner.
Chemistry, Issue 82, solution structure determination, NMR, peptide models, copper-binding proteins, copper complexes
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
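As a concrete instance of approach (3), a global threshold followed by connected-component labeling and small-object removal is often the first semi-automated pass; the threshold and size cutoff below are placeholders that would be tuned to each data set's contrast and noise.

import numpy as np
from scipy import ndimage

def threshold_segment(volume, threshold, min_voxels=50):
    # Binarize, label 3D connected components, discard tiny specks.
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    kept_ids = np.nonzero(sizes >= min_voxels)[0] + 1
    return np.isin(labels, kept_ids)

vol = np.random.rand(64, 64, 64)     # placeholder image volume
segmented = threshold_segment(vol, threshold=0.95)
print(segmented.sum(), "voxels kept")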
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
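Fractional anisotropy itself is a closed-form function of the diffusion tensor's eigenvalues; the sketch below computes it for a single voxel (the example eigenvalues are typical diffusivity magnitudes, chosen only for illustration).

import numpy as np

def fractional_anisotropy(eigenvalues):
    # FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from
    # 0 (isotropic diffusion) to 1 (diffusion along a single direction).
    lam = np.asarray(eigenvalues, dtype=float)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # fiber-like voxel, ~0.8
print(fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3]))  # isotropic voxel, 0.0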
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
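The orientation analysis relies on a bank of Gabor filters; a minimal real-valued Gabor kernel is sketched below (the size, wavelength, and bandwidth parameters are illustrative, not those used in the study).

import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    # Real part of a Gabor filter: a Gaussian-windowed cosine grating at
    # orientation theta. Convolving a mammogram with kernels at many theta
    # values yields the local tissue-orientation field.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# A 12-orientation filter bank spanning 0 to 180 degrees.
bank = [gabor_kernel(31, wavelength=8, theta=t, sigma=4)
        for t in np.linspace(0, np.pi, 12, endpoint=False)]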
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Authors: Monica Soldi, Tiziana Bonaldi.
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions are regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the purity and amount appropriate for subsequent in-depth analysis by Mass Spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants, and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setup of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it to be a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation, histone variants, chromatome, hPTMs cross-talks
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher.
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.
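The readout reduces to a ratio of the two emission channels, corrected with a donor-only control; the sketch below shows that arithmetic with invented intensity values.

def bret_ratio(acceptor_emission, donor_emission, donor_only_ratio=0.0):
    # BRET ratio = YFP-channel emission / luciferase-channel emission, minus
    # the same ratio from a donor-only control (corrects for luciferase
    # emission bleeding into the acceptor channel).
    return acceptor_emission / donor_emission - donor_only_ratio

control = bret_ratio(acceptor_emission=1200, donor_emission=10000)
corrected = bret_ratio(acceptor_emission=4500, donor_emission=10000,
                       donor_only_ratio=control)
print(corrected)   # a value clearly above zero suggests an interaction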
Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
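Source reconstruction with a distributed model typically reduces to a regularized linear inverse such as the minimum-norm estimate named in the keywords; a bare numpy sketch under the assumption of a precomputed leadfield matrix (the dimensions and regularization value are placeholders):

import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    # s_hat = L^T (L L^T + lambda^2 I)^{-1} y, the classic L2 minimum-norm
    # inverse. leadfield: (n_channels, n_sources), from the individual or
    # age-appropriate head model; data: (n_channels,) one time sample.
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam ** 2 * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, data)

L = np.random.randn(128, 5000)   # placeholder high-density EEG leadfield
y = np.random.randn(128)         # placeholder measurement
sources = minimum_norm_estimate(L, y)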
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Principles of Site-Specific Recombinase (SSR) Technology
Authors: Frank Bucholtz.
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Site-specific recombinase (SSR) technology allows the manipulation of gene structure to explore gene function and has become an integral tool of molecular biology. Site-specific recombinases are proteins that bind to distinct DNA target sequences. The Cre/lox system was first described in bacteriophages during the 1980s. Cre recombinase is a Type I topoisomerase that catalyzes site-specific recombination of DNA between two loxP (locus of X-over P1) sites. The Cre/lox system does not require any cofactors. LoxP sequences contain distinct binding sites for Cre recombinases that surround a directional core sequence where recombination and rearrangement take place. When cells contain loxP sites and express the Cre recombinase, a recombination event occurs. Double-stranded DNA is cut at both loxP sites by the Cre recombinase, rearranged, and ligated ("scissors and glue"). The products of the recombination event depend on the relative orientation of the asymmetric sequences. SSR technology is frequently used as a tool to explore gene function. Here, the gene of interest is flanked with loxP Cre target sites ("floxed"). Animals carrying such alleles are then crossed with animals expressing the Cre recombinase under the control of a tissue-specific promoter. In tissues that express the Cre recombinase, it binds to the target sequences and excises the floxed gene. Controlled gene deletion allows the investigation of gene function in specific tissues and at distinct time points. Analysis of gene function employing SSR technology (conditional mutagenesis) has significant advantages over traditional knock-outs, where gene deletion is frequently lethal.
Cellular Biology, Issue 15, Molecular Biology, Site-Specific Recombinase, Cre recombinase, Cre/lox system, transgenic animals, transgenic technology
Concentration Determination of Nucleic Acids and Proteins Using the Micro-volume Bio-spec Nano Spectrophotometer
Authors: Suja Sukumaran.
Institutions: Scientific Instruments.
Nucleic acid quantitation procedures have advanced significantly in the last three decades. More and more, molecular biologists require consistent small-volume analysis of nucleic acid samples for their experiments. The BioSpec-nano provides a potential solution to the problems of inaccurate, non-reproducible results inherent in current DNA quantitation methods, via specialized optics and a sensitive PDA detector. The BioSpec-nano also has automated functionality such that mounting, measurement, and cleaning are done by the instrument, thereby eliminating tedious, repetitive, and inconsistent placement of the fiber optic element and manual cleaning. In this study, data are presented on the quantification of DNA and protein, as well as on measurement reproducibility and accuracy. Automated sample contact and rapid scanning allow measurement in three seconds, resulting in excellent throughput. Data analysis is carried out using the built-in features of the software. The formula used for calculating DNA concentration is:

Sample Concentration = DF × (OD260 − OD320) × NACF   (1)

where DF = sample dilution factor and NACF = nucleic acid concentration factor. The nucleic acid concentration factor is set in accordance with the analyte selected1. Protein concentration results can be expressed as μg/mL or as mol/L by entering e280 and molecular weight values, respectively. When residue values for Tyr, Trp, and cysteine (S-S bonds) are entered in the e280Calc tab, the extinction coefficient is calculated as:

e280 = 5500 × (Trp residues) + 1490 × (Tyr residues) + 125 × (cysteine S-S bonds)

The e280 value is used by the software for concentration calculation. In addition to concentration determination of nucleic acids and protein, the BioSpec-nano can be used as an ultra-micro-volume spectrophotometer for many other analytes, or as a standard spectrophotometer using 5 mm pathlength cells.
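Both formulas are simple enough to reproduce directly; a sketch implementing equation (1) and the e280 calculation (the NACF of 50 ng/μL per A260 is the usual double-stranded DNA convention, and the path length is a placeholder):

def dna_concentration(od260, od320, dilution_factor, nacf=50.0):
    # Equation (1): concentration = DF * (OD260 - OD320) * NACF, in ng/uL.
    # NACF = 50 is the dsDNA convention; the instrument sets it per analyte.
    return dilution_factor * (od260 - od320) * nacf

def epsilon_280(trp, tyr, cystines):
    # e280 = 5500*Trp + 1490*Tyr + 125*(Cys S-S bonds), in 1/(M*cm).
    return 5500 * trp + 1490 * tyr + 125 * cystines

def protein_molar_concentration(a280, trp, tyr, cystines, pathlength_cm=0.1):
    # Beer-Lambert law: c = A / (epsilon * l), giving mol/L.
    return a280 / (epsilon_280(trp, tyr, cystines) * pathlength_cm)

print(dna_concentration(od260=0.75, od320=0.01, dilution_factor=10))  # ng/uL
print(protein_molar_concentration(a280=0.42, trp=4, tyr=11, cystines=2))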
Molecular Biology, Issue 48, Nucleic acid quantitation, protein quantitation, micro-volume analysis, label quantitation
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: a midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans, together with other recorded features, such as age and injury severity score. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the ICP prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step that helps physicians decide whether to recommend invasive ICP monitoring.
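The authors built their model in RapidMiner; purely as an illustration of the same feature-selection-plus-SVM pattern, an equivalent scikit-learn pipeline might look like this (the feature matrix, labels, and hyperparameters are all placeholders):

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per patient: e.g. midline shift, texture measures, blood amount,
# age, injury severity score (random placeholder values here).
X = np.random.rand(60, 12)
y = np.random.randint(0, 2, 60)    # 0 = normal ICP, 1 = elevated ICP

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=5),   # feature selection
                      SVC(kernel="rbf", C=1.0))      # SVM classifier
model.fit(X, y)
print(model.predict(X[:5]))        # pre-screening prediction for 5 patients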
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
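JoVE does not publish the details of its matching algorithm; one generic way such abstract-to-video matching is commonly implemented is TF-IDF vectorization with cosine similarity, sketched here with made-up snippets of text:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_descriptions = [
    "protein structure and function prediction with an online server",
    "plasmid DNA purification from bacterial cultures in filter plates",
]
abstract = ["machine learning prediction of DNA-binding residues from structure"]

vectorizer = TfidfVectorizer(stop_words="english")
video_matrix = vectorizer.fit_transform(video_descriptions)
scores = cosine_similarity(vectorizer.transform(abstract), video_matrix)[0]
ranked = sorted(zip(scores, video_descriptions), reverse=True)
print(ranked[:10])   # the 10-30 best-scoring videos would be displayed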

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos bearing only a slight relation.