Pubmed Article
On the encoding of proteins for disordered regions prediction.
PUBLISHED: 01-01-2013
Disordered regions, i.e., regions of proteins that do not adopt a stable three-dimensional structure, have been shown to play various and critical roles in many biological processes. Predicting and understanding their formation is therefore a key sub-problem of protein structure and function inference. A wide range of machine learning approaches have been developed to automatically predict disordered regions of proteins. One key factor in the success of these methods is the way in which protein information is encoded into features. Recently, we proposed a systematic methodology to study the relevance of various feature encodings in the context of disulfide connectivity pattern prediction. In the present paper, we adapt this methodology to the problem of predicting disordered regions and assess it on proteins from the 10th CASP competition, as well as on a very large subset of proteins extracted from the PDB. Our results, obtained with ensembles of extremely randomized trees, highlight a novel feature function encoding the proximity of residues according to their solvent accessibility, which plays the second most important role in the prediction of disordered regions, just after evolutionary information. Furthermore, even though our approach treats each residue independently, our results are very competitive in accuracy with the state of the art. A web-application is available at
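The abstract above treats each residue independently, turning it into a fixed-length feature vector before classification with ensembles of extremely randomized trees. The sliding-window idea behind such per-residue encodings can be sketched as follows; the window size, the plain one-hot encoding, and the toy sequence are illustrative assumptions (the published method relies on richer features such as evolutionary information and solvent-accessibility proximity).

```python
# Sliding-window, per-residue feature encoding for disorder prediction.
# Illustrative sketch only: window size, one-hot encoding, and the example
# sequence are assumptions, not the encodings evaluated in the paper.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_residue(sequence, pos, half_window=7):
    """One-hot encode a window of residues centered on `pos`.

    Positions falling outside the sequence stay all-zero, so every
    residue yields a fixed-length vector suitable for a per-residue
    classifier such as an ensemble of extremely randomized trees.
    """
    width = 2 * half_window + 1
    features = [0.0] * (width * len(AMINO_ACIDS))
    for offset in range(-half_window, half_window + 1):
        i = pos + offset
        if 0 <= i < len(sequence) and sequence[i] in AA_INDEX:
            col = offset + half_window
            features[col * len(AMINO_ACIDS) + AA_INDEX[sequence[i]]] = 1.0
    return features

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence
X = [encode_residue(seq, p) for p in range(len(seq))]  # one row per residue
```

Each row of `X` would then be paired with a disordered/ordered label and passed to a tree-ensemble classifier.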
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Published: 11-03-2011
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score that estimates their accuracy without reference to experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users on the basis of experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Using Microwave and Macroscopic Samples of Dielectric Solids to Study the Photonic Properties of Disordered Photonic Bandgap Materials
Authors: Seyed Reza Hashemizad, Sam Tsitrin, Polin Yadak, Yingquan He, Daniel Cuneo, Eric Paul Williamson, Devin Liner, Weining Man.
Institutions: San Francisco State University.
Recently, disordered photonic materials have been suggested as an alternative to periodic crystals for the formation of a complete photonic bandgap (PBG). In this article we describe methods for constructing and characterizing macroscopic disordered photonic structures using microwaves. The microwave regime offers the most convenient experimental sample size to build and test PBG media. Easily manipulated dielectric lattice components provide flexibility in building various 2D structures on top of pre-printed plastic templates. Once built, the structures can be quickly modified with point and line defects to make freeform waveguides and filters. Testing is done using a widely available Vector Network Analyzer and pairs of microwave horn antennas. Due to the scale invariance of electromagnetic fields, the results we obtained in the microwave region can be directly applied to the infrared and optical regions. Our approach is simple but delivers exciting new insight into the nature of light and disordered-matter interaction. Our representative results include the first experimental demonstration of the existence of a complete and isotropic PBG in a two-dimensional (2D) hyperuniform disordered dielectric structure. Additionally, we demonstrate experimentally the ability of this novel photonic structure to guide electromagnetic (EM) waves through freeform waveguides of arbitrary shape.
Physics, Issue 91, optics and photonics, photonic crystals, photonic bandgap, hyperuniform, disordered media, waveguides
Modeling The Lifecycle Of Ebola Virus Under Biosafety Level 2 Conditions With Virus-like Particles Containing Tetracistronic Minigenomes
Authors: Thomas Hoenen, Ari Watt, Anita Mora, Heinz Feldmann.
Institutions: National Institute of Allergy and Infectious Diseases, National Institutes of Health.
Ebola viruses cause severe hemorrhagic fevers in humans and non-human primates, with case fatality rates as high as 90%. There are no approved vaccines or specific treatments for the disease caused by these viruses, and work with infectious Ebola viruses is restricted to biosafety level 4 laboratories, significantly limiting the research on these viruses. Lifecycle modeling systems model the virus lifecycle under biosafety level 2 conditions; however, until recently such systems have been limited to either individual aspects of the virus lifecycle, or a single infectious cycle. Tetracistronic minigenomes, which consist of Ebola virus non-coding regions, a reporter gene, and three Ebola virus genes involved in morphogenesis, budding, and entry (VP40, GP1,2, and VP24), can be used to produce replication- and transcription-competent virus-like particles (trVLPs) containing these minigenomes. These trVLPs can continuously infect cells expressing the Ebola virus proteins responsible for genome replication and transcription, allowing us to safely model multiple infectious cycles under biosafety level 2 conditions. Importantly, the viral components of this system are solely derived from Ebola virus and not from other viruses (as is, for example, the case in systems using pseudotyped viruses), and VP40, GP1,2 and VP24 are not overexpressed in this system, making it ideally suited for studying morphogenesis, budding and entry, although other aspects of the virus lifecycle such as genome replication and transcription can also be modeled with this system. Therefore, the tetracistronic trVLP assay represents the most comprehensive lifecycle modeling system available for Ebola viruses, and has tremendous potential for use in investigating the biology of Ebola viruses in the future. Here, we provide detailed information on the use of this system, as well as on expected results.
Infectious Diseases, Issue 91, hemorrhagic Fevers, Viral, Mononegavirales Infections, Ebola virus, filovirus, lifecycle modeling system, minigenome, reverse genetics, virus-like particles, replication, transcription, budding, morphogenesis, entry
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
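The coded design matrix at the heart of a DoE screen can be generated mechanically. The sketch below builds a two-level full factorial and then a half-fraction selected by a defining relation; the three factor names are hypothetical placeholders, not the parameters tested in the study.

```python
from itertools import product

# Two-level full-factorial design in coded units (-1 = low, +1 = high).
# Factor names are hypothetical placeholders for this sketch.
factors = {
    "promoter_strength": (-1, +1),
    "leaf_age_days":     (-1, +1),
    "incubation_temp":   (-1, +1),
}

# Full factorial: every combination of factor levels, 2**3 = 8 runs.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Half-fraction: keep runs whose coded levels multiply to +1
# (defining relation I = ABC), halving the experimental effort.
fraction = [run for run in design
            if run["promoter_strength"] * run["leaf_age_days"]
               * run["incubation_temp"] == +1]
```

Software-guided DoE tools perform essentially this enumeration, then augment the design step-wise as the abstract describes.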
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Authors: Monica Soldi, Tiziana Bonaldi.
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions are regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the purity and amount appropriate for subsequent in-depth analysis by mass spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), in which a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setup of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and establish it as a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation , histone variants, chromatome, hPTMs cross-talks
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded stained samples, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
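As a minimal illustration of the fully automated end of this spectrum, the sketch below segments a tiny synthetic 2D slice by intensity thresholding. The array values and cutoff are invented for illustration; real EM volumes demand noise filtering, contrast normalization, and usually semi-automated correction, as the triage scheme above reflects.

```python
# Binary segmentation of a synthetic 2D slice by intensity thresholding.
# Values and cutoff are invented; real EM data need far more elaborate
# pipelines (denoising, normalization, manual proofreading).
def threshold_segment(image, cutoff):
    """Return a binary mask: 1 where intensity exceeds `cutoff`, else 0."""
    return [[1 if v > cutoff else 0 for v in row] for row in image]

slice_ = [
    [10, 12, 11, 90, 92],
    [11, 13, 95, 94, 12],
    [10, 96, 93, 11, 10],
]
mask = threshold_segment(slice_, cutoff=50)
foreground = sum(map(sum, mask))  # number of segmented voxels
```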
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
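The fitting step for a simple binding event can be illustrated with a single-site saturation model, Tm([L]) = Tm0 + dTmax·[L]/(Kd + [L]). The sketch below recovers Kd from synthetic melting temperatures; a crude grid search stands in for the nonlinear least-squares fit used in practice, and all numeric values are invented.

```python
# Estimating a dissociation constant from DSF melting temperatures with a
# single-site saturation model. Synthetic data; a grid search stands in
# for the nonlinear least-squares fit used in practice.
def tm_model(conc, tm0, dtmax, kd):
    """Predicted melting temperature at ligand concentration `conc`."""
    return tm0 + dtmax * conc / (kd + conc)

ligand_uM = [0, 5, 10, 25, 50, 100, 250, 500]
tm_obs = [tm_model(c, 52.0, 6.0, 40.0) for c in ligand_uM]  # true Kd = 40 uM

def fit_kd(concs, tms, tm0, dtmax):
    """Grid-search the Kd minimizing the sum of squared residuals."""
    best_kd, best_sse = None, float("inf")
    for k in range(1, 5000):           # Kd candidates: 0.1 to 499.9 uM
        kd = k / 10
        sse = sum((t - tm_model(c, tm0, dtmax, kd)) ** 2
                  for c, t in zip(concs, tms))
        if sse < best_sse:
            best_kd, best_sse = kd, sse
    return best_kd

kd_est = fit_kd(ligand_uM, tm_obs, tm0=52.0, dtmax=6.0)
```

With real data, Tm0 and dTmax would be fitted alongside Kd, and cooperative binding would swap in a Hill-type model.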
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Reporter-based Growth Assay for Systematic Analysis of Protein Degradation
Authors: Itamar Cohen, Yifat Geffen, Guy Ravid, Tommer Ravid.
Institutions: The Hebrew University of Jerusalem.
Protein degradation by the ubiquitin-proteasome system (UPS) is a major regulatory mechanism for protein homeostasis in all eukaryotes. The standard approach to determining intracellular protein degradation relies on biochemical assays for following the kinetics of protein decline. Such methods are often laborious and time consuming and therefore not amenable to experiments aimed at assessing multiple substrates and degradation conditions. As an alternative, cell growth-based assays have been developed that are, in their conventional format, end-point assays that cannot quantitatively determine relative changes in protein levels. Here we describe a method that faithfully determines changes in protein degradation rates by coupling them to yeast cell-growth kinetics. The method is based on an established selection system where uracil auxotrophy of URA3-deleted yeast cells is rescued by an exogenously expressed reporter protein, comprised of a fusion between the essential URA3 gene and a degradation determinant (degron). The reporter protein is designed so that its synthesis rate is constant whilst its degradation rate is determined by the degron. As cell growth in uracil-deficient medium is proportional to the relative levels of Ura3, growth kinetics are entirely dependent on the reporter protein degradation. This method accurately measures changes in intracellular protein degradation kinetics. It was applied to: (a) assessing the relative contribution of known ubiquitin-conjugating factors to proteolysis; (b) structure-function analyses of E2 conjugating enzymes; and (c) identification and characterization of novel degrons. Application of the degron-URA3-based system transcends the protein degradation field, as it can also be adapted to monitoring changes of protein levels associated with functions of other cellular pathways.
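Because growth in uracil-deficient medium tracks reporter levels, the readout of this assay is a growth curve. A doubling time can be extracted from OD600 measurements by log-linear regression, as sketched below with synthetic data (a real experiment would compare strains carrying different degrons).

```python
import math

# Doubling time from OD600 growth measurements via log-linear regression.
# Data are synthetic, generated with a true doubling time of 1.5 h.
times_h = [0, 1, 2, 3, 4, 5]
od600 = [0.05 * 2 ** (t / 1.5) for t in times_h]

def doubling_time(times, ods):
    """Least-squares slope of ln(OD) vs time gives growth rate k; Td = ln(2)/k."""
    n = len(times)
    logs = [math.log(o) for o in ods]
    mean_t, mean_l = sum(times) / n, sum(logs) / n
    k = (sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs))
         / sum((t - mean_t) ** 2 for t in times))
    return math.log(2) / k

td = doubling_time(times_h, od600)  # recovers the 1.5 h doubling time
```

A faster-degrading reporter lowers steady-state Ura3 and lengthens the fitted doubling time, which is the quantity the assay compares across degrons.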
Cellular Biology, Issue 93, Protein Degradation, Ubiquitin, Proteasome, Baker's Yeast, Growth kinetics, Doubling time
Identifying Protein-protein Interaction Sites Using Peptide Arrays
Authors: Hadar Amartely, Anat Iosub-Amir, Assaf Friedler.
Institutions: The Hebrew University of Jerusalem.
Protein-protein interactions mediate most of the processes in the living cell and control homeostasis of the organism. Impaired protein interactions may result in disease, making protein interactions important drug targets. It is thus highly important to understand these interactions at the molecular level. Protein interactions are studied using a variety of techniques ranging from cellular and biochemical assays to quantitative biophysical assays, and these may be performed either with full-length proteins, with protein domains or with peptides. Peptides serve as excellent tools to study protein interactions since peptides can be easily synthesized and allow the focusing on specific interaction sites. Peptide arrays enable the identification of the interaction sites between two proteins as well as screening for peptides that bind the target protein for therapeutic purposes. They also allow high-throughput structure-activity relationship (SAR) studies. For identification of binding sites, a typical peptide array usually contains partly overlapping 10-20-residue peptides derived from the full sequences of one or more partner proteins of the desired target protein. Screening the array for binding to the target protein reveals the binding peptides, corresponding to the binding sites in the partner proteins, in an easy and fast method using only a small amount of protein. In this article we describe a protocol for screening peptide arrays for mapping the interaction sites between a target protein and its partners. The peptide array is designed based on the sequences of the partner proteins, taking into account their secondary structures. The arrays used in this protocol were Celluspots arrays prepared by INTAVIS Bioanalytical Instruments. The array is blocked to prevent unspecific binding and then incubated with the studied protein. Detection using an antibody reveals the binding peptides corresponding to the specific interaction sites between the proteins.
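The array-design step, tiling a partner protein into partly overlapping peptides, can be sketched as follows; the peptide length, offset, and toy sequence are assumed typical values, not those of a specific study.

```python
# Tiling a partner-protein sequence into overlapping peptides for array
# synthesis. Length and offset are typical assumed values; the sequence
# is a toy example.
def tile_peptides(sequence, length=15, offset=5):
    """Peptides of `length` residues starting every `offset` residues;
    a final C-terminal window is appended so no residue is missed."""
    peptides = [sequence[s:s + length]
                for s in range(0, len(sequence) - length + 1, offset)]
    if len(sequence) >= length and peptides[-1] != sequence[-length:]:
        peptides.append(sequence[-length:])
    return peptides

partner = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
peps = tile_peptides(partner)
```

Consecutive peptides share a 10-residue overlap, so a binding site detected on two neighbouring spots can be localized to their shared segment.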
Molecular Biology, Issue 93, peptides, peptide arrays, protein-protein interactions, binding sites, peptide synthesis, micro-arrays
Fabrication and Characterization of Disordered Polymer Optical Fibers for Transverse Anderson Localization of Light
Authors: Salman Karbasi, Ryan J. Frazier, Craig R. Mirr, Karl W. Koch, Arash Mafi.
Institutions: University of Wisconsin-Milwaukee, Corning Incorporated, Corning, New York.
We develop and characterize a disordered polymer optical fiber that uses transverse Anderson localization as a novel waveguiding mechanism. The developed polymer optical fiber is composed of 80,000 strands of poly(methyl methacrylate) (PMMA) and polystyrene (PS) that are randomly mixed and drawn into a square cross section optical fiber with a side width of 250 μm. Initially, each strand is 200 μm in diameter and 8 inches long. During the mixing process of the original fiber strands, the fibers cross over each other; however, a large draw ratio guarantees that the refractive index profile is invariant along the length of the fiber for several tens of centimeters. The large refractive index difference of 0.1 between the disordered sites results in a small localized beam radius that is comparable to the beam radius of conventional optical fibers. The input light is launched from a standard single mode optical fiber using the butt-coupling method and the near-field output beam from the disordered fiber is imaged using a 40X objective and a CCD camera. The output beam diameter agrees well with the expected results from the numerical simulations. The disordered optical fiber presented in this work is the first device-level implementation of 2D Anderson localization, and can potentially be used for image transport and short-haul optical communication systems.
Physics, Issue 77, Chemistry, Optics, Physics (General), Transverse Anderson Localization, Polymer Optical Fibers, Scattering, Random Media, Optical Fiber Materials, electromagnetism, optical fibers, optical materials, optical waveguides, photonics, wave propagation (optics), fiber optics
Characterization of Electrode Materials for Lithium Ion and Sodium Ion Batteries Using Synchrotron Radiation Techniques
Authors: Marca M. Doeff, Guoying Chen, Jordi Cabana, Thomas J. Richardson, Apurva Mehta, Mona Shirpour, Hugues Duncan, Chunjoong Kim, Kinson C. Kam, Thomas Conry.
Institutions: Lawrence Berkeley National Laboratory, University of Illinois at Chicago, Stanford Synchrotron Radiation Lightsource, Haldor Topsøe A/S, PolyPlus Battery Company.
Intercalation compounds such as transition metal oxides or phosphates are the most commonly used electrode materials in Li-ion and Na-ion batteries. During insertion or removal of alkali metal ions, the redox states of transition metals in the compounds change and structural transformations such as phase transitions and/or lattice parameter increases or decreases occur. These behaviors in turn determine important characteristics of the batteries such as the potential profiles, rate capabilities, and cycle lives. The extremely bright and tunable x-rays produced by synchrotron radiation allow rapid acquisition of high-resolution data that provide information about these processes. Transformations in the bulk materials, such as phase transitions, can be directly observed using X-ray diffraction (XRD), while X-ray absorption spectroscopy (XAS) gives information about the local electronic and geometric structures (e.g. changes in redox states and bond lengths). In situ experiments carried out on operating cells are particularly useful because they allow direct correlation between the electrochemical and structural properties of the materials. These experiments are time-consuming and can be challenging to design due to the reactivity and air-sensitivity of the alkali metal anodes used in the half-cell configurations, and/or the possibility of signal interference from other cell components and hardware. For these reasons, it is appropriate to carry out ex situ experiments (e.g. on electrodes harvested from partially charged or cycled cells) in some cases. Here, we present detailed protocols for the preparation of both ex situ and in situ samples for experiments involving synchrotron radiation and demonstrate how these experiments are done.
Physics, Issue 81, X-Ray Absorption Spectroscopy, X-Ray Diffraction, inorganic chemistry, electric batteries (applications), energy storage, Electrode materials, Li-ion battery, Na-ion battery, X-ray Absorption Spectroscopy (XAS), in situ X-ray diffraction (XRD)
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
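The first analysis step above relies on Gabor filters tuned to oriented texture. A real-valued Gabor kernel of that kind can be built directly, as in the sketch below; the size, wavelength, and sigma are illustrative assumptions, and the full detector would convolve the mammogram with a bank of such kernels over many orientations.

```python
import math

# Real-valued Gabor kernel for oriented-texture analysis. Parameter
# values are illustrative assumptions; a full detector applies a bank
# of such kernels across orientations, then fits phase portraits.
def gabor_kernel(size, theta, wavelength, sigma):
    """`size` x `size` Gabor filter with its sinusoid oriented at `theta` radians."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Project (x, y) onto the direction theta for the carrier wave;
            # the Gaussian envelope here is isotropic for simplicity.
            xr = x * math.cos(theta) + y * math.sin(theta)
            gauss = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(gauss * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

kernel = gabor_kernel(size=15, theta=0.0, wavelength=8.0, sigma=3.0)
```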
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Play Button
Avidity-based Extracellular Interaction Screening (AVEXIS) for the Scalable Detection of Low-affinity Extracellular Receptor-Ligand Interactions
Authors: Jason S. Kerr, Gavin J. Wright.
Institutions: Wellcome Trust Sanger Institute.
Extracellular protein:protein interactions between secreted or membrane-tethered proteins are critical for both initiating intercellular communication and ensuring cohesion within multicellular organisms. Proteins predicted to form extracellular interactions are encoded by approximately a quarter of human genes1, but despite their importance and abundance, the majority of these proteins have no documented binding partner. Primarily, this is due to their biochemical intractability: membrane-embedded proteins are difficult to solubilise in their native conformation and contain structurally-important posttranslational modifications. Also, the interaction affinities between receptor proteins are often characterised by extremely low interaction strengths (half-lives < 1 second), precluding their detection with many commonly-used high throughput methods2. Here, we describe an assay, AVEXIS (AVidity-based EXtracellular Interaction Screen), that overcomes these technical challenges, enabling the detection of very weak protein interactions (t1/2 ≤ 0.1 sec) with a low false positive rate3. The assay is usually implemented in a high throughput format to enable the systematic screening of many thousands of interactions in a convenient microtitre plate format (Fig. 1). It relies on the production of soluble recombinant protein libraries that contain the ectodomain fragments of cell surface receptors or secreted proteins within which to screen for interactions; therefore, this approach is suitable for type I, type II, and GPI-linked cell surface receptors and secreted proteins, but not for multipass membrane proteins such as ion channels or transporters. The recombinant protein libraries are produced using a convenient and high-level mammalian expression system4 to ensure that important posttranslational modifications such as glycosylation and disulphide bonds are added.
Expressed recombinant proteins are secreted into the medium and produced in two forms: a biotinylated bait, which can be captured on a streptavidin-coated solid phase suitable for screening, and a pentamerised enzyme-tagged (β-lactamase) prey. The bait and prey proteins are presented to each other in a binary fashion to detect direct interactions between them, similar to a conventional ELISA (Fig. 1). The pentamerisation of the proteins in the prey is achieved through a peptide sequence from the cartilage oligomeric matrix protein (COMP) and increases the local concentration of the ectodomains, thereby providing significant avidity gains that enable even very transient interactions to be detected. By normalising the activities of both the bait and prey to predetermined levels prior to screening, we have shown that interactions having monomeric half-lives of 0.1 sec can be detected with low false positive rates3.
Molecular Biology, Issue 61, Receptor-ligand pairs, Extracellular protein interactions, AVEXIS, Adhesion receptors, Transient/weak interactions, High throughput screening
Play Button
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was long confined to low-level phylogenetics. However, combining the ITS2 sequence with its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11, accurately reannotated10. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold12 (direct fold) results in a correct four-helix conformation. If this is not the case, the structure is predicted by homology modeling13, in which an already known secondary structure is transferred to an ITS2 sequence for which the direct fold did not yield a correct structure. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together these form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench reduces a first phylogenetic analysis to a few mouse clicks, while also providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
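The tree-reconstruction step in the pipeline above uses Neighbor Joining. As an illustration only (this is a generic textbook NJ agglomeration on a toy distance matrix, not the ProfDistS implementation the database actually integrates), the algorithm can be sketched as:

```python
def neighbor_joining(labels, dist):
    """Generic Neighbor Joining on a distance matrix.

    labels: list of taxon names; dist: dict mapping frozenset({i, j})
    to the distance between i and j (mutated in place as internal
    nodes are added). Returns the list of join events as
    ((i, j), (branch_i, branch_j), new_node_name).
    """
    labels = list(labels)
    joins, counter = [], 0
    while len(labels) > 2:
        n = len(labels)
        # Net divergence r_i: sum of distances from i to all other taxa.
        r = {i: sum(dist[frozenset((i, k))] for k in labels if k != i)
             for i in labels}
        best = None
        for a in range(n):
            for b in range(a + 1, n):
                i, j = labels[a], labels[b]
                q = (n - 2) * dist[frozenset((i, j))] - r[i] - r[j]
                if best is None or q < best[0]:
                    best = (q, i, j)
        _, i, j = best
        d_ij = dist[frozenset((i, j))]
        branch_i = d_ij / 2 + (r[i] - r[j]) / (2 * (n - 2))
        branch_j = d_ij - branch_i
        new = f"node{counter}"
        counter += 1
        # Distances from the new internal node to the remaining taxa.
        for k in labels:
            if k != i and k != j:
                dist[frozenset((new, k))] = (
                    dist[frozenset((i, k))] + dist[frozenset((j, k))] - d_ij) / 2
        labels.remove(i)
        labels.remove(j)
        labels.append(new)
        joins.append(((i, j), (branch_i, branch_j), new))
    return joins

def pk(i, j):
    return frozenset((i, j))

# Additive distances generated from the tree ((a:2,b:3):1,(c:4,d:5)):
dist = {pk('a', 'b'): 5.0, pk('a', 'c'): 7.0, pk('a', 'd'): 8.0,
        pk('b', 'c'): 8.0, pk('b', 'd'): 9.0, pk('c', 'd'): 9.0}
joins = neighbor_joining(['a', 'b', 'c', 'd'], dist)
```

Because the toy distances are additive, NJ recovers the generating tree exactly: the first join pairs a and b with branch lengths 2 and 3.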
Play Button
Analyzing and Building Nucleic Acid Structures with 3DNA
Authors: Andrew V. Colasanti, Xiang-Jun Lu, Wilma K. Olson.
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamics (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software
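The rebuild-from-parameters idea behind Protocol 3 can be illustrated in miniature. The sketch below is not 3DNA's algorithm (which applies full six-parameter rigid-body transforms to base-pair reference frames); it merely regenerates base-pair origins of an idealized helix from two assumed step parameters, the canonical B-DNA values of 36° twist and 3.4 Å rise:

```python
import math

def build_helix_origins(n_steps, twist_deg=36.0, rise=3.4, radius=0.0):
    """Regenerate base-pair origins of an idealized helix from
    per-step rigid-body parameters: a twist about the helix axis
    and a rise along it. With radius=0 the origins lie on the
    axis itself; the accumulated twist is still tracked.
    Returns (origins, total_twist_in_degrees).
    """
    origins = []
    for i in range(n_steps + 1):
        angle = math.radians(i * twist_deg)
        origins.append((radius * math.cos(angle),
                        radius * math.sin(angle),
                        i * rise))
    return origins, n_steps * twist_deg

origins, total_twist = build_helix_origins(10)
# 10 steps of 36 degrees give one full helical turn (360 degrees)
# and 10 * 3.4 = 34 angstroms of rise.
```

Going the other way, from coordinates back to step parameters, is the analysis direction of Protocol 2.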
Play Button
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide to identify the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional ICP-related features are extracted, such as texture information and the amount of blood in the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
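The SVM classification stage described above can be sketched with a minimal linear SVM trained by hinge-loss sub-gradient descent. The feature names and numbers below are invented for illustration only; the paper's actual models were built in RapidMiner on real CT-derived features:

```python
def train_linear_svm(data, epochs=200, lr=0.05, lam=0.01):
    """Train a linear SVM by hinge-loss sub-gradient descent.

    data: list of (feature_vector, label) pairs, label in {-1, +1}.
    Returns the weight vector w and bias b.
    """
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:
                # Point violates the margin: hinge gradient plus L2 shrink.
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                # Correctly classified with margin: regularization only.
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Invented toy features: [normalized midline shift, texture score];
# +1 = elevated ICP, -1 = normal (synthetic data, illustration only).
toy = [([0.9, 0.8], 1), ([0.7, 0.9], 1), ([0.8, 0.7], 1),
       ([0.1, 0.2], -1), ([0.2, 0.1], -1), ([0.15, 0.25], -1)]
w, b = train_linear_svm(toy)
```

A feature-selection step (as in the paper) would simply restrict the feature vectors before training.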
Play Button
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
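The cross-modal scheme above can be reduced to a toy: fit a classifier on activity patterns evoked by one set of stimuli, then apply it to patterns from held-out stimuli. A nearest-centroid classifier stands in here for the MVPA classifiers used in the studies, and the "voxel" vectors are synthetic:

```python
def centroid(patterns):
    """Element-wise mean of a list of equal-length pattern vectors."""
    dim = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(dim)]

def nearest_centroid_train(train):
    # train: dict label -> list of voxel-pattern vectors
    return {label: centroid(ps) for label, ps in train.items()}

def nearest_centroid_predict(centroids, pattern):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], pattern))

# Synthetic "auditory cortex" patterns (invented numbers) evoked by
# muted sound-implying video clips of two categories.
train = {
    "shatter": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    "howl":    [[0.1, 0.9, 0.8], [0.2, 1.0, 0.7]],
}
centroids = nearest_centroid_train(train)
```

The cross-modal claim is tested by decoding patterns recorded in the auditory cortex while the stimulus itself was purely visual; above-chance accuracy on such held-out patterns is the evidence of content-specific reinstatement.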
Play Button
Interview: Protein Folding and Studies of Neurodegenerative Diseases
Authors: Susan Lindquist.
Institutions: MIT - Massachusetts Institute of Technology.
In this interview, Dr. Lindquist describes relationships between protein folding, prion diseases, and neurodegenerative disorders. The problem of protein folding is at the core of modern biology. In addition to their traditional biochemical functions, proteins can mediate the transfer of biological information and can therefore be considered a form of genetic material. This recently discovered function of proteins has important implications for studies of human disorders. Dr. Lindquist also describes current experimental approaches to investigating the mechanisms of neurodegenerative diseases based on genetic studies in model organisms.
Neuroscience, issue 17, protein folding, brain, neuron, prion, neurodegenerative disease, yeast, screen, Translational Research
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms still do their best to display videos with relevant content, which can sometimes result in matches that are only slightly related.
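JoVE's actual matching algorithm is not described here. A common approach to this kind of abstract-to-video text matching is TF-IDF weighting with cosine similarity, sketched below on invented mini-documents:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: dict name -> token list. Returns name -> {term: weight}."""
    n = len(docs)
    df = Counter()  # document frequency of each term
    for tokens in docs.values():
        df.update(set(tokens))
    vecs = {}
    for name, tokens in docs.items():
        tf = Counter(tokens)
        # Terms appearing in every document get weight log(1) = 0.
        vecs[name] = {t: tf[t] * math.log(n / df[t]) for t in tf}
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented mini-corpus: two "video descriptions" and one "abstract".
videos = {
    "protein_video": "protein structure prediction folding model".split(),
    "brain_video": "brain imaging fmri cortex activity".split(),
}
abstract = "disordered protein regions structure prediction".split()
vecs = tfidf_vectors({**videos, "abstract": abstract})
ranked = sorted(videos,
                key=lambda v: cosine(vecs["abstract"], vecs[v]),
                reverse=True)
```

A real system would also stem tokens, drop stop words, and cut off matches below a similarity threshold, which is one way "only slightly related" videos can still surface.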