JoVE Visualize
Related JoVE Video
 
Pubmed Article
Optimal ligand descriptor for pocket recognition based on the Beta-shape.
PLoS ONE
PUBLISHED: 04-03-2015
Structure-based virtual screening is one of the most important and common computational methods for the identification of predicted hits at the beginning of drug discovery. Pocket recognition and definition are frequently a prerequisite of structure-based virtual screening, reducing the search space of the predicted protein-ligand complex. In this paper, we present an optimal ligand shape descriptor for a pocket recognition algorithm based on the beta-shape, which is a derivative structure of the Voronoi diagram of atoms. We investigate six candidates for a shape descriptor for a ligand using statistical analysis: the minimum enclosing sphere, three measures from the principal component analysis of atoms, the van der Waals volume, and the beta-shape volume. Among them, the van der Waals volume of a ligand is the optimal shape descriptor for pocket recognition and best tunes the pocket recognition algorithm based on the beta-shape for efficient virtual screening. The performance of the proposed algorithm is verified by a benchmark test.
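The abstract above singles out the van der Waals volume of the ligand as the best-performing shape descriptor. Purely as an illustration of that quantity, and assuming nothing about the authors' beta-shape implementation, the following Python sketch estimates the van der Waals volume of a toy ligand as the volume of the union of its atomic spheres, using Monte Carlo sampling and illustrative atomic radii.

```python
import numpy as np

# Illustrative van der Waals radii (angstroms); values are approximate.
VDW_RADII = {"C": 1.70, "N": 1.55, "O": 1.52, "S": 1.80, "H": 1.20}

def vdw_volume(coords, elements, n_samples=200_000, seed=0):
    """Estimate the van der Waals volume of a ligand (union of atomic
    spheres) by Monte Carlo sampling inside its bounding box."""
    coords = np.asarray(coords, dtype=float)
    radii = np.array([VDW_RADII[e] for e in elements])
    lo = (coords - radii[:, None]).min(axis=0)
    hi = (coords + radii[:, None]).max(axis=0)
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    # A point is inside the ligand if it falls within any atomic sphere.
    d2 = ((pts[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    inside = (d2 <= radii[None, :] ** 2).any(axis=1)
    box_volume = np.prod(hi - lo)
    return inside.mean() * box_volume

# Toy example: a three-atom fragment with hypothetical coordinates.
coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (2.3, 1.2, 0.0)]
print(round(vdw_volume(coords, ["C", "C", "O"]), 1), "A^3")
```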
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Published: 07-25-2013
ABSTRACT
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
22 Related JoVE Articles!
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score, which indicates how accurate the predictions are expected to be in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information could be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
3259
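As a small illustration of how the confidence scores mentioned above might be post-processed, the sketch below ranks hypothetical predicted models by C-score and keeps those above an assumed cutoff. The score values, the cutoff and the data layout are assumptions for illustration, not the server's documented output format.

```python
# Hypothetical post-processing of I-TASSER results: the server reports a
# confidence score (C-score) for each predicted model; here we assume the
# scores have been copied into a simple list of (model name, C-score) pairs.
models = [
    ("model1", 1.02),
    ("model2", -0.85),
    ("model3", -2.40),
    ("model4", -3.90),
    ("model5", -4.70),
]

CUTOFF = -1.5  # illustrative threshold; a higher C-score means higher confidence

confident = [(name, score) for name, score in models if score >= CUTOFF]
confident.sort(key=lambda pair: pair[1], reverse=True)

for name, score in confident:
    print(f"{name}: C-score = {score:+.2f}")
```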
Genetically-encoded Molecular Probes to Study G Protein-coupled Receptors
Authors: Saranga Naganathan, Amy Grunbeck, He Tian, Thomas Huber, Thomas P. Sakmar.
Institutions: The Rockefeller University.
To facilitate structural and dynamic studies of G protein-coupled receptor (GPCR) signaling complexes, new approaches are required to introduce informative probes or labels into expressed receptors that do not perturb receptor function. We used amber codon suppression technology to genetically-encode the unnatural amino acid, p-azido-L-phenylalanine (azF) at various targeted positions in GPCRs heterologously expressed in mammalian cells. The versatility of the azido group is illustrated here in different applications to study GPCRs in their native cellular environment or under detergent solubilized conditions. First, we demonstrate a cell-based targeted photocrosslinking technology to identify the residues in the ligand-binding pocket of GPCR where a tritium-labeled small-molecule ligand is crosslinked to a genetically-encoded azido amino acid. We then demonstrate site-specific modification of GPCRs by the bioorthogonal Staudinger-Bertozzi ligation reaction that targets the azido group using phosphine derivatives. We discuss a general strategy for targeted peptide-epitope tagging of expressed membrane proteins in-culture and its detection using a whole-cell-based ELISA approach. Finally, we show that azF-GPCRs can be selectively tagged with fluorescent probes. The methodologies discussed are general, in that they can in principle be applied to any amino acid position in any expressed GPCR to interrogate active signaling complexes.
Genetics, Issue 79, Receptors, G-Protein-Coupled, Protein Engineering, Signal Transduction, Biochemistry, Unnatural amino acid, site-directed mutagenesis, G protein-coupled receptor, targeted photocrosslinking, bioorthogonal labeling, targeted epitope tagging
50588
Visualizing Neuroblast Cytokinesis During C. elegans Embryogenesis
Authors: Denise Wernike, Chloe van Oostende, Alisa Piekny.
Institutions: Concordia University.
This protocol describes the use of fluorescence microscopy to image dividing cells within developing Caenorhabditis elegans embryos. In particular, this protocol focuses on how to image dividing neuroblasts, which are found underneath the epidermal cells and may be important for epidermal morphogenesis. Tissue formation is crucial for metazoan development and relies on external cues from neighboring tissues. C. elegans is an excellent model organism to study tissue morphogenesis in vivo due to its transparency and simple organization, making its tissues easy to study via microscopy. Ventral enclosure is the process where the ventral surface of the embryo is covered by a single layer of epithelial cells. This event is thought to be facilitated by the underlying neuroblasts, which provide chemical guidance cues to mediate migration of the overlying epithelial cells. However, the neuroblasts are highly proliferative and also may act as a mechanical substrate for the ventral epidermal cells. Studies using this experimental protocol could uncover the importance of intercellular communication during tissue formation, and could be used to reveal the roles of genes involved in cell division within developing tissues.
Neuroscience, Issue 85, C. elegans, morphogenesis, cytokinesis, neuroblasts, anillin, microscopy, cell division
51188
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
51673
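The triage idea described above, choosing among the four segmentation strategies from data-set characteristics, can be caricatured as a small decision helper. The rules below are illustrative assumptions, not the authors' published scheme.

```python
# Toy decision helper inspired by the triage scheme: it maps a few
# qualitative data-set characteristics to one of the four segmentation
# strategies. The specific rules are illustrative assumptions.

def suggest_segmentation(snr, crispness, characteristic_shape, roi_fraction):
    """snr/crispness: 'low' or 'high'; characteristic_shape: bool;
    roi_fraction: fraction of the volume occupied by the feature (0-1)."""
    if snr == "high" and crispness == "high" and characteristic_shape:
        return "automated custom algorithm + surface rendering"
    if snr == "high" and crispness == "high":
        return "semi-automated segmentation + surface rendering"
    if roi_fraction < 0.05:
        return "manual tracing + surface rendering"
    return "fully manual model building + visualization"

print(suggest_segmentation("high", "high", True, 0.20))
print(suggest_segmentation("low", "low", False, 0.01))
```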
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
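The source reconstruction described above rests on an ill-posed inverse problem; the minimum-norm estimation named in the keywords has a compact closed form. The numpy sketch below applies a classical L2 minimum-norm inverse to a random toy leadfield; in practice the leadfield would come from the individual or age-appropriate head model rather than random numbers, and the regularization value is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: 64 EEG channels, 500 cortical sources (a real head model
# derived from structural MRI would define the leadfield instead).
n_channels, n_sources = 64, 500
leadfield = rng.standard_normal((n_channels, n_sources))   # forward model
data = rng.standard_normal(n_channels)                     # one time sample

lam = 0.1  # regularization parameter (assumed value)

# Classical L2 minimum-norm estimate:
#   J = L^T (L L^T + lam^2 I)^{-1} y
gram = leadfield @ leadfield.T + lam**2 * np.eye(n_channels)
sources = leadfield.T @ np.linalg.solve(gram, data)

print("strongest source index:", int(np.argmax(np.abs(sources))))
```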
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
51809
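As one possible way to estimate a dissociation constant from DSF data of the kind described above, the sketch below fits a simple saturating single-site model of melting-temperature shift versus ligand concentration with scipy. The hyperbolic model, the data values and the starting parameters are assumptions for illustration; the paper itself discusses more rigorous and more complex models.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified single-site model: the melting temperature shift saturates as
# the ligand concentration increases.
def tm_model(ligand, tm0, dtm_max, kd):
    return tm0 + dtm_max * ligand / (kd + ligand)

# Hypothetical DSF data: ligand concentration (uM) vs observed Tm (deg C).
ligand = np.array([0, 5, 10, 25, 50, 100, 250, 500], dtype=float)
tm_obs = np.array([52.0, 52.9, 53.6, 55.0, 56.1, 57.0, 57.8, 58.1])

popt, pcov = curve_fit(tm_model, ligand, tm_obs, p0=[52.0, 6.0, 50.0])
tm0, dtm_max, kd = popt
perr = np.sqrt(np.diag(pcov))

print(f"Tm0 = {tm0:.1f} C, dTm_max = {dtm_max:.1f} C, "
      f"Kd = {kd:.0f} uM (+/- {perr[2]:.0f})")
```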
Inhibitory Synapse Formation in a Co-culture Model Incorporating GABAergic Medium Spiny Neurons and HEK293 Cells Stably Expressing GABAA Receptors
Authors: Laura E. Brown, Celine Fuchs, Martin W. Nicholson, F. Anne Stephenson, Alex M. Thomson, Jasmina N. Jovanovic.
Institutions: University College London.
Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to influx of chloride, resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials. During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses which incorporate specific GABAARs subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other. To elucidate the underlying molecular mechanisms, a novel in vitro co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogenous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAAR subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAAR subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro model system can be used to reproduce, at least in part, the in vivo conditions required for the recognition of the appropriate synaptic partners and formation of specific synapses. Here the protocols for culturing the medium spiny neurons and generating HEK293 cell lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.
Neuroscience, Issue 93, Developmental neuroscience, synaptogenesis, synaptic inhibition, co-culture, stable cell lines, GABAergic, medium spiny neurons, HEK 293 cell line
52115
Generation of Human Alloantigen-specific T Cells from Peripheral Blood
Authors: Burhan P Jama, Gerald P Morris.
Institutions: University of California, San Diego.
The study of human T lymphocyte biology often involves examination of responses to activating ligands. T cells recognize and respond to processed peptide antigens presented by MHC (human ortholog HLA) molecules through the T cell receptor (TCR) in a highly sensitive and specific manner. While the primary function of T cells is to mediate protective immune responses to foreign antigens presented by self-MHC, T cells respond robustly to antigenic differences in allogeneic tissues. T cell responses to alloantigens can be described as either direct or indirect alloreactivity. In alloreactivity, the T cell responds through highly specific recognition of both the presented peptide and the MHC molecule. The robust oligoclonal response of T cells to allogeneic stimulation reflects the large number of potentially stimulatory alloantigens present in allogeneic tissues. While the breadth of alloreactive T cell responses is an important factor in initiating and mediating the pathology associated with biologically-relevant alloreactive responses such as graft versus host disease and allograft rejection, it can preclude analysis of T cell responses to allogeneic ligands. To this end, this protocol describes a method for generating alloreactive T cells from naive human peripheral blood leukocytes (PBL) that respond to known peptide-MHC (pMHC) alloantigens. The protocol applies pMHC multimer labeling, magnetic bead enrichment and flow cytometry to single cell in vitro culture methods for the generation of alloantigen-specific T cell clones. This enables studies of the biochemistry and function of T cells responding to allogeneic stimulation.
Immunology, Issue 93, T cell, immunology, human cell culture, transplantation, flow cytometry, alloreactivity
52257
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Authors: Sandra Zehentmeier, Zoltan Cseresnyes, Juan Escribano Navarro, Raluca A. Niesner, Anja E. Hauser.
Institutions: German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool which is suitable for testing this hypothesis in the case of hematopoietic as well as stromal cells, is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
52544
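The random-positioning test described above can be approximated with a small Monte Carlo simulation: scatter the two cell types uniformly over the imaged area many times and compare the resulting contact counts with the observed count. The field size, cell numbers, contact distance and observed value below are all hypothetical; the authors' simulation tool is more sophisticated than this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def contact_count(pos_a, pos_b, contact_dist):
    """Number of type-A cells with at least one type-B cell within
    contact_dist (a simple proxy for direct contact)."""
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=2)
    return int((d.min(axis=1) < contact_dist).sum())

# Hypothetical 2D section: 1000 x 1000 um field, 50 rare cells (type A),
# 400 stromal cells (type B); contact if centers are closer than 15 um.
field, n_a, n_b, contact_dist = 1000.0, 50, 400, 15.0
observed = 21  # assumed observed number of A cells in contact with B

null = []
for _ in range(2000):
    pos_a = rng.uniform(0, field, size=(n_a, 2))
    pos_b = rng.uniform(0, field, size=(n_b, 2))
    null.append(contact_count(pos_a, pos_b, contact_dist))
null = np.array(null)

p_value = (null >= observed).mean()
print(f"random expectation: {null.mean():.1f} contacts, p = {p_value:.3f}")
```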
Surface Enhanced Raman Spectroscopy Detection of Biomolecules Using EBL Fabricated Nanostructured Substrates
Authors: Robert F. Peters, Luis Gutierrez-Rivera, Steven K. Dew, Maria Stepanova.
Institutions: University of Alberta, National Research Council of Canada.
Fabrication and characterization of conjugate nano-biological systems interfacing metallic nanostructures on solid supports with immobilized biomolecules is reported. The entire sequence of relevant experimental steps is described, involving the fabrication of nanostructured substrates using electron beam lithography, immobilization of biomolecules on the substrates, and their characterization utilizing surface-enhanced Raman spectroscopy (SERS). Three different designs of nano-biological systems are employed, including protein A, glucose binding protein, and a dopamine binding DNA aptamer. In the latter two cases, the binding of respective ligands, D-glucose and dopamine, is also included. The three kinds of biomolecules are immobilized on nanostructured substrates by different methods, and the results of SERS imaging are reported. The capabilities of SERS to detect vibrational modes from surface-immobilized proteins, as well as to capture the protein-ligand and aptamer-ligand binding are demonstrated. The results also illustrate the influence of the surface nanostructure geometry, biomolecules immobilization strategy, Raman activity of the molecules and presence or absence of the ligand binding on the SERS spectra acquired.
Engineering, Issue 97, Bio-functionalized surfaces, proteins, aptamers, molecular recognition, nanostructures, electron beam lithography, surface-enhanced Raman spectroscopy.
52712
In situ Quantification of Pancreatic Beta-cell Mass in Mice
Authors: Abraham Kim, German Kilimnik, Manami Hara.
Institutions: University of Chicago.
Tracing changes of specific cell populations in health and disease is an important goal of biomedical research. The process of monitoring pancreatic beta-cell proliferation and islet growth is particularly challenging. We have developed a method to capture the distribution of beta-cells in the intact pancreas of transgenic mice with fluorescence-tagged beta-cells with a macro written for ImageJ (rsb.info.nih.gov/ij/). Following pancreatic dissection and tissue clearing, the entire pancreas is captured as a virtual slice, after which the GFP-tagged beta-cells are examined. The analysis includes the quantification of total beta-cell area, islet number and size distribution with reference to specific parameters and locations for each islet and for small clusters of beta-cells. The entire distribution of islets can be plotted in three dimensions, and the information from the distribution on the size and shape of each islet allows a quantitative and qualitative comparison of changes in overall beta-cell area at a glance.
Cellular Biology, Issue 40, beta-cells, islets, mouse, pancreas
1970
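The published workflow uses an ImageJ macro; purely as an illustration of the same kinds of measurements (total beta-cell area, object count and size distribution), the Python sketch below thresholds a synthetic GFP image, labels connected objects and separates small clusters from islets by an assumed size cutoff.

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for a GFP virtual slice; the image, threshold, pixel size
# and islet/cluster cutoff are assumptions for illustration only.
rng = np.random.default_rng(0)
gfp = rng.random((512, 512)) * 0.2          # background
gfp[100:140, 100:150] += 0.8                # synthetic "islet"
gfp[300:310, 400:408] += 0.8                # synthetic small beta-cell cluster

mask = gfp > 0.5
labels, n_objects = ndimage.label(mask)

pixel_areas = np.bincount(labels.ravel())[1:]   # drop background label 0
um2_per_pixel = 0.64                            # assumed calibration
areas_um2 = pixel_areas * um2_per_pixel

islet_cutoff = 500.0  # assumed area (um^2) separating clusters from islets
islets = areas_um2[areas_um2 >= islet_cutoff]
clusters = areas_um2[areas_um2 < islet_cutoff]

print(f"total beta-cell area: {areas_um2.sum():.0f} um^2")
print(f"{len(islets)} islet(s), {len(clusters)} small cluster(s)")
```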
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in a variety of ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
50427
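Fractional anisotropy, the main voxelwise metric mentioned above, has a standard closed form in terms of the diffusion tensor eigenvalues. The short sketch below computes it for hypothetical eigenvalues; it is not the authors' processing pipeline.

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                      # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den

# Hypothetical eigenvalues (10^-3 mm^2/s): anisotropic white matter vs
# nearly isotropic tissue.
print(round(fractional_anisotropy([1.7, 0.3, 0.3]), 2))    # high FA
print(round(fractional_anisotropy([0.9, 0.8, 0.85]), 2))   # low FA
```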
Using Learning Outcome Measures to assess Doctoral Nursing Education
Authors: Glenn H. Raup, Jeff King, Romana J. Hughes, Natasha Faidley.
Institutions: Harris College of Nursing and Health Sciences, Texas Christian University.
Education programs at all levels must be able to demonstrate successful program outcomes. Grades alone do not represent a comprehensive measurement methodology for assessing student learning outcomes at either the course or program level. The development and application of assessment rubrics provides an unequivocal measurement methodology to ensure a quality learning experience by providing a foundation for improvement based on qualitatively and quantitatively measurable, aggregate course and program outcomes. Learning outcomes are the embodiment of the total learning experience and should incorporate assessment of both qualitative and quantitative program outcomes. The assessment of qualitative measures represents a challenge for educators at any level of a learning program. Nursing provides a unique challenge and opportunity as it is the application of science through the art of caring. Quantification of desired student learning outcomes may be enhanced through the development of assessment rubrics designed to measure quantitative and qualitative aspects of the nursing education and learning process. They provide a mechanism for uniform assessment by nursing faculty of concepts and constructs that are otherwise difficult to describe and measure. A protocol is presented and applied to a doctoral nursing education program with recommendations for application and transformation of the assessment rubric to other education programs. Through application of these specially designed rubrics, all aspects of an education program can be adequately assessed to provide information for program assessment that facilitates the closure of the gap between desired and actual student learning outcomes for any desired educational competency.
Medicine, Issue 40, learning, outcomes, measurement, program, assessment, rubric
2048
Assessment of Immunologically Relevant Dynamic Tertiary Structural Features of the HIV-1 V3 Loop Crown R2 Sequence by ab initio Folding
Authors: David Almond, Timothy Cardozo.
Institutions: School of Medicine, New York University.
The antigenic diversity of HIV-1 has long been an obstacle to vaccine design, and this variability is especially pronounced in the V3 loop of the virus' surface envelope glycoprotein. We previously proposed that the crown of the V3 loop, although dynamic and sequence variable, is constrained throughout the population of HIV-1 viruses to an immunologically relevant β-hairpin tertiary structure. Importantly, there are thousands of different V3 loop crown sequences in circulating HIV-1 viruses, making 3D structural characterization of trends across the diversity of viruses difficult or impossible by crystallography or NMR. Our previous successful studies with folding of the V3 crown1, 2 used the ab initio algorithm 3 accessible in the ICM-Pro molecular modeling software package (Molsoft LLC, La Jolla, CA) and suggested that the crown of the V3 loop, specifically from positions 10 to 22, benefits sufficiently from the flexibility and length of its flanking stems to behave to a large degree as if it were an unconstrained peptide freely folding in solution. As such, rapid ab initio folding of just this portion of the V3 loop of any individual strain of the 60,000+ circulating HIV-1 strains can be informative. Here, we folded the V3 loop of the R2 strain to gain insight into the structural basis of its unique properties. R2 bears a rare V3 loop sequence thought to be responsible for the exquisite sensitivity of this strain to neutralization by patient sera and monoclonal antibodies4, 5. The strain mediates CD4-independent infection and appears to elicit broadly neutralizing antibodies. We demonstrate how evaluation of the results of the folding can be informative for associating observed structures in the folding with the immunological activities observed for R2.
Infection, Issue 43, HIV-1, structure-activity relationships, ab initio simulations, antibody-mediated neutralization, vaccine design
2118
Collecting Variable-concentration Isothermal Titration Calorimetry Datasets in Order to Determine Binding Mechanisms
Authors: Lee A. Freiburger, Anthony K. Mittermaier, Karine Auclair.
Institutions: McGill University.
Isothermal titration calorimetry (ITC) is commonly used to determine the thermodynamic parameters associated with the binding of a ligand to a host macromolecule. ITC has some advantages over common spectroscopic approaches for studying host/ligand interactions. For example, the heat released or absorbed when the two components interact is directly measured and does not require any exogenous reporters. Thus the binding enthalpy and the association constant (Ka) are directly obtained from ITC data, and can be used to compute the entropic contribution. Moreover, the shape of the isotherm is dependent on the c-value and the mechanistic model involved. The c-value is defined as c = n[P]tKa, where [P]t is the protein concentration, and n is the number of ligand binding sites within the host. In many cases, multiple binding sites for a given ligand are non-equivalent and ITC allows the characterization of the thermodynamic binding parameters for each individual binding site. This however requires that the correct binding model be used. This choice can be problematic if different models can fit the same experimental data. We have previously shown that this problem can be circumvented by performing experiments at several c-values. The multiple isotherms obtained at different c-values are fit simultaneously to separate models. The correct model is next identified based on the goodness of fit across the entire variable-c dataset. This process is applied here to the aminoglycoside resistance-causing enzyme aminoglycoside N-6'-acetyltransferase-Ii (AAC(6')-Ii). Although our methodology is applicable to any system, the necessity of this strategy is better demonstrated with a macromolecule-ligand system showing allostery or cooperativity, and when different binding models provide essentially identical fits to the same data. To our knowledge, there are no such systems commercially available. AAC(6')-Ii is a homo-dimer containing two active sites, showing cooperativity between the two subunits. However, ITC data obtained at a single c-value can be fit equally well to at least two different models: a two-sets-of-sites independent model and a two-site sequential (cooperative) model. Through varying the c-value as explained above, it was established that the correct binding model for AAC(6')-Ii is a two-site sequential binding model. Herein, we describe the steps that must be taken when performing ITC experiments in order to obtain datasets suitable for variable-c analyses.
Biochemistry, Issue 50, ITC, global fitting, cooperativity, binding model, ligand
2529
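Because the abstract defines c = n[P]tKa explicitly, a quick worked example helps when planning variable-c experiments: for an assumed Ka and number of binding sites, the sketch below computes the protein concentrations needed to reach several target c-values. The numerical values are hypothetical.

```python
# Planning sketch for variable-c ITC experiments using c = n * [P]t * Ka.
ka = 1.0e6      # association constant, 1/M (assumed estimate)
n_sites = 2     # ligand binding sites per macromolecule (assumed)

for target_c in (1, 10, 100, 1000):
    protein_conc = target_c / (n_sites * ka)       # molar
    print(f"c = {target_c:>4}: [P]t = {protein_conc * 1e6:8.2f} uM")
```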
A Mouse Model of the Cornea Pocket Assay for Angiogenesis Study
Authors: Zhongshu Tang, Fan Zhang, Yang Li, Pachiappan Arjunan, Anil Kumar, Chunsik Lee, Xuri Li.
Institutions: National Eye Institute.
A normal cornea is clear of vascular tissues. However, blood vessels can be induced to grow and survive in the cornea when potent angiogenic factors are administered 1. This uniqueness has made the cornea pocket assay one of the most used models for angiogenesis studies. The cornea comprises multiple layers of cells. It is therefore possible to embed a pellet containing the angiogenic factor of interest in the cornea to investigate its angiogenic effect 2,3. Here, we provide a step-by-step demonstration of how to (I) produce the angiogenic factor-containing pellet, (II) embed the pellet into the cornea, and (III) analyze the angiogenesis induced by the angiogenic factor of interest. Since basic fibroblast growth factor (bFGF) is known as one of the most potent angiogenic factors 4, it is used here to induce angiogenesis in the cornea.
Medicine, Issue 54, mouse cornea pocket assay, angiogenesis
3077
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
3358
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: the midline shift estimation and intracranial pressure (ICP) pre-screening system. To estimate the midline shift, first an estimation of the ideal midline is performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP, such as texture information and the blood amount estimated from the CT scans, are extracted and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
3871
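The paper's prediction model is built in RapidMiner; the following is a rough scikit-learn analogue of the same idea (feature scaling plus an SVM classifier evaluated by cross-validation) trained on synthetic data. The feature set, the synthetic labels and the parameters are fabricated for illustration only and do not reproduce the authors' features or results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 200
features = np.column_stack([
    rng.normal(0.5, 0.15, n),   # texture statistic (synthetic)
    rng.gamma(2.0, 2.0, n),     # blood amount, ml (synthetic)
    rng.uniform(18, 80, n),     # age, years (synthetic)
    rng.integers(1, 75, n),     # injury severity score (synthetic)
])
# Synthetic rule for "elevated ICP" labels, purely to make the demo run.
labels = (features[:, 1] + 0.05 * features[:, 3] > 6).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```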
An Analytical Tool that Quantifies Cellular Morphology Changes from Three-dimensional Fluorescence Images
Authors: Carolina L. Haass-Koffler, Mohammad Naeemuddin, Selena E. Bartlett.
Institutions: University of California, San Francisco, Queensland University of Technology, Brisbane, Australia.
The most common software analysis tools available for measuring fluorescence images are for two-dimensional (2D) data that rely on manual settings for inclusion and exclusion of data points, and computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to be able to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks that then provide a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed from the approximation and assumptions of the original model-based stereology1 even in complex tissue sections2. Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (tree-like structures). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells3; however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell in a scientific project with the Eidgenössische Technische Hochschule, designed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous because it ideally builds a cell surface without void spaces. To our knowledge, at present no user-modifiable automated approach has been developed that provides morphometric information from 3D fluorescence images and captures the spatial information of cells of undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extended expertise in biological systems, but not familiarity to computer applications, to perform quantification of morphological changes in cell dynamics.
Cellular Biology, Issue 66, 3-dimensional, microscopy, quantification, morphometric, single-cell, cell dynamics
4233
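As a minimal stand-in for the kind of 3D morphometric measurement described above, and not a reproduction of the authors' Imaris XT/MATLAB platform, the sketch below thresholds a synthetic 3D stack, labels connected objects in three dimensions and reports their volumes and centroids. The stack, threshold and voxel calibration are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
stack = rng.random((40, 256, 256)) * 0.2          # z, y, x background
stack[10:20, 60:100, 60:110] += 0.9               # synthetic cell body
stack[15:18, 100:180, 80:84] += 0.9               # thin process attached to it

mask = stack > 0.5
labels, n_objects = ndimage.label(mask)           # 3D connectivity

voxel_volume = 0.3 * 0.1 * 0.1                    # um^3 (assumed calibration)
indices = range(1, n_objects + 1)
voxel_counts = ndimage.sum(mask, labels, index=indices)
centroids = ndimage.center_of_mass(mask, labels, indices)

for i, (count, com) in enumerate(zip(np.atleast_1d(voxel_counts), centroids), 1):
    z, y, x = com
    print(f"object {i}: {count * voxel_volume:.1f} um^3 at "
          f"(z={z:.0f}, y={y:.0f}, x={x:.0f})")
```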
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
4375
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on the average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
50341
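The detection method above starts by analyzing oriented tissue patterns with Gabor filters. The sketch below builds a small bank of complex Gabor kernels and recovers the dominant orientation of a synthetic sinusoidal texture; the kernel parameters and test pattern are illustrative and unrelated to the mammographic data and filter design used in the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel with an isotropic Gaussian envelope,
    modulated along angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * x_theta / wavelength)

# Synthetic test patch: a sinusoidal "tissue pattern" whose spatial
# frequency vector points at 30 degrees.
size = 64
yy, xx = np.mgrid[0:size, 0:size]
angle = np.deg2rad(30)
patch = np.cos(2 * np.pi * (xx * np.cos(angle) + yy * np.sin(angle)) / 8.0)

# Filter bank over orientations; the strongest response magnitude marks
# the dominant local orientation (all parameters are illustrative).
orientations = np.deg2rad(np.arange(0, 180, 15))
responses = []
for theta in orientations:
    kernel = gabor_kernel(31, wavelength=8.0, theta=theta, sigma=6.0)
    offset = (size - kernel.shape[0]) // 2
    window = patch[offset:offset + kernel.shape[0],
                   offset:offset + kernel.shape[1]]
    responses.append(abs((window * kernel).sum()))

best = orientations[int(np.argmax(responses))]
print(f"estimated dominant orientation: {np.degrees(best):.0f} degrees")
```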
Fluorescence Biomembrane Force Probe: Concurrent Quantitation of Receptor-ligand Kinetics and Binding-induced Intracellular Signaling on a Single Cell
Authors: Yunfeng Chen, Baoyu Liu, Lining Ju, Jinsung Hong, Qinghua Ji, Wei Chen, Cheng Zhu.
Institutions: Georgia Institute of Technology, The University of Sydney, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Zhejiang University.
Membrane receptor-ligand interactions mediate many cellular functions. Binding kinetics and downstream signaling triggered by these molecular interactions are likely affected by the mechanical environment in which binding and signaling take place. A recent study demonstrated that mechanical force can regulate antigen recognition by and triggering of the T-cell receptor (TCR). This was made possible by a new technology we developed and termed fluorescence biomembrane force probe (fBFP), which combines single-molecule force spectroscopy with fluorescence microscopy. Using an ultra-soft human red blood cell as the sensitive force sensor, together with a high-speed camera and real-time image tracking techniques, the fBFP achieves ~1 pN (10^-12 N) force resolution, ~3 nm spatial resolution and ~0.5 msec temporal resolution. With the fBFP, one can precisely measure single receptor-ligand binding kinetics under force regulation and simultaneously image binding-triggered intracellular calcium signaling on a single live cell. This new technology can be used to study other membrane receptor-ligand interactions and signaling in other cells under mechanical regulation.
Bioengineering, Issue 102, single cell, single molecule, receptor-ligand binding, kinetics, fluorescence and force spectroscopy, adhesion, mechano-transduction, calcium
52975

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply does not contain content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.