JoVE Visualize
 
Pubmed Article
StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.
PLoS ONE
PUBLISHED: 03-24-2015
Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. The innovative, value-based idea is realized on the basis of stakeholder requirements. The quality of a VBS system depends on a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant, valuable stakeholders participate in the requirements elicitation phase. Existing value-based approaches focus on the design of VBS systems, but their attention to valuable stakeholders and requirements is inadequate. Current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems; they are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, a key motivation for this research is that existing SIQ approaches provide neither low-level implementation details for SIQ initiation nor stakeholder metrics for quantification. Hence, in view of these problems, this research contributes a new SIQ framework called 'StakeMeter', which is verified and validated through case studies. Unlike other methods, the proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and an application procedure. It addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation, and it helps in the selection of highly critical stakeholders for VBS systems with less judgmental error.
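The abstract does not reproduce StakeMeter's actual attributes or metrics, but the general idea of quantifying stakeholders against weighted criteria can be sketched as follows. All attribute names, weights, and ratings below are illustrative assumptions, not the framework's published values.

```python
# Hypothetical weighted-scoring sketch for stakeholder quantification.
# Attribute names and weights are illustrative, not StakeMeter's actual metrics.

ATTRIBUTE_WEIGHTS = {
    "domain_knowledge": 0.30,
    "decision_authority": 0.25,
    "project_interest": 0.25,
    "availability": 0.20,
}

def stakeholder_score(ratings: dict[str, float]) -> float:
    """Combine per-attribute ratings (0-10 scale) into a single score."""
    return sum(ATTRIBUTE_WEIGHTS[attr] * ratings[attr] for attr in ATTRIBUTE_WEIGHTS)

stakeholders = {
    "product_owner": {"domain_knowledge": 9, "decision_authority": 8,
                      "project_interest": 9, "availability": 6},
    "end_user_rep":  {"domain_knowledge": 7, "decision_authority": 3,
                      "project_interest": 8, "availability": 9},
}

# Rank stakeholders so the most critical participate in elicitation first.
ranked = sorted(((name, stakeholder_score(r)) for name, r in stakeholders.items()),
                key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```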
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Published: 07-25-2013
ABSTRACT
The aim of de novo protein design is to find amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
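The staged, rank-ordered flow the abstract describes (energy-based sequence selection followed by re-ranking in later stages) can be sketched in miniature. The scoring functions below are placeholders, not the Protein WISDOM models; the sketch only illustrates the pipeline structure.

```python
# Minimal sketch of the staged-ranking idea behind a design pipeline like
# Protein WISDOM: score candidates with a (mock) potential-energy function,
# keep the best, then re-rank survivors in a later stage.
# Scoring functions are placeholders, not the server's actual models.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mock_energy(seq: str) -> float:
    """Stand-in for a potential-energy evaluation on a fixed template."""
    random.seed(seq)            # deterministic per sequence for reproducibility
    return random.uniform(-100.0, 0.0)

def mock_fold_specificity(seq: str) -> float:
    random.seed(seq + "_fold")
    return random.random()

def design(n_candidates: int = 1000, keep: int = 50) -> list[str]:
    candidates = ["".join(random.choices(AMINO_ACIDS, k=20)) for _ in range(n_candidates)]
    # Stage 1: sequence selection - keep the lowest-energy set.
    stage1 = sorted(candidates, key=mock_energy)[:keep]
    # Stage 2: re-rank survivors by fold specificity (higher is better).
    return sorted(stage1, key=mock_fold_specificity, reverse=True)

if __name__ == "__main__":
    for seq in design()[:5]:
        print(seq, f"E={mock_energy(seq):.1f}", f"spec={mock_fold_specificity(seq):.2f}")
```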
24 Related JoVE Articles!
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from normal, by building it up molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics to the sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems that result in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
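The achievable resolution in localization microscopy is set largely by the per-molecule localization precision. A commonly used back-of-envelope estimate is the Thompson-Larson-Webb (2002) approximation; rainSTORM's internal quality metrics may differ, so this sketch only illustrates how precision scales with photon count, pixel size, and background.

```python
# Localization-precision estimate via the Thompson-Larson-Webb approximation.
import math

def localization_precision(s_nm: float, a_nm: float, N: int, b: float) -> float:
    """
    s_nm: PSF standard deviation (nm); a_nm: pixel size (nm);
    N: detected photons; b: background noise (photons/pixel).
    Returns the estimated localization precision in nm.
    """
    var = (s_nm**2 + a_nm**2 / 12) / N + (8 * math.pi * s_nm**4 * b**2) / (a_nm**2 * N**2)
    return math.sqrt(var)

# Plausible dSTORM values: ~150 nm PSF sigma, 100 nm pixels, 1000 photons.
print(f"{localization_precision(150, 100, 1000, 2):.1f} nm")
```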
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse fields, are applying phylogenetics to their research questions. However, many of these researchers are new to the topic, and this presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision for this article is that it could serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
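The tail end of such a pipeline can be scripted. Below is a minimal Biopython sketch that builds a quick neighbor-joining tree from an existing multiple sequence alignment; it assumes a pre-aligned FASTA file ("my_alignment.fasta" is a placeholder) and does not replace the dedicated tools (BLAST, model testing, ML/Bayesian inference) the article covers.

```python
# Quick neighbor-joining tree from a pre-computed alignment, using Biopython.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("my_alignment.fasta", "fasta")   # pre-aligned sequences

calculator = DistanceCalculator("identity")    # simple identity-based distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)         # neighbor-joining topology

Phylo.draw_ascii(tree)                         # quick text visualization
Phylo.write(tree, "my_tree.nwk", "newick")     # save for FigTree/iTOL etc.
```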
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, giving rise to regulatory concerns in the context of good manufacturing practice. We used a design-of-experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
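The combinatorial bookkeeping behind a DoE screening design can be sketched as below. Real DoE software generates fractional or D-optimal designs rather than the random subset used here, and the factor names and levels are illustrative assumptions, not the study's actual design.

```python
# Enumerate a full factorial over a few plausible factors, then subsample it
# into a reduced screening set. Factor names/levels are illustrative only.
import itertools
import random

factors = {
    "promoter":        ["CaMV35S", "nos"],
    "incubation_temp": [22, 25, 28],          # degrees C
    "leaf_age":        ["young", "middle", "old"],
}

full_factorial = list(itertools.product(*factors.values()))
print(f"full factorial: {len(full_factorial)} runs")

random.seed(1)
screening_runs = random.sample(full_factorial, k=8)   # reduced experiment set
for run in screening_runs:
    print(dict(zip(factors, run)))
```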
Transcript and Metabolite Profiling for the Evaluation of Tobacco Tree and Poplar as Feedstock for the Bio-based Industry
Authors: Colin Ruprecht, Takayuki Tohge, Alisdair Fernie, Cara L. Mortimer, Amanda Kozlo, Paul D. Fraser, Norma Funke, Igor Cesarino, Ruben Vanholme, Wout Boerjan, Kris Morreel, Ingo Burgert, Notburga Gierlinger, Vincent Bulone, Vera Schneider, Andrea Stockero, Juan Navarro-Aviñó, Frank Pudel, Bart Tambuyser, James Hygate, Jon Bumstead, Louis Notley, Staffan Persson.
Institutions: Max Planck Institute for Molecular Plant Physiology, Royal Holloway, University of London, VIB, UGhent, ETH Zurich, EMPA, Royal Institute of Technology (KTH), European Research and Project Office GmbH, ABBA Gaia S.L., Pflanzenöltechnologie, Capax Environmental Services, Green Fuels, Neutral Consulting Ltd, University of Melbourne.
The global demand for food, feed, energy, and water poses extraordinary challenges for future generations. It is evident that robust platforms for the exploration of renewable resources are necessary to overcome these challenges. Within the multinational framework MultiBioPro we are developing biorefinery pipelines to maximize the use of plant biomass. More specifically, we use poplar and tobacco tree (Nicotiana glauca) as target crop species for improving saccharification, isoprenoid and long chain hydrocarbon contents, fiber quality, and suberin and lignin contents. The methods used to obtain these outputs include GC-MS, LC-MS and RNA sequencing platforms. The metabolite pipelines are well-established tools for generating these types of data, but are limited in that only well-characterized metabolites can be assessed. Deep sequencing allows us to include all transcripts present during the developmental stages of the tobacco tree leaf, although the reads have to be mapped back to the sequence of Nicotiana tabacum. With these set-ups, we aim at a basic understanding of the underlying processes and at establishing an industrial framework to exploit the outcomes. In a longer-term perspective, we believe that the data generated here will provide the means for a sustainable biorefinery process using poplar and tobacco tree as raw materials. To date, the basal levels of metabolites in the samples have been analyzed, and the protocols utilized are provided in this article.
Environmental Sciences, Issue 87, botany, plants, Biorefining, Poplar, Tobacco tree, Arabidopsis, suberin, lignin, cell walls, biomass, long-chain hydrocarbons, isoprenoids, Nicotiana glauca, systems biology
Scalable High Throughput Selection From Phage-displayed Synthetic Antibody Libraries
Authors: Shane Miersch, Zhijian Li, Rachel Hanna, Megan E. McLaughlin, Michael Hornsby, Tet Matsuguchi, Marcin Paduch, Annika Sääf, Jim Wells, Shohei Koide, Anthony Kossiakoff, Sachdev S. Sidhu.
Institutions: The Recombinant Antibody Network, University of Toronto, University of California, San Francisco at Mission Bay, The University of Chicago.
The demand for antibodies that fulfill the needs of both basic and clinical research applications is high and will dramatically increase in the future. However, it is apparent that traditional monoclonal technologies alone are not up to this task. This has led to the development of alternate methods to satisfy the demand for high-quality and renewable affinity reagents to all accessible elements of the proteome. Toward this end, high throughput methods for conducting selections from phage-displayed synthetic antibody libraries have been devised for applications involving diverse antigens and optimized for rapid throughput and success. Herein, a protocol is described in detail that illustrates, with video demonstration, the parallel selection of Fab-phage clones from high-diversity libraries against hundreds of targets using either a manual 96-channel liquid handler or an automated robotics system. Using this protocol, a single user can generate hundreds of antigens, select antibodies to them in parallel, and validate antibody binding within 6-8 weeks. Highlighted are: i) a viable antigen format, ii) pre-selection antigen characterization, iii) critical steps that influence the selection of specific and high-affinity clones, and iv) ways of monitoring selection effectiveness and early-stage antibody clone characterization. With this approach, we have obtained synthetic antibody fragments (Fabs) to many target classes including single-pass membrane receptors, secreted protein hormones, and multi-domain intracellular proteins. These fragments are readily converted to full-length antibodies and have been validated to exhibit high affinity and specificity. Further, they have been demonstrated to be functional in a variety of standard immunoassays including Western blotting, ELISA, cellular immunofluorescence, immunoprecipitation and related assays. This methodology will accelerate antibody discovery and ultimately bring us closer to realizing the goal of generating renewable, high-quality antibodies to the proteome.
Immunology, Issue 95, Bacteria, Viruses, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Life Sciences (General), phage display, synthetic antibodies, high throughput, antibody selection, scalable methodology
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
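A minimal flavor of approach (3), a semi-automated segmentation, is a global threshold followed by connected-component labeling on one 2D slice. Real EM volumes usually need denoising, contrast normalization, and 3D-aware tools; the file name below is a placeholder.

```python
# Semi-automated segmentation sketch: Otsu threshold + connected components.
import numpy as np
from skimage import io, filters, measure, morphology

slice_2d = io.imread("tomogram_slice.tif").astype(float)   # placeholder file

threshold = filters.threshold_otsu(slice_2d)
binary = slice_2d > threshold                               # stain assumed bright
binary = morphology.remove_small_objects(binary, min_size=64)  # drop noise specks

labels = measure.label(binary)                              # connected components
props = measure.regionprops(labels)
print(f"{labels.max()} segmented features; largest = {max(p.area for p in props)} px")
```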
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
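The binarize-and-skeletonize step at the heart of the quantification can be sketched with scikit-image. This assumes a single 2D confocal image of membrane staining ("tats_image.tif" is a placeholder); the published workflow adds careful denoising and derives richer network descriptors than the two crude ones shown here.

```python
# Binarization and skeletonization sketch for tubule-network quantification.
import numpy as np
from skimage import io, filters, morphology

img = io.imread("tats_image.tif").astype(float)   # placeholder file name

binary = img > filters.threshold_otsu(img)        # binarized TATS mask
skeleton = morphology.skeletonize(binary)         # 1-pixel-wide network

# Crude descriptors: total skeleton length (px) and skeleton-to-mask ratio.
total_length_px = int(skeleton.sum())
ratio = skeleton.sum() / binary.sum() if binary.sum() else 0.0
print(f"skeleton length: {total_length_px} px, length/area ratio: {ratio:.3f}")
```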
Workflow for High-content, Individual Cell Quantification of Fluorescent Markers from Universal Microscope Data, Supported by Open Source Software
Authors: Simon R. Stockwell, Sibylle Mittnacht.
Institutions: UCL Cancer Institute.
Advances in understanding the control mechanisms governing the behavior of cells in adherent mammalian tissue culture models are becoming increasingly dependent on modes of single-cell analysis. Methods which deliver composite data reflecting the mean values of biomarkers from cell populations risk losing subpopulation dynamics that reflect the heterogeneity of the studied biological system. In keeping with this, traditional approaches are being replaced by, or supported with, more sophisticated forms of cellular assay developed to allow assessment by high-content microscopy. These assays potentially generate large numbers of images of fluorescent biomarkers which, when processed by accompanying proprietary software packages, allow multi-parametric measurements per cell. However, the relatively high capital costs and overspecialization of many of these devices have prevented their accessibility to many investigators. Described here is a universally applicable workflow for the quantification of multiple fluorescent marker intensities from specific subcellular regions of individual cells, suitable for use with images from most fluorescent microscopes. Key to this workflow is the implementation of the freely available CellProfiler software to distinguish individual cells in these images, segment them into defined subcellular regions and deliver fluorescence marker intensity values specific to these regions. The extraction of individual cell intensity values from image data is the central purpose of this workflow and will be illustrated with the analysis of control data from a siRNA screen for G1 checkpoint regulators in adherent human cells. However, the workflow presented here can be applied to analysis of data from other means of cell perturbation (e.g., compound screens) and other forms of fluorescence-based cellular markers and thus should be useful for a wide range of laboratories.
Cellular Biology, Issue 94, Image analysis, High-content analysis, Screening, Microscopy, Individual cell analysis, Multiplexed assays
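The core measurement this workflow automates, per-cell marker intensity within a segmented region, can be sketched in a few lines. CellProfiler performs the segmentation far more robustly than this Otsu-based stand-in, and the file names are placeholders.

```python
# Per-cell mean marker intensity within nuclear masks (illustrative sketch).
from skimage import io, filters, measure, morphology
from scipy import ndimage as ndi

nuclei = io.imread("dapi_channel.tif")     # nuclear stain, used for segmentation
marker = io.imread("marker_channel.tif")   # biomarker channel to quantify

mask = nuclei > filters.threshold_otsu(nuclei)
mask = ndi.binary_fill_holes(mask)
labels = measure.label(morphology.remove_small_objects(mask, 100))

# One row per cell: label and mean marker intensity inside that nucleus.
for cell in measure.regionprops(labels, intensity_image=marker):
    print(cell.label, f"{cell.mean_intensity:.1f}")
```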
Stereological and Flow Cytometry Characterization of Leukocyte Subpopulations in Models of Transient or Permanent Cerebral Ischemia
Authors: Iván Ballesteros, María Isabel Cuartero, Ana Moraga, Juan de la Parra, Ignacio Lizasoain, María Ángeles Moro.
Institutions: Universidad Complutense de Madrid and Instituto de Investigación Hospital 12 de Octubre, Madrid.
Microglia activation, as well as extravasation of haematogenous macrophages and neutrophils, is believed to play a pivotal role in brain injury after stroke. These myeloid cell subpopulations can display different phenotypes and functions and need to be distinguished and characterized to study their regulation and contribution to tissue damage. This protocol provides two different methodologies for brain immune cell characterization: a precise stereological approach and a flow cytometric analysis. The stereological approach is based on the optical fractionator method, which calculates the total number of cells in an area of interest (infarcted brain) estimated by a systematic random sampling. The second characterization approach provides a simple way to isolate brain leukocyte suspensions and to characterize them by flow cytometry, allowing for the characterization of microglia, infiltrated monocytes and neutrophils of the ischemic tissue. In addition, it details a cerebral ischemia model in mice that exclusively affects the brain cortex, generating highly reproducible infarcts with a low mortality rate, as well as the procedure for histological brain processing to characterize infarct volume by the Cavalieri method.
Medicine, Issue 94, Brain ischemia, myeloid cells, middle cerebral artery occlusion (MCAO), stereology, optical fractionator, flow cytometry, infiltration
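The optical fractionator estimate itself is simple arithmetic: the raw count is scaled by the inverse of each sampling fraction. The numbers in this worked example are illustrative only.

```python
# Optical fractionator estimator: total count = raw count / sampling fractions.
def optical_fractionator(q_minus: int, ssf: float, asf: float, hsf: float) -> float:
    """
    q_minus: cells actually counted in the disectors (sum of Q-)
    ssf: section sampling fraction (e.g., every 6th section -> 1/6)
    asf: area sampling fraction (counting-frame area / grid-step area)
    hsf: height sampling fraction (disector height / section thickness)
    """
    return q_minus * (1 / ssf) * (1 / asf) * (1 / hsf)

N = optical_fractionator(q_minus=250, ssf=1 / 6, asf=0.04, hsf=0.7)
print(f"estimated total cells in region: {N:,.0f}")   # ~53,571 here
```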
Synthesis and Characterization of Functionalized Metal-organic Frameworks
Authors: Olga Karagiaridi, Wojciech Bury, Amy A. Sarjeant, Joseph T. Hupp, Omar K. Farha.
Institutions: Northwestern University, Warsaw University of Technology, King Abdulaziz University.
Metal-organic frameworks have attracted extraordinary amounts of research attention, as they are attractive candidates for numerous industrial and technological applications. Their signature property is their ultrahigh porosity, which, however, imparts a series of challenges when it comes to both constructing them and working with them. Securing desired MOF chemical and physical functionality by linker/node assembly into a highly porous framework of choice can pose difficulties, as less porous and more thermodynamically stable congeners (e.g., other crystalline polymorphs, catenated analogues) are often preferentially obtained by conventional synthesis methods. Once the desired product is obtained, its characterization often requires specialized techniques that address complications potentially arising from, for example, guest-molecule loss or preferential orientation of microcrystallites. Finally, accessing the large voids inside the MOFs for use in applications that involve gases can be problematic, as frameworks may be subject to collapse during removal of solvent molecules (remnants of solvothermal synthesis). In this paper, we describe synthesis and characterization methods routinely utilized in our lab either to solve or circumvent these issues. The methods include solvent-assisted linker exchange, powder X-ray diffraction in capillaries, and materials activation (cavity evacuation) by supercritical CO2 drying. Finally, we provide a protocol for determining a suitable pressure region for applying the Brunauer-Emmett-Teller analysis to nitrogen isotherms, so as to estimate the surface area of MOFs with good accuracy.
Chemistry, Issue 91, Metal-organic frameworks, porous coordination polymers, supercritical CO2 activation, crystallography, solvothermal, sorption, solvent-assisted linker exchange
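The BET analysis mentioned in the protocol linearizes a few isotherm points: 1/[v(p0/p - 1)] plotted against p/p0 should be linear, yielding the monolayer capacity and hence the surface area. The data below are invented, and, as the protocol stresses, the fitting pressure range must be chosen carefully (e.g., via consistency criteria) for MOFs.

```python
# Linearized BET fit on a few (made-up) nitrogen-isotherm points.
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25])     # p/p0, illustrative
v_ads = np.array([110., 125., 135., 143., 150.])     # cm3(STP)/g, illustrative

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)

v_m = 1.0 / (slope + intercept)                      # monolayer capacity, cm3(STP)/g
c = 1.0 + slope / intercept                          # BET constant

# 1 cm3(STP) of N2 = 1/22414 mol; sigma(N2) = 0.162 nm^2 per molecule.
N_A, sigma_nm2 = 6.022e23, 0.162
area_m2_per_g = v_m / 22414.0 * N_A * sigma_nm2 * 1e-18
print(f"v_m = {v_m:.1f} cm3/g, C = {c:.0f}, S_BET = {area_m2_per_g:.0f} m2/g")
```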
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
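The "more homogeneous intensity distribution" of a higher-order LG mode can be visualized from the standard mode expression. The sketch below computes the transverse intensity profile of an LG(p, l) mode (normalization constants omitted); the choice of LG33 as the example mode is an assumption based on its prominence in this research area.

```python
# Transverse intensity profile of a Laguerre-Gauss LG(p, l) mode (unnormalized).
import numpy as np
from scipy.special import genlaguerre

def lg_intensity(r: np.ndarray, p: int, l: int, w: float = 1.0) -> np.ndarray:
    """Radial intensity of an LG(p, l) mode with beam radius w (arb. units)."""
    x = 2 * r**2 / w**2
    amp = np.sqrt(x) ** abs(l) * genlaguerre(p, abs(l))(x) * np.exp(-x / 2)
    return amp**2

r = np.linspace(0, 3, 500)
higher_order = lg_intensity(r, p=3, l=3)   # e.g., LG33
fundamental = lg_intensity(r, p=0, l=0)    # Gaussian mode for comparison

print("LG33 brightest ring at r =", round(float(r[np.argmax(higher_order)]), 2))
print("LG00 peak at r =", round(float(r[np.argmax(fundamental)]), 2))  # center
```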
Investigating the Microbial Community in the Termite Hindgut - Interview
Authors: Jared Leadbetter.
Institutions: California Institute of Technology - Caltech.
Jared Leadbetter explains why the termite-gut microbial community is an excellent system for studying the complex interactions between microbes. The symbiotic relationship existing between the host insect and lignocellulose-degrading gut microbes is explained, as well as the industrial uses of these microbes for degrading plant biomass and generating biofuels.
Microbiology, Issue 4, microbial community, diversity
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e., voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS), in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e., differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
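The FA metric at the center of these comparisons is computed per voxel from the three eigenvalues of the diffusion tensor. A worked example with illustrative eigenvalues:

```python
# Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues.
import numpy as np

def fractional_anisotropy(eigvals) -> float:
    """FA from the diffusion-tensor eigenvalues (any consistent units)."""
    l = np.asarray(eigvals, dtype=float)
    md = l.mean()                                   # mean diffusivity
    return float(np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l**2)))

print(fractional_anisotropy([1.7, 0.3, 0.3]))  # elongated tensor: FA ~ 0.8
print(fractional_anisotropy([1.0, 1.0, 1.0]))  # isotropic tensor: FA = 0.0
```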
Reaggregate Thymus Cultures
Authors: Andrea White, Eric Jenkinson, Graham Anderson.
Institutions: University of Birmingham.
Stromal cells within lymphoid tissues are organized into three-dimensional structures that provide a scaffold that is thought to control the migration and development of haemopoietic cells. Importantly, the maintenance of this three-dimensional organization appears to be critical for normal stromal cell function, with two-dimensional monolayer cultures often being shown to be capable of supporting only individual fragments of lymphoid tissue function. In the thymus, complex networks of cortical and medullary epithelial cells act as a framework that controls the recruitment, proliferation, differentiation and survival of lymphoid progenitors as they undergo the multi-stage process of intrathymic T-cell development. Understanding the functional role of individual stromal compartments in the thymus is essential in determining how the thymus imposes self/non-self discrimination. Here we describe a technique in which we exploit the plasticity of fetal tissues to re-associate into intact three-dimensional structures in vitro, following their enzymatic disaggregation. The dissociation of fetal thymus lobes into heterogeneous cellular mixtures, followed by their separation into individual cellular components, is then combined with the in vitro re-association of these desired cell types into three-dimensional reaggregate structures at defined ratios, thereby providing an opportunity to investigate particular aspects of T-cell development under defined cellular conditions. (This article is based on work first reported in Methods in Molecular Biology 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development
Preparation of 2-dGuo-Treated Thymus Organ Cultures
Authors: William Jenkinson, Eric Jenkinson, Graham Anderson.
Institutions: University of Birmingham.
In the thymus, interactions between developing T-cell precursors and stromal cells that include cortical and medullary epithelial cells are known to play a key role in the development of a functionally competent T-cell pool. However, the complexity of T-cell development in the thymus in vivo can limit analysis of individual cellular components and particular stages of development. In vitro culture systems provide a readily accessible means to study multiple complex cellular processes. Thymus organ culture systems represent a widely used approach to study intrathymic development of T-cells under defined conditions in vitro. Here we describe a system in which mouse embryonic thymus lobes can be depleted of endogenous haemopoietic elements by prior organ culture in 2-deoxyguanosine, a compound that is selectively toxic to haemopoietic cells. As well as providing a readily accessible source of thymic stromal cells to investigate the role of thymic microenvironments in the development and selection of T-cells, this technique also underpins further experimental approaches that include the reconstitution of alymphoid thymus lobes in vitro with defined haemopoietic elements, the transplantation of alymphoid thymuses into recipient mice, and the formation of reaggregate thymus organ cultures. (This article is based on work first reported in Methods in Molecular Biology 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development
Determining the Reactivity and Titre of Serum using a Haemagglutination Assay
Authors: Maurizio Costabile.
Institutions: University of South Australia.
Haemagglutination is a specific form of agglutination that occurs when antibodies bind to red blood cells, which act as a particulate antigen. Red blood cells are particularly useful targets as they are readily available and agglutination is observable with the naked eye. This technique is commonly used to determine the titre of an antibody (Ab), for blood grouping, and for viral quantification. In this video, the steps involved in preparing and performing a haemagglutination assay are demonstrated using antibodies specific to blood group A antigens added to red blood cells (Revercells). The antiserum is serially diluted in a 96-well U-bottom microtitre tray, to which a suspension of Revercells is added. The samples are mixed and then incubated at 37°C for 60 minutes. After this time, the samples can be easily scored as negative (-ve), positive (+ve) or intermediate (-/+) haemagglutination reactions. This approach allows the reactivity and titre of a serum sample to be assessed using a rapid and simple technique. The video covers the theory behind the assay, how the results are read and interpreted, how the titre is determined, how the assay can be modified, and issues associated with the use of this technique.
JoVE Immunology, Issue 35, Haemagglutination, Titre, Reactivity, Ag-Ab complex
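Reading the titre off a dilution series is simple arithmetic: the titre is the reciprocal of the highest dilution that still gives a positive reaction. The well scores in this sketch are invented.

```python
# Titre = reciprocal of the highest dilution with a positive (+ve) reaction.
def titre(dilutions: list[int], scores: list[str]) -> int:
    """dilutions: reciprocal dilution per well (e.g., 2, 4, 8, ...);
    scores: '+', '-' or '+/-' per well, in the same order."""
    positives = [d for d, s in zip(dilutions, scores) if s == "+"]
    return max(positives) if positives else 0

wells  = [2, 4, 8, 16, 32, 64, 128, 256]
scores = ["+", "+", "+", "+", "+", "+/-", "-", "-"]
print("titre = 1 in", titre(wells, scores))   # -> titre = 1 in 32
```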
Measuring the 50% Haemolytic Complement (CH50) Activity of Serum
Authors: Maurizio Costabile.
Institutions: University of South Australia.
The complement system is a group of proteins that, when activated, lead to target cell lysis and facilitate phagocytosis through opsonisation. Individual complement components can be quantified; however, this does not provide any information as to the activity of the pathway. The CH50 is a screening assay for the activation of the classical complement pathway, and it is sensitive to the reduction, absence and/or inactivity of any component of the pathway. The CH50 tests the functional capability of serum complement components of the classical pathway to lyse sheep red blood cells (SRBC) pre-coated with rabbit anti-sheep red blood cell antibody (haemolysin). When antibody-coated SRBC are incubated with test serum, the classical pathway of complement is activated and haemolysis results. If a complement component is absent, the CH50 level will be zero; if one or more components of the classical pathway are decreased, the CH50 will be decreased. A fixed volume of optimally sensitised SRBC is added to each serum dilution. After incubation, the mixture is centrifuged and the degree of haemolysis is quantified by measuring the absorbance of the haemoglobin released into the supernatant at 540 nm. The amount of complement activity is determined by examining the capacity of various dilutions of test serum to lyse antibody-coated SRBC. This video outlines the experimental steps involved in analysing the level of complement activity of the classical complement pathway.
Immunology, Issue 37, Classical pathway, Complement, Haemolysis, sheep red blood cells, haemoglobin
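The CH50 is commonly extracted from the dilution series via the von Krogh linearization: plotting log(y/(1-y)) against log(serum dose) gives a straight line, and the dose at which y/(1-y) = 1 corresponds to 50% lysis. The haemolysis fractions below are invented for illustration.

```python
# Von Krogh linearization sketch: find the serum dose giving 50% lysis.
import numpy as np

serum_ul = np.array([10., 20., 40., 80.])      # serum per tube, illustrative
lysis = np.array([0.15, 0.35, 0.65, 0.88])     # fraction of SRBC lysed

x = np.log10(serum_ul)
y = np.log10(lysis / (1 - lysis))

slope, intercept = np.polyfit(x, y, 1)
ch50_dose = 10 ** (-intercept / slope)         # dose where y/(1-y) = 1
print(f"CH50 ~ {ch50_dose:.1f} uL serum for 50% lysis")
```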
Using Learning Outcome Measures to assess Doctoral Nursing Education
Authors: Glenn H. Raup, Jeff King, Romana J. Hughes, Natasha Faidley.
Institutions: Harris College of Nursing and Health Sciences, Texas Christian University.
Education programs at all levels must be able to demonstrate successful program outcomes. Grades alone do not represent a comprehensive measurement methodology for assessing student learning outcomes at either the course or program level. The development and application of assessment rubrics provides an unequivocal measurement methodology to ensure a quality learning experience by providing a foundation for improvement based on qualitatively and quantitatively measurable, aggregate course and program outcomes. Learning outcomes are the embodiment of the total learning experience and should incorporate assessment of both qualitative and quantitative program outcomes. The assessment of qualitative measures represents a challenge for educators at any level of a learning program. Nursing provides a unique challenge and opportunity, as it is the application of science through the art of caring. Quantification of desired student learning outcomes may be enhanced through the development of assessment rubrics designed to measure the quantitative and qualitative aspects of the nursing education and learning process. Rubrics provide a mechanism for uniform assessment by nursing faculty of concepts and constructs that are otherwise difficult to describe and measure. A protocol is presented and applied to a doctoral nursing education program, with recommendations for application and transformation of the assessment rubric to other education programs. Through application of these specially designed rubrics, all aspects of an education program can be adequately assessed to provide information for program assessment that facilitates closing the gap between desired and actual student learning outcomes for any desired educational competency.
Medicine, Issue 40, learning, outcomes, measurement, program, assessment, rubric
Strategies for Study of Neuroprotection from Cold-preconditioning
Authors: Heidi M. Mitchell, David M. White, Richard P. Kraig.
Institutions: The University of Chicago Medical Center.
Neurological injury is a frequent cause of morbidity and mortality from general anesthesia and related surgical procedures that could be alleviated by development of effective, easy-to-administer and safe preconditioning treatments. We seek to define the neural immune signaling responsible for cold-preconditioning as a means to identify novel targets for therapeutics development to protect brain before injury onset. Low-level pro-inflammatory mediator signaling changes over time are essential for cold-preconditioning neuroprotection. This signaling is consistent with the basic tenets of physiological conditioning hormesis, which require that irritative stimuli reach a threshold magnitude with sufficient time for adaptation to the stimuli for protection to become evident. Accordingly, delineation of the immune signaling involved in cold-preconditioning neuroprotection requires that biological systems, experimental manipulations, and technical capacities be highly reproducible and sensitive. Our approach is to use hippocampal slice cultures, an in vitro model that closely reflects its in vivo counterpart, with multi-synaptic neural networks influenced by mature and quiescent macroglia/microglia. This glial state is particularly important for microglia since they are the principal source of cytokines, which are operative in the femtomolar range. Also, slice cultures can be maintained in vitro for several weeks, which is sufficient time to evoke activating stimuli and assess adaptive responses. Finally, environmental conditions can be accurately controlled using slice cultures so that cytokine signaling of cold-preconditioning can be measured, mimicked, and modulated to dissect the critical node aspects. Cytokine signaling system analyses require the use of sensitive and reproducible multiplexed techniques. We use quantitative PCR for TNF-α to screen for microglial activation, followed by real-time qPCR array screening to assess tissue-wide cytokine changes. The latter is a highly sensitive and reproducible means to measure multiple cytokine system signaling changes simultaneously. Significant changes are confirmed with targeted qPCR and then protein detection. We probe for tissue-based cytokine protein changes using multiplexed microsphere flow cytometric assays based on Luminex technology. Cell-specific cytokine production is determined with double-label immunohistochemistry. Taken together, this brain tissue preparation and style of use, coupled to the suggested investigative strategies, may be an optimal approach for identifying potential targets for the development of novel therapeutics that could mimic the advantages of cold-preconditioning.
Neuroscience, Issue 43, innate immunity, hormesis, microglia, hippocampus, slice culture, immunohistochemistry, neural-immune, gene expression, real-time PCR
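The relative-quantification arithmetic behind qPCR screening of this kind is typically the 2^-ddCt method, comparing a target gene (e.g., TNF-α) with a reference gene across treated and control cultures. The Ct values below are invented; the abstract does not state which quantification model the authors used, so this is a generic sketch.

```python
# 2^-ddCt relative expression: t = treated (e.g., cold-preconditioned), c = control.
def fold_change(ct_target_t, ct_ref_t, ct_target_c, ct_ref_c):
    d_ct_treated = ct_target_t - ct_ref_t     # normalize target to reference gene
    d_ct_control = ct_target_c - ct_ref_c
    return 2 ** -(d_ct_treated - d_ct_control)

# Target up ~4-fold in this made-up example:
print(f"{fold_change(24.0, 18.0, 26.0, 18.0):.1f}x")
```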
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Authors: Amir Karniel, Guy Avraham, Bat-Chen Peles, Shelly Levy-Tzedek, Ilana Nisky.
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced-choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it the "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG: (i) by calculating the proportion of subjects' answers judging the model to be more human-like than the human; (ii) by comparing two weighted sums of human and model handshakes, we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) by comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
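The psychometric-curve step in methods (ii) and (iii) amounts to fitting a sigmoid to the interrogator's responses and reading off the 50% point. The sketch below uses simulated data and a logistic function; the authors' exact fitting procedure is not specified in the abstract.

```python
# Fit a logistic psychometric curve and extract the PSE at the 50% point.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

human_weight = np.linspace(0, 1, 9)      # weight of human signal in the stimulus
p_judged_human = np.array([.05, .08, .15, .30, .55, .70, .85, .92, .97])  # simulated

(pse, slope), _ = curve_fit(logistic, human_weight, p_judged_human, p0=[0.5, 10])
print(f"PSE = {pse:.2f} (weight judged human-like 50% of the time)")
```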
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago and Rehabilitation Institute of Chicago, Rehabilitation Institute of Chicago.
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another robot without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
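The paper's wrapper is a C++ class registered with H3DAPI; the same design pattern, sketched here in Python for brevity, is one abstract device interface that every driver implements, so application code never names a specific robot. The class and method names below are illustrative, not the H3DAPI interface.

```python
# Wrapper-class pattern: application code depends only on an abstract interface.
from abc import ABC, abstractmethod

class HapticDevice(ABC):
    """Uniform interface; swapping robots means swapping one subclass."""

    @abstractmethod
    def get_position(self) -> tuple[float, float, float]: ...

    @abstractmethod
    def set_force(self, fx: float, fy: float, fz: float) -> None: ...

class SimulatedDevice(HapticDevice):
    """Stand-in driver; a real subclass would wrap a vendor SDK's calls."""
    def get_position(self):
        return (0.01, -0.02, 0.0)
    def set_force(self, fx, fy, fz):
        print(f"force command: ({fx:.2f}, {fy:.2f}, {fz:.2f}) N")

def render_spring(device: HapticDevice, stiffness: float = 50.0) -> None:
    """Device-agnostic haptic effect: pull the stylus back toward the origin."""
    x, y, z = device.get_position()
    device.set_force(-stiffness * x, -stiffness * y, -stiffness * z)

render_spring(SimulatedDevice())   # works unchanged for any HapticDevice subclass
```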
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
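The arithmetic behind the pour- and spread-plate counts is standard: colonies on a countable plate, scaled by the dilution and the plated volume, give CFU/mL. The counts in this worked example are invented.

```python
# CFU/mL from a plate count, total dilution, and plated volume.
def cfu_per_ml(colonies: int, dilution: float, volume_ml: float) -> float:
    """dilution: total dilution of the plated sample, e.g., 1e-6."""
    return colonies / (dilution * volume_ml)

# 142 colonies from plating 0.1 mL of a 10^-6 dilution:
print(f"{cfu_per_ml(142, 1e-6, 0.1):.2e} CFU/mL")   # -> 1.42e+09
```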
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: a midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP, such as texture information and blood amount, are extracted from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
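The classifier stage can be sketched in a few lines. The paper built its model in RapidMiner; scikit-learn is substituted here, and the feature matrix is randomly generated, so the reported score is meaningless except as a template for the train/validate loop.

```python
# SVM pre-screening sketch: features -> elevated-ICP classification.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))        # e.g., texture stats, blood amount, age, ISS
y = rng.integers(0, 2, size=120)     # 1 = elevated ICP, 0 = normal (synthetic)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```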
Adapting Human Videofluoroscopic Swallow Study Methods to Detect and Characterize Dysphagia in Murine Disease Models
Authors: Teresa E. Lever, Sabrina M. Braun, Ryan T. Brooks, Rebecca A. Harris, Loren L. Littrell, Ryan M. Neff, Cameron J. Hinkel, Mitchell J. Allen, Mollie A. Ulsas.
Institutions: University of Missouri.
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy level capability of the fluoroscopy system will determine which swallow parameters can be investigated. Most research centers have high energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, which is nearly double the number obtained using conventional (i.e., high energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high energy fluoroscopes to investigate murine disease models.
Medicine, Issue 97, mouse, murine, rodent, swallowing, deglutition, dysphagia, videofluoroscopy, radiation, iohexol, barium, palatability, taste, translational, disease models
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only slightly related.