Pubmed Article
Using workflows to explore and optimise named entity recognition for chemistry.
PLoS ONE
PUBLISHED: 04-27-2011
Chemistry text mining tools should be interoperable and adaptable regardless of system-level implementation, installation or even programming issues. We aim to abstract the functionality of these tools from the underlying implementation via reconfigurable workflows for automatically identifying chemical names. To achieve this, we refactored OSCAR, an established named entity recogniser in the chemistry domain, and studied the impact of each component on the net performance. We developed two reconfigurable workflows from OSCAR using an interoperable text mining framework, U-Compare. These workflows can be altered using the drag-and-drop mechanism of the graphical user interface of U-Compare. They also provide a platform to study the relationship between text mining components such as tokenisation and named entity recognition (using maximum entropy Markov model (MEMM) and pattern recognition based classifiers). Results indicate that, for chemistry in particular, tokenisation techniques that eliminate noise lead to slightly better named entity recognition (NER) accuracy than others. Poor tokenisation translates into poorer input to the classifier components, which in turn leads to an increase in Type I or Type II errors and lowers the overall performance. On the Sciborg corpus, the workflow-based system, which uses a new tokeniser whilst retaining the same MEMM component, increases the F-score from 82.35% to 84.44%. On the PubMed corpus, it records an F-score of 84.84%, compared with 84.23% for OSCAR.
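The F-scores quoted above come from entity-level precision and recall. A minimal sketch of how such a score is computed from gold and predicted entity spans (the example spans and the "CM" label are hypothetical, not taken from the OSCAR corpora):

```python
def ner_f1(gold, predicted):
    """Entity-level F1: an entity counts as correct only if its span and
    label both match exactly (the usual strict NER criterion)."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)   # exact matches
    fp = len(predicted - gold)   # spurious predictions (Type I errors)
    fn = len(gold - predicted)   # missed entities (Type II errors)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# One correct entity, one miss, one spurious prediction:
gold = [(0, 7, "CM"), (12, 19, "CM")]
pred = [(0, 7, "CM"), (20, 24, "CM")]
print(ner_f1(gold, pred))  # → 0.5
```

This strict criterion is why poor tokenisation hurts: a shifted token boundary turns an otherwise-correct entity into both a false positive and a false negative at once.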
ABSTRACT
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion, developed in our previous study, has demonstrated great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller while performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent, continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform, experimental setup, and protocol could aid the future development and application of neurally controlled powered artificial legs.
25 Related JoVE Articles!
Genomic MRI - a Public Resource for Studying Sequence Patterns within Genomic DNA
Authors: Ashwin Prakash, Jason Bechtel, Alexei Fedorov.
Institutions: University of Toledo Health Science Campus.
Non-coding genomic regions in complex eukaryotes, including intergenic areas, introns, and untranslated segments of exons, are profoundly non-random in their nucleotide composition and consist of a complex mosaic of sequence patterns. These patterns include so-called Mid-Range Inhomogeneity (MRI) regions -- sequences 30-10000 nucleotides in length that are enriched in a particular base or combination of bases (e.g. (G+T)-rich, purine-rich, etc.). MRI regions are associated with unusual (non-B-form) DNA structures that are often involved in regulation of gene expression, recombination, and other genetic processes (Fedorova & Fedorov 2010). The existence of a strong fixation bias within MRI regions against mutations that tend to reduce their sequence inhomogeneity additionally supports the functionality and importance of these genomic sequences (Prakash et al. 2009). Here we demonstrate a freely available Internet resource -- the Genomic MRI program package -- designed for computational analysis of genomic sequences in order to find and characterize various MRI patterns within them (Bechtel et al. 2008). This package also allows generation of randomized sequences with various properties and levels of correspondence to the natural input DNA sequences. The main goal of this resource is to facilitate examination of vast regions of non-coding DNA that are still scarcely investigated and await thorough exploration and recognition.
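The core idea of an MRI scan, finding stretches enriched in a chosen base combination, can be sketched as a sliding-window composition test. This is an illustration of the concept only; the window size and threshold below are invented, not the Genomic MRI package's defaults:

```python
def find_mri_regions(seq, bases="GT", window=20, threshold=0.9):
    """Slide a fixed-size window along seq and report start positions where
    the combined fraction of `bases` meets the threshold (an MRI-like signal)."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - window + 1):
        win = seq[i:i + window]
        frac = sum(win.count(b) for b in bases) / window
        if frac >= threshold:
            hits.append(i)
    return hits

# A (G+T)-rich stretch followed by an (A+C)-rich stretch:
hits = find_mri_regions("GT" * 30 + "ACAC" * 20)
print(hits[0], hits[-1])  # → 0 42
```

A real scan would also merge overlapping windows into maximal regions and assess significance against randomized sequences, as the package's randomization feature suggests.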
Genetics, Issue 51, bioinformatics, computational biology, genomics, non-randomness, signals, gene regulation, DNA conformation
2663
Play Button
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside, University of California, Riverside, University of São Paulo - USP, ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models to be learned efficiently and to remain very robust to over-fitting; and that a general classification framework allows an arbitrary number of features to be incorporated easily. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
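A toy version of the Bayesian approach can be sketched with a one-feature Gaussian naive Bayes classifier. The feature (wingbeat frequency) and the training values below are invented for illustration; the paper's framework combines many more features:

```python
import math

class GaussianNB1D:
    """Minimal one-feature Gaussian naive Bayes classifier (sketch only)."""

    def fit(self, values, labels):
        # Per-class mean, variance, and prior estimated from training data.
        self.stats = {}
        for lab in set(labels):
            xs = [v for v, l in zip(values, labels) if l == lab]
            mu = sum(xs) / len(xs)
            var = sum((x - mu) ** 2 for x in xs) / len(xs) or 1e-9
            self.stats[lab] = (mu, var, len(xs) / len(values))
        return self

    def predict(self, x):
        # Pick the class with the highest log posterior.
        def log_post(lab):
            mu, var, prior = self.stats[lab]
            return (math.log(prior)
                    - 0.5 * math.log(2 * math.pi * var)
                    - (x - mu) ** 2 / (2 * var))
        return max(self.stats, key=log_post)

# Hypothetical wingbeat frequencies (Hz) for two species:
clf = GaussianNB1D().fit([590, 600, 610, 190, 200, 210],
                         ["mosquito"] * 3 + ["fly"] * 3)
print(clf.predict(605))  # → mosquito
```

Because the class-conditional densities are estimated from only a few summary statistics, such models tend to resist over-fitting, which is one reason the authors favor a Bayesian framework.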
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
Sequence-specific Labeling of Nucleic Acids and Proteins with Methyltransferases and Cofactor Analogues
Authors: Gisela Maria Hanz, Britta Jung, Anna Giesbertz, Matyas Juhasz, Elmar Weinhold.
Institutions: RWTH Aachen University.
S-Adenosyl-l-methionine (AdoMet or SAM)-dependent methyltransferases (MTase) catalyze the transfer of the activated methyl group from AdoMet to specific positions in DNA, RNA, proteins and small biomolecules. This natural methylation reaction can be expanded to a wide variety of alkylation reactions using synthetic cofactor analogues. Replacement of the reactive sulfonium center of AdoMet with an aziridine ring leads to cofactors which can be coupled with DNA by various DNA MTases. These aziridine cofactors can be equipped with reporter groups at different positions of the adenine moiety and used for Sequence-specific Methyltransferase-Induced Labeling of DNA (SMILing DNA). As a typical example we give a protocol for biotinylation of pBR322 plasmid DNA at the 5’-ATCGAT-3’ sequence with the DNA MTase M.BseCI and the aziridine cofactor 6BAz in one step. Extension of the activated methyl group with unsaturated alkyl groups results in another class of AdoMet analogues which are used for methyltransferase-directed Transfer of Activated Groups (mTAG). Since the extended side chains are activated by the sulfonium center and the unsaturated bond, these cofactors are called double-activated AdoMet analogues. These analogues not only function as cofactors for DNA MTases, like the aziridine cofactors, but also for RNA, protein and small molecule MTases. They are typically used for enzymatic modification of MTase substrates with unique functional groups which are labeled with reporter groups in a second chemical step. This is exemplified in a protocol for fluorescence labeling of histone H3 protein. A small propargyl group is transferred from the cofactor analogue SeAdoYn to the protein by the histone H3 lysine 4 (H3K4) MTase Set7/9 followed by click labeling of the alkynylated histone H3 with TAMRA azide. 
MTase-mediated labeling with cofactor analogues is an enabling technology for many exciting applications including identification and functional study of MTase substrates as well as DNA genotyping and methylation detection.
Biochemistry, Issue 93, S-adenosyl-l-methionine, AdoMet, SAM, aziridine cofactor, double activated cofactor, methyltransferase, DNA methylation, protein methylation, biotin labeling, fluorescence labeling, SMILing, mTAG
A Step Beyond BRET: Fluorescence by Unbound Excitation from Luminescence (FUEL)
Authors: Joseph Dragavon, Carolyn Sinow, Alexandra D. Holland, Abdessalem Rekiki, Ioanna Theodorou, Chelsea Samson, Samantha Blazquez, Kelly L. Rogers, Régis Tournebize, Spencer L. Shorte.
Institutions: Institut Pasteur, Stanford School of Medicine, Institut d'Imagerie Biomédicale, Vanderbilt School of Medicine, The Walter & Eliza Hall Institute of Medical Research, Institut Pasteur, Institut Pasteur.
Fluorescence by Unbound Excitation from Luminescence (FUEL) is a radiative excitation-emission process that produces increased signal and contrast enhancement in vitro and in vivo. FUEL shares many of the same underlying principles as Bioluminescence Resonance Energy Transfer (BRET), yet greatly differs in the acceptable working distances between the luminescent source and the fluorescent entity. While BRET is effectively limited to a maximum of 2 times the Förster radius, commonly less than 14 nm, FUEL can occur at distances of micrometers or even centimeters in the absence of an optical absorber. Here we expand upon the foundation and applicability of FUEL by reviewing the relevant principles behind the phenomenon and demonstrate its compatibility with a wide variety of fluorophores and fluorescent nanoparticles. Further, the utility of antibody-targeted FUEL is explored. The examples shown here provide evidence that FUEL can be utilized for applications where BRET is not possible, filling the spatial void that exists between BRET and traditional whole animal imaging.
Bioengineering, Issue 87, Biochemical Phenomena, Biochemical Processes, Energy Transfer, Fluorescence Resonance Energy Transfer (FRET), FUEL, BRET, CRET, Förster, bioluminescence, In vivo
Large Scale Non-targeted Metabolomic Profiling of Serum by Ultra Performance Liquid Chromatography-Mass Spectrometry (UPLC-MS)
Authors: Corey D. Broeckling, Adam L. Heuberger, Jessica E. Prenni.
Institutions: Colorado State University.
Non-targeted metabolite profiling by ultra performance liquid chromatography coupled with mass spectrometry (UPLC-MS) is a powerful technique to investigate metabolism. The approach offers an unbiased and in-depth analysis that can enable the development of diagnostic tests and novel therapies, and can further our understanding of disease processes. The inherent chemical diversity of the metabolome creates significant analytical challenges, and there is no single experimental approach that can detect all metabolites. Additionally, the biological variation in individual metabolism and the dependence of metabolism on environmental factors necessitate large sample numbers to achieve the statistical power required for meaningful biological interpretation. To address these challenges, this tutorial outlines an analytical workflow for large scale non-targeted metabolite profiling of serum by UPLC-MS. The procedure includes guidelines for sample organization and preparation, data acquisition, quality control, and metabolite identification, and will enable reliable acquisition of data for large experiments and provide a starting point for laboratories new to non-targeted metabolite profiling by UPLC-MS.
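One common guard against run-order drift in large batches is to randomize sample injection order and interleave pooled-QC injections. A sketch of how such a run list might be built; the QC spacing and naming below are illustrative choices, not values from this protocol:

```python
import random

def build_run_order(samples, qc_every=5, seed=42):
    """Shuffle sample order and insert a pooled-QC injection every
    `qc_every` samples, plus one at the start and end of the batch."""
    rng = random.Random(seed)  # fixed seed so the run list is reproducible
    order = samples[:]
    rng.shuffle(order)
    run = []
    for i, s in enumerate(order):
        if i % qc_every == 0:
            run.append("QC_pool")
        run.append(s)
    run.append("QC_pool")
    return run

run = build_run_order([f"S{i}" for i in range(10)])
# Batch is bracketed by QC injections; every sample appears exactly once.
```

Randomization decouples biological groups from acquisition order, and the repeated QC-pool injections let analysts monitor instrument drift across the batch.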
Chemistry, Issue 73, Biochemistry, Genetics, Molecular Biology, Physiology, Genomics, Proteins, Proteomics, Metabolomics, Metabolite Profiling, Non-targeted metabolite profiling, mass spectrometry, Ultra Performance Liquid Chromatography, UPLC-MS, serum, spectrometry
Whole-cell MALDI-TOF Mass Spectrometry is an Accurate and Rapid Method to Analyze Different Modes of Macrophage Activation
Authors: Richard Ouedraogo, Aurélie Daumas, Christian Capo, Jean-Louis Mege, Julien Textoris.
Institutions: Aix Marseille Université, Hôpital de la Timone.
MALDI-TOF is an extensively used mass spectrometry technique in chemistry and biochemistry. It has also been applied in medicine to identify molecules and biomarkers. Recently, it has been used in microbiology for the routine identification of bacteria grown from clinical samples, without preparation or fractionation steps. We and others have applied this whole-cell MALDI-TOF mass spectrometry technique successfully to eukaryotic cells. Current applications range from cell type identification to quality control assessment of cell culture and diagnostic applications. Here, we describe its use to explore the various polarization phenotypes of macrophages in response to cytokines or heat-killed bacteria. It allowed the identification of macrophage-specific fingerprints that are representative of the diversity of proteomic responses of macrophages. This application illustrates the accuracy and simplicity of the method. The protocol we describe here may be useful for studying the immune host response in pathological conditions or may be extended to wider diagnostic applications.
Immunology, Issue 82, MALDI-TOF, mass spectrometry, fingerprint, Macrophages, activation, IFN-g, TNF, LPS, IL-4, bacterial pathogens
Isolation and Chemical Characterization of Lipid A from Gram-negative Bacteria
Authors: Jeremy C. Henderson, John P. O'Brien, Jennifer S. Brodbelt, M. Stephen Trent.
Institutions: The University of Texas at Austin, The University of Texas at Austin, The University of Texas at Austin.
Lipopolysaccharide (LPS) is the major cell surface molecule of gram-negative bacteria, deposited on the outer leaflet of the outer membrane bilayer. LPS can be subdivided into three domains: the distal O-polysaccharide, a core oligosaccharide, and the lipid A domain consisting of a lipid A molecular species and 3-deoxy-D-manno-oct-2-ulosonic acid residues (Kdo). The lipid A domain is the only component essential for bacterial cell survival. Following its synthesis, lipid A is chemically modified in response to environmental stresses such as pH or temperature, to promote resistance to antibiotic compounds, and to evade recognition by mediators of the host innate immune response. The following protocol details the small- and large-scale isolation of lipid A from gram-negative bacteria. Isolated material is then chemically characterized by thin layer chromatography (TLC) or mass spectrometry (MS). In addition to matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) MS, we also describe tandem MS protocols for analyzing lipid A molecular species using electrospray ionization (ESI) coupled to collision induced dissociation (CID) and newly employed ultraviolet photodissociation (UVPD) methods. Our MS protocols allow for unequivocal determination of chemical structure, which is paramount to the characterization of lipid A molecules that contain unique or novel chemical modifications. We also describe the radioisotopic labeling, and subsequent isolation, of lipid A from bacterial cells for analysis by TLC. Relative to MS-based protocols, TLC provides a more economical and rapid characterization method, but cannot be used to unambiguously assign lipid A chemical structures without standards of known chemical structure.
Over the last two decades isolation and characterization of lipid A has led to numerous exciting discoveries that have improved our understanding of the physiology of gram-negative bacteria, mechanisms of antibiotic resistance, the human innate immune response, and have provided many new targets in the development of antibacterial compounds.
Chemistry, Issue 79, Membrane Lipids, Toll-Like Receptors, Endotoxins, Glycolipids, Lipopolysaccharides, Lipid A, Microbiology, Lipids, lipid A, Bligh-Dyer, thin layer chromatography (TLC), lipopolysaccharide, mass spectrometry, Collision Induced Dissociation (CID), Photodissociation (PD)
Activation and Measurement of NLRP3 Inflammasome Activity Using IL-1β in Human Monocyte-derived Dendritic Cells
Authors: Melissa V. Fernandez, Elizabeth A. Miller, Nina Bhardwaj.
Institutions: New York University School of Medicine, Mount Sinai Medical Center, Mount Sinai Medical Center.
Inflammatory processes resulting from the secretion of Interleukin (IL)-1 family cytokines by immune cells lead to local or systemic inflammation, tissue remodeling and repair, and virologic control [1,2]. Interleukin-1β is an essential element of the innate immune response and contributes to the elimination of invading pathogens while preventing the establishment of persistent infection [1-5]. Inflammasomes are the key signaling platform for the activation of interleukin-1 converting enzyme (ICE or Caspase-1). The NLRP3 inflammasome requires at least two signals in DCs to cause IL-1β secretion [6]. Pro-IL-1β protein expression is limited in resting cells; therefore a priming signal is required for IL-1β transcription and protein expression. A second signal sensed by NLRP3 results in the formation of the multi-protein NLRP3 inflammasome. The ability of dendritic cells to respond to the signals required for IL-1β secretion can be tested using a synthetic purine, R848, which is sensed by TLR8 in human monocyte-derived dendritic cells (moDCs) to prime cells, followed by activation of the NLRP3 inflammasome with the bacterial toxin and potassium ionophore, nigericin. Monocyte-derived DCs are easily produced in culture and provide significantly more cells than purified human myeloid DCs. The method presented here differs from other inflammasome assays in that it uses in vitro human, rather than mouse-derived, DCs, thus allowing for the study of the inflammasome in human disease and infection.
Immunology, Issue 87, NLRP3, inflammasome, IL-1beta, Interleukin-1 beta, dendritic, cell, Nigericin, Toll-Like Receptor 8, TLR8, R848, Monocyte Derived Dendritic Cells
Untargeted Metabolomics from Biological Sources Using Ultraperformance Liquid Chromatography-High Resolution Mass Spectrometry (UPLC-HRMS)
Authors: Nathaniel W. Snyder, Maya Khezam, Clementina A. Mesaros, Andrew Worth, Ian A. Blair.
Institutions: University of Pennsylvania .
Here we present a workflow to analyze the metabolic profiles of biological samples of interest, including cells, serum, or tissue. The sample is first separated into polar and non-polar fractions by a liquid-liquid phase extraction, and partially purified to facilitate downstream analysis. Both the aqueous (polar metabolites) and organic (non-polar metabolites) phases of the initial extraction are processed to survey a broad range of metabolites. Metabolites are separated by different liquid chromatography methods based upon their partition properties. In this method, we present microflow ultra-performance liquid chromatography (UPLC) methods, but the protocol is scalable to higher flows and lower pressures. Introduction into the mass spectrometer can be through either general or compound-optimized source conditions. Detection of a broad range of ions is carried out in full scan mode, in both positive and negative ion modes, over a broad m/z range at high resolution on a recently calibrated instrument. Label-free differential analysis is carried out on bioinformatics platforms. Applications of this approach include metabolic pathway screening, biomarker discovery, and drug development.
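The first step in annotating peaks from such a high-resolution full-scan run is matching each observed m/z against candidate metabolite masses within a ppm tolerance. A sketch of that operation; the candidate dictionary and tolerance are illustrative, not part of this workflow's software:

```python
def ppm_match(observed_mz, candidates, tol_ppm=5.0):
    """Return the names of candidate ions whose theoretical m/z lies
    within tol_ppm parts-per-million of the observed m/z."""
    return [name for name, mz in candidates.items()
            if abs(observed_mz - mz) / mz * 1e6 <= tol_ppm]

# Hypothetical [M-H]- candidates for negative-mode annotation:
candidates = {"glucose_[M-H]-": 179.0561, "citrate_[M-H]-": 191.0197}
print(ppm_match(179.0563, candidates))  # → ['glucose_[M-H]-']
```

The narrow ppm window is what makes the recently calibrated, high-resolution instrument essential: at lower resolution, many isobaric metabolites would fall inside the same tolerance.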
Biochemistry, Issue 75, Chemistry, Molecular Biology, Cellular Biology, Physiology, Medicine, Pharmacology, Genetics, Genomics, Mass Spectrometry, MS, Metabolism, Metabolomics, untargeted, extraction, lipids, accurate mass, liquid chromatography, ultraperformance liquid chromatography, UPLC, high resolution mass spectrometry, HRMS, spectrometry
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13].
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
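Before any maximum likelihood or Bayesian reconstruction, most pipelines start from pairwise distances between aligned sequences. A minimal sketch of the simplest such measure, the p-distance (proportion of differing sites); real pipelines use the model-based distances and software the guide describes, and the sequences below are invented:

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences,
    ignoring positions where either sequence has a gap ('-')."""
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    return sum(x != y for x, y in pairs) / len(pairs)

def distance_matrix(seqs):
    """All-vs-all p-distance matrix from a dict of aligned sequences."""
    names = list(seqs)
    return {(p, q): p_distance(seqs[p], seqs[q]) for p in names for q in names}

dm = distance_matrix({"A": "ACGT", "B": "ACGA", "C": "TCGA"})
print(dm[("A", "C")])  # → 0.5
```

Distance matrices like this feed neighbor-joining starting trees, which likelihood and Bayesian programs then refine under an explicit model of evolution.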
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Protease- and Acid-catalyzed Labeling Workflows Employing 18O-enriched Water
Authors: Diana Klingler, Markus Hardt.
Institutions: Boston Biomedical Research Institute.
Stable isotopes are essential tools in biological mass spectrometry. Historically, 18O-stable isotopes have been extensively used to study the catalytic mechanisms of proteolytic enzymes [1-3]. With the advent of mass spectrometry-based proteomics, the enzymatically-catalyzed incorporation of 18O-atoms from stable isotopically enriched water has become a popular method to quantitatively compare protein expression levels (reviewed by Fenselau and Yao [4], Miyagi and Rao [5], and Ye et al. [6]). 18O-labeling constitutes a simple and low-cost alternative to chemical (e.g. iTRAQ, ICAT) and metabolic (e.g. SILAC) labeling techniques [7]. Depending on the protease utilized, 18O-labeling can result in the incorporation of up to two 18O-atoms in the C-terminal carboxyl group of the cleavage product [3]. The labeling reaction can be subdivided into two independent processes, the peptide bond cleavage and the carboxyl oxygen exchange reaction [8]. In our PALeO (protease-assisted labeling employing 18O-enriched water) adaptation of enzymatic 18O-labeling, we utilized 50% 18O-enriched water to yield distinctive isotope signatures. In combination with high-resolution matrix-assisted laser desorption ionization time-of-flight tandem mass spectrometry (MALDI-TOF/TOF MS/MS), the characteristic isotope envelopes can be used to identify cleavage products with a high level of specificity. We previously have used the PALeO methodology to detect and characterize endogenous proteases [9] and monitor proteolytic reactions [10,11]. Since PALeO encodes the very essence of the proteolytic cleavage reaction, the experimental setup is simple and biochemical enrichment steps of cleavage products can be circumvented. The PALeO method can easily be extended to (i) time course experiments that monitor the dynamics of proteolytic cleavage reactions and (ii) the analysis of proteolysis in complex biological samples that represent physiological conditions.
PALeO-TimeCourse experiments help identify rate-limiting processing steps and reaction intermediates in complex proteolytic pathway reactions. Furthermore, the PALeO reaction allows us to identify proteolytic enzymes, such as the serine protease trypsin, that are capable of rebinding their cleavage products and catalyzing the incorporation of a second 18O-atom. Such "double-labeling" enzymes can be used for postdigestion 18O-labeling, in which peptides are exclusively labeled by the carboxyl oxygen exchange reaction. Our third strategy extends labeling employing 18O-enriched water beyond enzymes and uses acidic pH conditions to introduce 18O-stable isotope signatures into peptides.
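The distinctive signature from 50% 18O-enriched water follows simple binomial statistics over the (up to two) exchangeable carboxyl oxygens. A sketch of the expected label distribution, not the authors' analysis code; the 18O-16O mass difference is the standard value of about 2.004 Da:

```python
from math import comb

O18_SHIFT = 2.00425  # mass difference between 18O and 16O, in Da

def label_distribution(enrichment=0.5, n_sites=2):
    """Binomial probability of incorporating k 18O-atoms over n_sites
    exchangeable carboxyl oxygens, keyed by the resulting mass shift."""
    return {k * O18_SHIFT: comb(n_sites, k)
            * enrichment ** k * (1 - enrichment) ** (n_sites - k)
            for k in range(n_sites + 1)}

# With 50% enriched water and two sites: a 1:2:1 triplet at +0, +2, +4 Da.
env = label_distribution()
```

The observed isotope envelope is this distribution convolved with the peptide's natural isotope pattern, which is what makes fully 18O-exchanged cleavage products stand out in the MALDI-TOF/TOF spectra.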
Biochemistry, Issue 72, Molecular Biology, Proteins, Proteomics, Chemistry, Physics, MALDI-TOF mass spectrometry, proteomics, proteolysis, quantification, stable isotope labeling, labeling, catalyst, peptides, 18-O enriched water
Rapid Analysis and Exploration of Fluorescence Microscopy Images
Authors: Benjamin Pavie, Satwik Rajaram, Austin Ouyang, Jason M. Altschuler, Robert J. Steininger III, Lani F. Wu, Steven J. Altschuler.
Institutions: UT Southwestern Medical Center, UT Southwestern Medical Center, Princeton University.
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine-tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify responses to perturbations and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control, but may also be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
Basic Protocol, Issue 85, PhenoRipper, fluorescence microscopy, image analysis, High-content analysis, high-throughput screening, Open-source, Phenotype
51280
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
51047
Reduced-gravity Environment Hardware Demonstrations of a Prototype Miniaturized Flow Cytometer and Companion Microfluidic Mixing Technology
Authors: William S. Phipps, Zhizhong Yin, Candice Bae, Julia Z. Sharpe, Andrew M. Bishara, Emily S. Nelson, Aaron S. Weaver, Daniel Brown, Terri L. McKay, DeVon Griffin, Eugene Y. Chan.
Institutions: DNA Medicine Institute, Harvard Medical School, NASA Glenn Research Center, ZIN Technologies.
Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory-scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to navigate when seeking to test new technology. To help fill the void, we present a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a reduced-gravity parabolic-flight aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs, and some custom components, such as the microvolume sample loader and the micromixer, may be of particular interest. The method then shifts focus to flight preparation, offering guidelines and suggestions to prepare for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.
Cellular Biology, Issue 93, Point-of-care, prototype, diagnostics, spaceflight, reduced gravity, parabolic flight, flow cytometry, fluorescence, cell counting, micromixing, spiral-vortex, blood mixing
51743
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
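The first analysis step above, extracting oriented texture with a bank of Gabor filters, can be sketched as follows: build quadrature Gabor kernels at several orientations, filter the image with each, and keep the orientation with the strongest magnitude response per pixel. This is an illustrative Python/numpy example, not the authors' implementation; the kernel size, wavelength, and orientation count are arbitrary assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(theta, ksize=15, sigma=3.0, lam=8.0):
    """Complex (quadrature) Gabor kernel oriented at angle theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.exp(2j * np.pi * xr / lam)
    return g - g.mean()  # zero-mean, so flat regions give no response

def dominant_orientation(image, n_angles=8, ksize=15):
    """For each interior pixel, the index of the strongest-responding orientation."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    patches = sliding_window_view(image, (ksize, ksize))
    responses = np.stack([
        np.abs(np.tensordot(patches, gabor_kernel(t, ksize), axes=([2, 3], [0, 1])))
        for t in angles])
    return angles, np.argmax(responses, axis=0)
```

An orientation field like this is the raw material for the phase-portrait and node-map analysis; those later stages are beyond the scope of this sketch.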
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
50341
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Authors: Christopher J. Zopf, Narendra Maheshri.
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
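The trace post-processing mentioned above (first and second time derivatives of a whole-cell trace) might look like this in outline: a minimal numpy sketch assuming a uniformly sampled trace and simple moving-average smoothing. The GRAFTS package itself is MATLAB-based and considerably more sophisticated.

```python
import numpy as np

def trace_derivatives(t, f, smooth=3):
    """Smooth a single-cell fluorescence trace with a moving average,
    then take first and second time derivatives (finite differences)."""
    kernel = np.ones(smooth) / smooth
    fs = np.convolve(f, kernel, mode="same")
    df = np.gradient(fs, t)    # first derivative
    d2f = np.gradient(df, t)   # second derivative
    return fs, df, d2f
```

Note the edge points are distorted by the `mode="same"` convolution and one-sided differences; in practice the first and last few samples of each trace should be treated with caution.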
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
50456
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
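The sequence selection idea, searching sequence space for lower potential energy, can be caricatured with a toy greedy search. Everything here is a stand-in: the energy function is invented purely for illustration and bears no relation to the physics-based potentials or optimization machinery used by Protein WISDOM.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AILMFVWY")

def toy_energy(seq):
    """Invented stand-in energy: penalize positions whose hydrophobicity
    does not match an (arbitrary) alternating target pattern."""
    return sum(
        0 if (aa in HYDROPHOBIC) == (i % 2 == 0) else 1
        for i, aa in enumerate(seq))

def greedy_design(seq, n_steps=200, seed=0):
    """Greedily accept single-site substitutions that do not raise the energy."""
    rng = random.Random(seed)
    seq = list(seq)
    e = toy_energy(seq)
    for _ in range(n_steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)
        e_new = toy_energy(seq)
        if e_new <= e:
            e = e_new
        else:
            seq[i] = old
    return "".join(seq), e
```

Real sequence selection must search a space of 20^n sequences under a rugged, physically grounded energy landscape, which is why rigorous optimization methods, rather than greedy substitution, are required.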
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
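The cortical source reconstruction described above is commonly performed with L2 minimum-norm estimation, which has a compact closed form: sources are estimated as x_hat = G.T (G G.T + lam I)^-1 y, where G is the gain (lead-field) matrix derived from the head model and y the sensor data. A minimal numpy sketch, assuming a precomputed gain matrix; real pipelines add noise-covariance whitening, depth weighting, and per-time-point application.

```python
import numpy as np

def minimum_norm_estimate(gain, sensors, lam=1e-2):
    """L2 minimum-norm inverse solution:
        x_hat = G.T @ inv(G @ G.T + lam * I) @ y
    gain:    (n_sensors, n_sources) lead-field matrix from the head model
    sensors: (n_sensors,) measured EEG at one time point
    lam:     Tikhonov regularization weight
    """
    n_sensors = gain.shape[0]
    inverse_operator = gain.T @ np.linalg.inv(
        gain @ gain.T + lam * np.eye(n_sensors))
    return inverse_operator @ sensors
```

Because the gain matrix comes from the head model, this is exactly where individual or age-specific MRI-based models, rather than adult templates, change the result for pediatric data.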
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
A Chemical Screening Procedure for Glucocorticoid Signaling with a Zebrafish Larva Luciferase Reporter System
Authors: Benjamin D. Weger, Meltem Weger, Nicole Jung, Christin Lederer, Stefan Bräse, Thomas Dickmeis.
Institutions: Karlsruhe Institute of Technology - Campus North, Karlsruhe Institute of Technology - Campus North, Karlsruhe Institute of Technology - Campus South.
Glucocorticoid stress hormones and their artificial derivatives are widely used drugs to treat inflammation, but long-term treatment with glucocorticoids can lead to severe side effects. Test systems are needed to search for novel compounds influencing glucocorticoid signaling in vivo or to determine unwanted effects of compounds on the glucocorticoid signaling pathway. We have established a transgenic zebrafish assay which allows the measurement of glucocorticoid signaling activity in vivo and in real-time, the GRIZLY assay (Glucocorticoid Responsive In vivo Zebrafish Luciferase activitY). The luciferase-based assay detects effects on glucocorticoid signaling with high sensitivity and specificity, including effects by compounds that require metabolization or affect endogenous glucocorticoid production. We present here a detailed protocol for conducting chemical screens with this assay. We describe data acquisition, normalization, and analysis, placing a focus on quality control and data visualization. The assay provides a simple, time-resolved, and quantitative readout. It can be operated as a stand-alone platform, but is also easily integrated into high-throughput screening workflows. It furthermore allows for many applications beyond chemical screening, such as environmental monitoring of endocrine disruptors or stress research.
Developmental Biology, Issue 79, Biochemistry, Vertebrates, Zebrafish, environmental effects (biological and animal), genetics (animal), life sciences, animal biology, animal models, biochemistry, bioengineering (general), Hormones, Hormone Substitutes, and Hormone Antagonists, zebrafish, Danio rerio, chemical screening, luciferase, glucocorticoid, stress, high-throughput screening, receiver operating characteristic curve, in vivo, animal model
50439
Major Components of the Light Microscope
Authors: Victoria Centonze Frohlich.
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
The light microscope is a basic tool for the cell biologist, who should have a thorough understanding of how it works, how it should be aligned for different applications, and how it should be maintained as required to obtain maximum image-forming capacity and resolution. The components of the microscope are described in detail here.
Basic Protocols, Issue 17, Current Protocols Wiley, Microscopy, Objectives, Condenser, Eyepiece
843
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
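The ideal-midline step can be illustrated with a symmetry search: test candidate columns near the center of an axial slice and keep the one whose left half best mirrors its right half. An illustrative numpy sketch only; the published method also exploits anatomical features and skull geometry, which are omitted here.

```python
import numpy as np

def ideal_midline(slice2d, search_radius=10):
    """Estimate the ideal midline column of an axial slice by testing
    candidate columns near the image center and picking the one whose
    left half best mirrors its right half (lowest mean squared error)."""
    h, w = slice2d.shape
    center = w // 2
    best_col, best_err = center, np.inf
    for c in range(center - search_radius, center + search_radius + 1):
        half = min(c, w - 1 - c)
        left = slice2d[:, c - half:c]
        right = slice2d[:, c + 1:c + 1 + half][:, ::-1]  # mirror right half
        err = np.mean((left - right) ** 2)
        if err < best_err:
            best_col, best_err = c, err
    return best_col
```

The midline *shift* is then the displacement of the actual midline (traced through the deformed ventricles) from this ideal symmetry axis.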
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
3871
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences.
These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
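The over-representation idea that drives motif scoring can be sketched with a binomial tail test: how surprising is it to see at least k occurrences of a motif, given its background frequency? This is a generic illustration of the concept, not SCOPE's exact scoring function.

```python
import math

def overrepresentation_score(k, n, p_background):
    """-log10 of the binomial tail probability of seeing at least k motif
    occurrences in n opportunities, given the motif's background frequency.
    Higher scores mean the motif is more over-represented in the gene set."""
    tail = sum(
        math.comb(n, i) * p_background**i * (1 - p_background)**(n - i)
        for i in range(k, n + 1))
    return -math.log10(max(tail, 1e-300))  # clamp to avoid log10(0)
```

A motif found in 10 of 10 promoters against a 10% background scores 10 (p = 10^-10), whereas a motif at its expected background rate scores near 0; adding genes that raise this score is how a candidate motif's supporting set grows.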
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
2703
Immunoblot Analysis
Authors: Sean Gallagher, Deb Chakavarti.
Institutions: UVP, LLC, Keck Graduate Institute of Applied Life Sciences.
Immunoblotting (western blotting) is a rapid and sensitive assay for the detection and characterization of proteins that works by exploiting the specificity inherent in antigen-antibody recognition. It involves the solubilization and electrophoretic separation of proteins, glycoproteins, or lipopolysaccharides by gel electrophoresis, followed by quantitative transfer and irreversible binding to nitrocellulose, PVDF, or nylon. The immunoblotting technique has been useful in identifying specific antigens recognized by polyclonal or monoclonal antibodies and is highly sensitive (1 ng of antigen can be detected). This unit provides protocols for protein separation, blotting proteins onto membranes, immunoprobing, and visualization using chromogenic or chemiluminescent substrates.
Basic Protocols, Issue 16, Current Protocols Wiley, Immunoblotting, Biochemistry, Western Blotting, chromogenic substrates, chemiluminescent substrates, protein detection.
759
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
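As a rough illustration of how this kind of abstract-to-video matching can work, candidates can be ranked by the cosine similarity of TF-IDF term vectors. This is a hypothetical sketch of one standard text-matching technique, not JoVE's actual algorithm.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency of each term
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Under such a scheme, an abstract about cell imaging would rank an imaging-methods video above an unrelated one, but abstracts with unusual vocabulary, or with no counterpart in the library, produce only weak matches, which is the failure mode described below.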

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In those cases, our algorithms display the most relevant videos available, which can sometimes result in matches with only a slight relation.