JoVE Visualize

PubMed Article
Accelerating the Gillespie τ-Leaping Method using graphics processing units.
PLoS ONE
The Gillespie τ-Leaping Method is an approximate algorithm that is faster than the exact Direct Method (DM) because the simulation advances with larger time steps. However, the procedure to compute the time leap τ is quite expensive. In this paper, we explore the acceleration of the τ-Leaping Method using graphics processing units (GPUs) for ultra-large networks (>0.5 × 10^6 reaction channels). We have developed data structures and algorithms that take advantage of the unique hardware architecture and available libraries. Our results show that we obtain a performance gain of over 60× compared with the best conventional implementations.
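As an illustration of the simulation loop being accelerated, here is a minimal CPU-only sketch of a single τ-leaping iteration in Python/NumPy. It assumes a fixed leap τ and a dense stoichiometry matrix; the paper's GPU data structures, τ-selection procedure, and library usage are not reproduced.

import numpy as np

def tau_leap_step(x, nu, propensity_fn, tau, rng):
    """Advance the state x by one tau-leap.

    x             : (n_species,) integer copy numbers
    nu            : (n_reactions, n_species) stoichiometric change vectors
    propensity_fn : callable returning (n_reactions,) propensities a_j(x)
    tau           : leap length (assumed precomputed; selecting it is the
                    expensive step the paper accelerates on the GPU)
    """
    a = propensity_fn(x)                      # reaction propensities
    k = rng.poisson(a * tau)                  # firings of each channel in [t, t+tau)
    x_new = x + k @ nu                        # apply all state changes at once
    return np.maximum(x_new, 0)               # crude guard against negative counts

# Toy example: A -> B and B -> A
rng = np.random.default_rng(0)
nu = np.array([[-1, 1], [1, -1]])
rates = np.array([0.1, 0.05])
propensities = lambda x: rates * x            # mass-action for unimolecular channels
state = np.array([1000, 0])
for _ in range(100):
    state = tau_leap_step(state, nu, propensities, tau=0.1, rng=rng)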
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Published: 11-02-2012
ABSTRACT
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
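To make the idea of virtual phylogenesis concrete, the toy sketch below grows a "category" by descent with modification from a common ancestor, with shapes reduced to plain parameter vectors. This is a hypothetical stand-in for illustration only, not the VM/VP algorithms of the paper, which operate on simulated embryogenesis of 3-D objects.

import numpy as np

def virtual_phylogenesis(ancestor, n_generations=4, n_offspring=3,
                         mutation_sd=0.05, rng=None):
    """Grow a 'category' of shape vectors by descent with modification.

    ancestor : 1-D array of shape parameters (a stand-in for a digital embryo).
    Returns the newest generation of descendants; because members share a
    lineage, within-category variation arises from the process itself rather
    than from experimenter-imposed rules.
    """
    rng = rng or np.random.default_rng()
    population = [np.asarray(ancestor, dtype=float)]
    for _ in range(n_generations):
        next_gen = []
        for parent in population:
            for _ in range(n_offspring):
                child = parent + rng.normal(0.0, mutation_sd, size=parent.shape)
                next_gen.append(child)
        population = next_gen                 # only the newest generation survives
    return population

category_A = virtual_phylogenesis(ancestor=np.zeros(16), rng=np.random.default_rng(1))
category_B = virtual_phylogenesis(ancestor=np.ones(16),  rng=np.random.default_rng(2))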
24 Related JoVE Articles!
Development of automated imaging and analysis for zebrafish chemical screens.
Authors: Andreas Vogt, Hiba Codore, Billy W. Day, Neil A. Hukriede, Michael Tsang.
Institutions: University of Pittsburgh Drug Discovery Institute, University of Pittsburgh, University of Pittsburgh, University of Pittsburgh.
We demonstrate the application of image-based high-content screening (HCS) methodology to identify small molecules that can modulate the FGF/RAS/MAPK pathway in zebrafish embryos. The zebrafish embryo is an ideal system for in vivo high-content chemical screens. The 1-day-old embryo is approximately 1 mm in diameter and can be easily arrayed into 96-well plates, a standard format for high throughput screening. During the first day of development, embryos are transparent with most of the major organs present, thus enabling visualization of tissue formation during embryogenesis. The complete automation of zebrafish chemical screens is still a challenge, however, particularly in the development of automated image acquisition and analysis. We previously generated a transgenic reporter line that expresses green fluorescent protein (GFP) under the control of FGF activity and demonstrated its utility in chemical screens 1. To establish methodology for high throughput whole organism screens, we developed a system for automated imaging and analysis of zebrafish embryos at 24-48 hours post fertilization (hpf) in 96-well plates 2. In this video we highlight the procedures for arraying transgenic embryos into multiwell plates at 24 hpf and the addition of a small molecule (BCI) that hyperactivates FGF signaling 3. The plates are incubated for 6 hours followed by the addition of tricaine to anesthetize larvae prior to automated imaging on a Molecular Devices ImageXpress Ultra laser scanning confocal HCS reader. Images are processed by Definiens Developer software using a Cognition Network Technology algorithm that we developed to detect and quantify expression of GFP in the heads of transgenic embryos. In this example we highlight the ability of the algorithm to measure dose-dependent effects of BCI on GFP reporter gene expression in treated embryos.
Cellular Biology, Issue 40, Zebrafish, Chemical Screens, Cognition Network Technology, Fibroblast Growth Factor, (E)-2-benzylidene-3-(cyclohexylamino)-2,3-dihydro-1H-inden-1-one (BCI), Tg(dusp6:d2EGFP)
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Authors: Fijoy Vadakkumpadan, Hermenegild Arevalo, Natalia A. Trayanova.
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
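As a sketch of how an error metric of this kind can be computed, the snippet below evaluates the mean absolute angle between corresponding acquired and estimated fiber directions, treating fibers as axial (sign-free) unit vectors. The paper's inclination-angle metric is defined with respect to the local ventricular coordinate system, so this simplified variant is an assumption for illustration.

import numpy as np

def mean_absolute_fiber_angle(acquired, estimated):
    """Mean absolute angle (degrees) between two fields of fiber directions.

    acquired, estimated : (n_voxels, 3) arrays of fiber vectors.
    Fibers have no head or tail, so the absolute value of the dot product
    is taken before the arccosine.
    """
    a = acquired / np.linalg.norm(acquired, axis=1, keepdims=True)
    e = estimated / np.linalg.norm(estimated, axis=1, keepdims=True)
    cos_angle = np.clip(np.abs(np.sum(a * e, axis=1)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_angle)).mean()

# Example with synthetic vectors
rng = np.random.default_rng(0)
truth = rng.normal(size=(1000, 3))
estimate = truth + 0.2 * rng.normal(size=(1000, 3))   # noisy estimate of the truth
print(mean_absolute_fiber_angle(truth, estimate))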
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
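For orientation, the following minimal sketch recursively splits a read-density track wherever a mean shift explains enough variance. It is a simple frequentist stand-in meant only to illustrate what change points defining segments of enrichment look like; it is not the Bayesian infinite-state HMM (BCP) described above, and the min_gain threshold is an arbitrary illustrative parameter.

import numpy as np

def split_point(y):
    """Return (index, score) of the best single mean-shift split of y."""
    n = len(y)
    best_i, best_score = None, 0.0
    total_var = np.var(y) * n
    for i in range(2, n - 2):                 # keep segments of length >= 2
        left, right = y[:i], y[i:]
        within = np.var(left) * len(left) + np.var(right) * len(right)
        score = total_var - within            # variance explained by the split
        if score > best_score:
            best_i, best_score = i, score
    return best_i, best_score

def binary_segmentation(y, min_gain=25.0, offset=0, breaks=None):
    """Recursively collect change points in a 1-D read-density profile."""
    if breaks is None:
        breaks = []
    i, score = split_point(np.asarray(y, dtype=float))
    if i is not None and score > min_gain:
        breaks.append(offset + i)
        binary_segmentation(y[:i], min_gain, offset, breaks)
        binary_segmentation(y[i:], min_gain, offset + i, breaks)
    return sorted(breaks)

# Toy profile: background, an enriched island, then background again
np.random.seed(0)
profile = np.concatenate([np.random.poisson(2, 300),
                          np.random.poisson(10, 150),
                          np.random.poisson(2, 300)])
print(binary_segmentation(profile))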
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Manufacturing of Three-dimensionally Microstructured Nanocomposites through Microfluidic Infiltration
Authors: Rouhollah Dermanaki-Farahani, Louis Laberge Lebel, Daniel Therriault.
Institutions: École Polytechnique de Montréal.
Microstructured composite beams reinforced with complex three-dimensionally (3D) patterned nanocomposite microfilaments are fabricated via nanocomposite infiltration of 3D interconnected microfluidic networks. The manufacturing of the reinforced beams begins with the fabrication of microfluidic networks, which involves layer-by-layer deposition of fugitive ink filaments using a dispensing robot, filling the empty space between filaments using a low-viscosity resin, curing the resin, and finally removing the ink. Self-supported 3D structures with other geometries and many layers (e.g. a few hundred layers) could be built using this method. The resulting tubular microfluidic networks are then infiltrated with thermosetting nanocomposite suspensions containing nanofillers (e.g. single-walled carbon nanotubes), and subsequently cured. The infiltration is done by applying a pressure gradient between the two ends of the empty network (either by applying a vacuum or by vacuum-assisted microinjection). Prior to the infiltration, the nanocomposite suspensions are prepared by dispersing nanofillers into polymer matrices using ultrasonication and three-roll mixing methods. The nanocomposites (i.e., the infiltrated materials) are then solidified under UV exposure/heat cure, resulting in a 3D-reinforced composite structure. The technique presented here enables the design of functional nanocomposite macroscopic products for microengineering applications such as actuators and sensors.
Chemistry, Issue 85, Microstructures, Nanocomposites, 3D-patterning, Infiltration, Direct-write assembly, Microfluidic networks
Bottom-up and Shotgun Proteomics to Identify a Comprehensive Cochlear Proteome
Authors: Lancia N.F. Darville, Bernd H.A. Sokolowski.
Institutions: University of South Florida.
Proteomics is a commonly used approach that can provide insights into complex biological systems. The cochlear sensory epithelium contains receptors that transduce the mechanical energy of sound into electrochemical energy processed by the peripheral and central nervous systems. Several proteomic techniques have been developed to study the cochlea of the inner ear, such as two-dimensional difference gel electrophoresis (2D-DIGE), antibody microarray, and mass spectrometry (MS). MS is the most comprehensive and versatile tool in proteomics and, in conjunction with separation methods, can provide an in-depth proteome of biological samples. Separation methods combined with MS have the ability to enrich protein samples, detect low molecular weight and hydrophobic proteins, and identify low-abundance proteins by reducing the proteome dynamic range. Different digestion strategies can be applied to whole lysate or to fractionated protein lysate to enhance peptide and protein sequence coverage. Different separation techniques, including strong cation exchange (SCX), reversed-phase (RP), and gel-eluted liquid fraction entrapment electrophoresis (GELFrEE), can be applied to reduce sample complexity prior to MS analysis for protein identification.
Biochemistry, Issue 85, Cochlear, chromatography, LC-MS/MS, mass spectrometry, Proteomics, sensory epithelium
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as the activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
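As one concrete example of the activity-space characterization step, the sketch below computes an activity radius for a cleaned GPS track as the mean great-circle distance of the points from their geographic centroid. The choice of the mean (rather than a maximum or standard-distance radius) and the sample coordinates are assumptions for illustration; they are not taken from the toolset described above.

import numpy as np

EARTH_RADIUS_M = 6371000.0

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def activity_radius(lats, lons):
    """Mean distance of track points from their centroid (a simple activity radius)."""
    lat_c, lon_c = np.mean(lats), np.mean(lons)        # adequate for city-scale tracks
    return np.mean(haversine(np.asarray(lats), np.asarray(lons), lat_c, lon_c))

# Example: a short pedestrian track around a block (coordinates are made up)
lats = [40.7431, 40.7435, 40.7440, 40.7436, 40.7430]
lons = [-74.0322, -74.0318, -74.0323, -74.0330, -74.0327]
print(round(activity_radius(lats, lons), 1), "m")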
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Preparation and Pathogen Inactivation of Double Dose Buffy Coat Platelet Products using the INTERCEPT Blood System
Authors: Mohammad R. Abedi, Ann-Charlotte Doverud.
Institutions: Örebro University Hospital.
Blood centers are faced with many challenges, including maximizing production yield from the blood product donations they receive as well as ensuring the highest possible level of safety for transfusion patients, including protection from transfusion-transmitted diseases. This must be accomplished in a fiscally responsible manner that minimizes operating expenses, including consumables, equipment, waste, and personnel costs, among others. Several methods are available to produce platelet concentrates for transfusion. One of the most common is the buffy coat method, in which a single therapeutic platelet unit (≥2.0 × 10^11 platelets per unit, or per local regulations) is prepared by pooling the buffy coat layer from up to six whole blood donations. A procedure for producing "double dose" whole-blood-derived platelets has only recently been developed. Presented here is a novel method for preparing double dose whole-blood-derived platelet concentrates from pools of seven buffy coats and subsequently treating the double dose units with the INTERCEPT Blood System for pathogen inactivation. INTERCEPT was developed to inactivate viruses, bacteria, parasites, and contaminating donor white cells which may be present in donated blood. Pairing INTERCEPT with the double dose buffy coat method by utilizing the INTERCEPT Processing Set with Dual Storage Containers (the "DS set") allows blood centers to treat each of their double dose units in a single pathogen inactivation processing set, thereby maximizing patient safety while minimizing costs. The double dose buffy coat method requires fewer buffy coats and reduces the use of consumables by up to 50% (e.g. pooling sets, filter sets, platelet additive solution, and sterile connection wafers) compared to preparation and treatment of single dose buffy coat platelet units. Other cost savings include less waste, less equipment maintenance, lower power requirements, reduced personnel time, and lower collection cost compared to the apheresis technique.
Medicine, Issue 70, Immunology, Hematology, Infectious Disease, Pathology, pathogen inactivation, pathogen reduction, double-dose platelets, INTERCEPT Blood System, amotosalen, UVA, platelet, blood processing, buffy coat, IBS, transfusion
Isolation of Native Soil Microorganisms with Potential for Breaking Down Biodegradable Plastic Mulch Films Used in Agriculture
Authors: Graham Bailes, Margaret Lind, Andrew Ely, Marianne Powell, Jennifer Moore-Kucera, Carol Miles, Debra Inglis, Marion Brodhagen.
Institutions: Western Washington University, Washington State University Northwestern Research and Extension Center, Texas Tech University.
Fungi native to agricultural soils that colonized commercially available biodegradable mulch (BDM) films were isolated and assessed for potential to degrade plastics. Typically, when formulations of plastics are known and a source of the feedstock is available, powdered plastic can be suspended in agar-based media and degradation determined by visualization of clearing zones. However, this approach poorly mimics in situ degradation of BDMs. First, BDMs are not dispersed as small particles throughout the soil matrix. Secondly, BDMs are not sold commercially as pure polymers, but rather as films containing additives (e.g. fillers, plasticizers and dyes) that may affect microbial growth. The procedures described herein were used for isolates acquired from soil-buried mulch films. Fungal isolates acquired from excavated BDMs were tested individually for growth on pieces of new, disinfested BDMs laid atop defined medium containing no carbon source except agar. Isolates that grew on BDMs were further tested in liquid medium where BDMs were the sole added carbon source. After approximately ten weeks, fungal colonization and BDM degradation were assessed by scanning electron microscopy. Isolates were identified via analysis of ribosomal RNA gene sequences. This report describes methods for fungal isolation, but bacteria also were isolated using these methods by substituting media appropriate for bacteria. Our methodology should prove useful for studies investigating breakdown of intact plastic films or products for which plastic feedstocks are either unknown or not available. However our approach does not provide a quantitative method for comparing rates of BDM degradation.
Microbiology, Issue 75, Plant Biology, Environmental Sciences, Agricultural Sciences, Soil Science, Molecular Biology, Cellular Biology, Genetics, Mycology, Fungi, Bacteria, Microorganisms, Biodegradable plastic, biodegradable mulch, compostable plastic, compostable mulch, plastic degradation, composting, breakdown, soil, 18S ribosomal DNA, isolation, culture
Writing and Low-Temperature Characterization of Oxide Nanostructures
Authors: Akash Levy, Feng Bi, Mengchen Huang, Shicheng Lu, Michelle Tomczyk, Guanglei Cheng, Patrick Irvin, Jeremy Levy.
Institutions: University of Pittsburgh.
Oxide nanoelectronics is a rapidly growing field that seeks to develop novel materials with multifunctional behavior at nanoscale dimensions. Oxide interfaces exhibit a wide range of properties that can be controlled, including conduction, piezoelectric behavior, ferromagnetism, superconductivity, and nonlinear optical properties. Recently, methods for controlling these properties at extreme nanoscale dimensions have been discovered and developed. Here, explicit step-by-step procedures are described for creating LaAlO3/SrTiO3 nanostructures using a reversible conductive atomic force microscopy technique. The processing steps for creating electrical contacts to the LaAlO3/SrTiO3 interface are first described. Conductive nanostructures are created by applying voltages to a conductive atomic force microscope tip and locally switching the LaAlO3/SrTiO3 interface to a conductive state. A versatile nanolithography toolkit has been developed expressly for the purpose of controlling the atomic force microscope (AFM) tip path and voltage. Then, these nanostructures are placed in a cryostat and transport measurements are performed. The procedures described here should be useful to others wishing to conduct research in oxide nanoelectronics.
Physics, Issue 89, oxide nanostructures; nanoelectronics; semiconductor; LaAlO3/SrTiO3; lithography; AFM nanolithography
Lensless Fluorescent Microscopy on a Chip
Authors: Ahmet F. Coskun, Ting-Wei Su, Ikbal Sencan, Aydogan Ozcan.
Institutions: University of California, Los Angeles .
On-chip lensless imaging in general aims to replace bulky lens-based optical microscopes with simpler and more compact designs, especially for high-throughput screening applications. This emerging technology platform has the potential to eliminate the need for bulky and/or costly optical components through the help of novel theories and digital reconstruction algorithms. Along the same lines, here we demonstrate an on-chip fluorescent microscopy modality that can achieve e.g., <4μm spatial resolution over an ultra-wide field-of-view (FOV) of >0.6-8 cm2 without the use of any lenses, mechanical-scanning or thin-film based interference filters. In this technique, fluorescent excitation is achieved through a prism or hemispherical-glass interface illuminated by an incoherent source. After interacting with the entire object volume, this excitation light is rejected by total-internal-reflection (TIR) process that is occurring at the bottom of the sample micro-fluidic chip. The fluorescent emission from the excited objects is then collected by a fiber-optic faceplate or a taper and is delivered to an optoelectronic sensor array such as a charge-coupled-device (CCD). By using a compressive-sampling based decoding algorithm, the acquired lensfree raw fluorescent images of the sample can be rapidly processed to yield e.g., <4μm resolution over an FOV of >0.6-8 cm2. Moreover, vertically stacked micro-channels that are separated by e.g., 50-100 μm can also be successfully imaged using the same lensfree on-chip microscopy platform, which further increases the overall throughput of this modality. This compact on-chip fluorescent imaging platform, with a rapid compressive decoder behind it, could be rather valuable for high-throughput cytometry, rare-cell research and microarray-analysis.
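The compressive-sampling decoder is the computational core of this platform. As a rough illustration of the principle only (not the authors' decoder), the sketch below recovers a sparse emitter distribution x from lensfree measurements y ≈ Ax by iterative soft thresholding (ISTA), assuming the system response matrix A is known; how A is measured is specific to the instrument and is not shown.

import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy problem: 20 active emitters out of 400 unknowns, 150 measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(150, 400)) / np.sqrt(150)
x_true = np.zeros(400)
x_true[rng.choice(400, size=20, replace=False)] = rng.uniform(1, 2, size=20)
y = A @ x_true + 0.01 * rng.normal(size=150)
x_hat = ista(A, y, lam=0.05)
print("recovered support size:", int(np.sum(x_hat > 0.1)))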
Bioengineering, Issue 54, Lensless Microscopy, Fluorescent On-chip Imaging, Wide-field Microscopy, On-Chip Cytometry, Compressive Sampling/Sensing
Electron Cryotomography of Bacterial Cells
Authors: Songye Chen, Alasdair McDowall, Megan J. Dobro, Ariane Briegel, Mark Ladinsky, Jian Shi, Elitza I. Tocheva, Morgan Beeby, Martin Pilhofer, H. Jane Ding, Zhuo Li, Lu Gan, Dylan M. Morris, Grant J. Jensen.
Institutions: California Institute of Technology - Caltech, California Institute of Technology - Caltech.
While much is already known about the basic metabolism of bacterial cells, many fundamental questions are still surprisingly unanswered, including for instance how they generate and maintain specific cell shapes, establish polarity, segregate their genomes, and divide. In order to understand these phenomena, imaging technologies are needed that bridge the resolution gap between fluorescence light microscopy and higher-resolution methods such as X-ray crystallography and NMR spectroscopy. Electron cryotomography (ECT) is an emerging technology that does just this, allowing the ultrastructure of cells to be visualized in a near-native state, in three dimensions (3D), with "macromolecular" resolution (~4nm).1, 2 In ECT, cells are imaged in a vitreous, "frozen-hydrated" state in a cryo transmission electron microscope (cryoTEM) at low temperature (< -180°C). For slender cells (up to ~500 nm in thickness3), intact cells are plunge-frozen within media across EM grids in cryogens such as ethane or ethane/propane mixtures. Thicker cells and biofilms can also be imaged in a vitreous state by first "high-pressure freezing" and then, "cryo-sectioning" them. A series of two-dimensional projection images are then collected through the sample as it is incrementally tilted along one or two axes. A three-dimensional reconstruction, or "tomogram" can then be calculated from the images. While ECT requires expensive instrumentation, in recent years, it has been used in a few labs to reveal the structures of various external appendages, the structures of different cell envelopes, the positions and structures of cytoskeletal filaments, and the locations and architectures of large macromolecular assemblies such as flagellar motors, internal compartments and chemoreceptor arrays.1, 2 In this video article we illustrate how to image cells with ECT, including the processes of sample preparation, data collection, tomogram reconstruction, and interpretation of the results through segmentation and in some cases correlation with light microscopy.
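To illustrate how a tomogram is calculated from a tilt series, here is a deliberately simplified 2-D example: parallel-beam projections of a slice are simulated over a limited tilt range and combined by plain (unfiltered) back-projection. Real ECT reconstruction is done in 3-D with weighted back-projection or iterative methods, so this is a conceptual sketch only.

import numpy as np
from scipy.ndimage import rotate

def project(image, angle_deg):
    """Parallel-beam projection of a 2-D slice at the given tilt angle."""
    return rotate(image, angle_deg, reshape=False, order=1).sum(axis=0)

def back_project(projections, angles_deg, size):
    """Unfiltered back-projection: smear each projection across the volume."""
    recon = np.zeros((size, size))
    for p, angle in zip(projections, angles_deg):
        smear = np.tile(p, (size, 1))                        # constant along the beam
        recon += rotate(smear, -angle, reshape=False, order=1)
    return recon / len(angles_deg)

# Phantom slice and a +/-60 degree tilt series (a typical ECT tilt range)
size = 128
phantom = np.zeros((size, size))
phantom[40:60, 50:90] = 1.0
angles = np.arange(-60, 61, 2)
tilt_series = [project(phantom, a) for a in angles]
tomogram = back_project(tilt_series, angles, size)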
Cellular Biology, Issue 39, Electron cryotomography, microbiology, bacteria, electron microscopy
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Authors: Hans Maaswinkel, Liqun Zhu, Wei Weng.
Institutions: xyZfish.
Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesions when testing shoals.
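The central calibration issue is refraction at the water-air interface, which makes submerged objects appear shallower than they are. As a minimal sketch of the idea (the system's actual calibration is more general), the function below corrects an apparent depth using Snell's law, assuming a flat interface and a known viewing angle; both assumptions are simplifications.

import numpy as np

def true_depth(apparent_depth, view_angle_deg, n_air=1.000, n_water=1.333):
    """Correct the apparent depth of an underwater point seen from air.

    apparent_depth : depth inferred by straight-line (no refraction) triangulation
    view_angle_deg : ray angle from the vertical, measured in air
    Snell's law gives the in-water ray angle; matching the horizontal offset at
    the interface yields the corrected depth.  At normal incidence this reduces
    to the familiar factor n_water / n_air (about 1.33).
    """
    theta_air = np.radians(view_angle_deg)
    theta_water = np.arcsin(np.sin(theta_air) * n_air / n_water)
    if np.isclose(theta_air, 0.0):
        return apparent_depth * n_water / n_air
    return apparent_depth * np.tan(theta_air) / np.tan(theta_water)

print(true_depth(10.0, 0.0))    # a fish that appears 10 cm deep is at about 13.3 cm
print(true_depth(10.0, 25.0))   # the correction grows with viewing angle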
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon.
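To give a feel for the molecule-by-molecule reconstruction, the sketch below runs a very basic localization pass on one camera frame: candidate molecules are detected as local maxima above a threshold and localized by an intensity-weighted centroid in a small window. rainSTORM and other production tools use Gaussian fitting and quality filtering; the threshold and window size here are arbitrary illustrative values.

import numpy as np
from scipy.ndimage import maximum_filter

def localize_frame(frame, threshold, window=3):
    """Return (row, col) sub-pixel positions of bright spots in one frame."""
    peaks = (frame == maximum_filter(frame, size=2 * window + 1)) & (frame > threshold)
    positions = []
    for r, c in zip(*np.nonzero(peaks)):
        r0, r1 = max(r - window, 0), min(r + window + 1, frame.shape[0])
        c0, c1 = max(c - window, 0), min(c + window + 1, frame.shape[1])
        patch = frame[r0:r1, c0:c1].astype(float)
        rows, cols = np.mgrid[r0:r1, c0:c1]
        total = patch.sum()
        positions.append((np.sum(rows * patch) / total, np.sum(cols * patch) / total))
    return positions

# Synthetic frame with two blinking molecules on a noisy background
rng = np.random.default_rng(0)
frame = rng.poisson(10, size=(64, 64)).astype(float)
frame[20, 20] += 200
frame[40, 45] += 180
print(localize_frame(frame, threshold=100))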
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Authors: Laura Ekstrand, Nikolaus Karpinsky, Yajun Wang, Song Zhang.
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
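The depth computation rests on recovering the phase of the projected sinusoid from the three fringe images. Assuming the three patterns are shifted by 120 degrees (a common choice; other DFP variants differ), the wrapped phase follows from the standard three-step formula sketched below; phase unwrapping and the phase-to-depth calibration are omitted.

import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -120, 0, +120 degrees.

    Each image satisfies I_k = I' + I'' * cos(phi + delta_k); solving the three
    equations gives phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3).
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: recover a known phase ramp
phi = np.linspace(-np.pi, np.pi, 512)
bias, mod = 120.0, 100.0
imgs = [bias + mod * np.cos(phi + d) for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
print(np.allclose(three_step_phase(*imgs), phi, atol=1e-9))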
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
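The keyword list mentions minimum-norm estimation. As a compact sketch of that inverse step, independent of any particular analysis package, the function below computes a Tikhonov-regularized minimum-norm estimate of source currents from sensor data, given a lead field derived from the head model. Noise-covariance whitening and depth weighting, which production pipelines add, are omitted, and the regularization value is illustrative.

import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    """Regularized minimum-norm source estimate.

    leadfield : (n_channels, n_sources) forward model from the head model
    data      : (n_channels, n_times) EEG measurements
    Solves J = L^T (L L^T + lam^2 I)^-1 Y, the classic MNE solution.
    """
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam ** 2 * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, data)

# Toy example: 32 channels, 500 candidate sources, one active source
rng = np.random.default_rng(0)
L = rng.normal(size=(32, 500))
j_true = np.zeros((500, 1))
j_true[123] = 1.0
y = L @ j_true + 0.05 * rng.normal(size=(32, 1))
j_hat = minimum_norm_estimate(L, y)
print("strongest estimated source:", int(np.argmax(np.abs(j_hat))))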
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on the average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
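The first analysis step extracts oriented texture with a bank of Gabor filters. Below is a minimal sketch of that step only: a real Gabor kernel is built for each orientation and the per-pixel dominant orientation is taken from the filter responses. The kernel parameters are illustrative rather than the values used in the paper, and the subsequent phase-portrait modeling is not shown.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Real (cosine) Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * x_t / wavelength)
    return g - g.mean()                       # zero-mean so flat regions respond weakly

def orientation_map(image, n_orientations=8):
    """Per-pixel dominant orientation from a bank of Gabor filters."""
    thetas = np.arange(n_orientations) * np.pi / n_orientations
    responses = np.stack([np.abs(fftconvolve(image, gabor_kernel(t), mode="same"))
                          for t in thetas])
    return thetas[np.argmax(responses, axis=0)]

# Synthetic test: a pattern of horizontal lines
img = np.zeros((128, 128))
img[::6, :] = 1.0
print(np.degrees(orientation_map(img)[64, 64]))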
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
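The quantitative analysis relies on binarized and skeletonized image data. As a minimal sketch of that processing step, assuming scikit-image is available (the published workflow uses its own optimized pipeline and open-access tools), a membrane-stained image could be thresholded and skeletonized as follows; the Gaussian sigma and the use of a global Otsu threshold are illustrative choices.

import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.morphology import skeletonize

def tats_skeleton(image):
    """Binarize a membrane-stained image and reduce it to a 1-pixel-wide skeleton."""
    smoothed = gaussian(image, sigma=1.0)            # suppress shot noise
    binary = smoothed > threshold_otsu(smoothed)     # global Otsu threshold
    return skeletonize(binary)

def network_density(skeleton):
    """Fraction of pixels belonging to the tubule skeleton (a simple network metric)."""
    return skeleton.sum() / skeleton.size

# Synthetic example: a grid of 'tubules' with added noise
img = np.zeros((256, 256))
img[::16, :] = 1.0
img[:, ::16] = 1.0
skel = tats_skeleton(img + 0.05 * np.random.default_rng(0).normal(size=img.shape))
print(round(network_density(skel), 4))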
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in a variate fashion, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveal information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
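The central voxelwise metric here is fractional anisotropy. For reference, a small sketch of its computation from a single diffusion tensor is given below; eigenvalue clamping and the handling of background voxels are simplified, and the example tensor values are merely typical of coherent white matter.

import numpy as np

def fractional_anisotropy(tensor):
    """FA of a single 3x3 diffusion tensor.

    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where lambda are the
    eigenvalues and MD is their mean (the mean diffusivity).
    """
    lam = np.clip(np.linalg.eigvalsh(tensor), 0.0, None)   # non-negative eigenvalues
    md = lam.mean()
    denom = np.sqrt(np.sum(lam ** 2))
    if denom == 0.0:
        return 0.0                                          # background voxel
    return np.sqrt(1.5 * np.sum((lam - md) ** 2)) / denom

# Example: a strongly anisotropic tensor (units of mm^2/s)
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(round(fractional_anisotropy(D), 3))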
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
X-ray Dose Reduction through Adaptive Exposure in Fluoroscopic Imaging
Authors: Steve Burion, Tobias Funk.
Institutions: Triple Ring Technologies.
X-ray fluoroscopy is widely used for image guidance during cardiac intervention. However, radiation dose in these procedures can be high, and this is a significant concern, particularly in pediatric applications. Pediatric procedures are in general much more complex than those performed on adults and thus are on average four to eight times longer1. Furthermore, children can undergo up to 10 fluoroscopic procedures by the age of 10, and have been shown to have a three-fold higher risk of developing fatal cancer throughout their life than the general population2,3. We have shown that radiation dose can be significantly reduced in adult cardiac procedures by using our scanning beam digital x-ray (SBDX) system4, a fluoroscopic imaging system that employs an inverse imaging geometry5,6 (Figure 1, Movie 1, and Figure 2). Instead of a single focal spot and an extended detector as used in conventional systems, our approach utilizes an extended X-ray source with multiple focal spots focused on a small detector. Our X-ray source consists of a scanning electron beam sequentially illuminating up to 9,000 focal spot positions. Each focal spot projects a small portion of the imaging volume onto the detector. In contrast to a conventional system, where the final image is directly projected onto the detector, the SBDX uses a dedicated algorithm to reconstruct the final image from the 9,000 detector images. For pediatric applications, dose savings with the SBDX system are expected to be smaller than in adult procedures. However, the SBDX system allows for additional dose savings by implementing an electronic adaptive exposure technique. Key to this method is the multi-beam scanning technique of the SBDX system: rather than exposing every part of the image with the same radiation dose, we can dynamically vary the exposure depending on the opacity of the region exposed. Therefore, we can significantly reduce exposure in radiolucent areas and maintain exposure in more opaque regions. In our current implementation, the adaptive exposure requires user interaction (Figure 3). However, in the future, the adaptive exposure will be real-time and fully automatic. We have performed experiments with an anthropomorphic phantom and compared measured radiation dose with and without adaptive exposure using a dose area product (DAP) meter. In the experiment presented here, we find a dose reduction of 30%.
Bioengineering, Issue 55, Scanning digital X-ray, fluoroscopy, pediatrics, interventional cardiology, adaptive exposure, dose savings
Improving IV Insulin Administration in a Community Hospital
Authors: Michael C. Magee.
Institutions: Wyoming Medical Center.
Diabetes mellitus is a major independent risk factor for increased morbidity and mortality in the hospitalized patient, and elevated blood glucose concentrations, even in non-diabetic patients, predict poor outcomes.1-4 The 2008 consensus statement by the American Association of Clinical Endocrinologists (AACE) and the American Diabetes Association (ADA) states that "hyperglycemia in hospitalized patients, irrespective of its cause, is unequivocally associated with adverse outcomes."5 It is important to recognize that hyperglycemia occurs in patients with known or undiagnosed diabetes as well as during acute illness in those with previously normal glucose tolerance. The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study involved over six thousand adult intensive care unit (ICU) patients who were randomized to intensive glucose control or conventional glucose control.6 Surprisingly, this trial found that intensive glucose control increased the risk of mortality by 14% (odds ratio, 1.14; p=0.02). In addition, there was an increased prevalence of severe hypoglycemia in the intensive control group compared with the conventional control group (6.8% vs. 0.5%, respectively; p<0.001). From this pivotal trial and two others,7,8 Wyoming Medical Center (WMC) realized the importance of controlling hyperglycemia in the hospitalized patient while avoiding the negative impact of resultant hypoglycemia. Despite multiple revisions of an IV insulin paper protocol, analysis of data from usage of the paper protocol at WMC shows that, in terms of achieving normoglycemia while minimizing hypoglycemia, results were suboptimal. Therefore, through a systematic implementation plan, monitoring of patient blood glucose levels was switched from using a paper IV insulin protocol to a computerized glucose management system. By comparing blood glucose levels under the paper protocol with those under the computerized system, it was determined that, overall, the computerized glucose management system resulted in more rapid and tighter glucose control than the traditional paper protocol. Specifically, a substantial increase in the time spent within the target blood glucose concentration range, as well as a decrease in the prevalence of severe hypoglycemia (BG < 40 mg/dL), clinical hypoglycemia (BG < 70 mg/dL), and hyperglycemia (BG > 180 mg/dL), was witnessed in the first five months after implementation of the computerized glucose management system. The computerized system achieved target concentrations in greater than 75% of all readings while minimizing the risk of hypoglycemia. The prevalence of hypoglycemia (BG < 70 mg/dL) with the use of the computerized glucose management system was well under 1%.
Medicine, Issue 64, Physiology, Computerized glucose management, Endotool, hypoglycemia, hyperglycemia, diabetes, IV insulin, paper protocol, glucose control
Enabling High Grayscale Resolution Displays and Accurate Response Time Measurements on Conventional Computers
Authors: Xiangrui Li, Zhong-Lin Lu.
Institutions: The Ohio State University, University of Southern California, University of Southern California, University of Southern California, The Ohio State University.
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bit++ 1 and DataPixx 2 use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher 3 described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network 4 and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements. Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events. Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at http://lobes.usc.edu/.
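To illustrate the channel-combination idea (not the exact VideoSwitcher circuit), suppose the blue channel is attenuated by a hypothetical factor k relative to red before the two are summed, so that displayed luminance is proportional to R + B/k. A high-resolution gray level can then be split into a coarse part sent to the red channel and a fine remainder sent to the blue channel; k, the DAC range, and the resulting bit depth below are assumptions for illustration.

def encode_gray_level(g, k=128.0, max_dac=255):
    """Split a high-resolution gray level into coarse (red) and fine (blue) parts.

    Assumes displayed luminance ~ R + B / k for a hypothetical attenuation
    factor k; with k = 128 and 8-bit DACs this yields roughly 15 bits of
    distinct luminance steps.  Values are illustrative, not the device's.
    """
    r = min(int(g), max_dac)                  # coarse component, 0..255
    b = min(int(round((g - r) * k)), max_dac) # fine remainder, scaled up for the blue DAC
    return r, b

def decode_gray_level(r, b, k=128.0):
    """Luminance reproduced by the combined channels (same units as g)."""
    return r + b / k

for g in (0.0, 100.0, 100.5, 100.996, 255.0):
    r, b = encode_gray_level(g)
    print(g, (r, b), round(decode_gray_level(r, b), 4))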
Neuroscience, Issue 60, VideoSwitcher, Visual stimulus, Luminance resolution, Contrast, Trigger, RTbox, Response time
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.