JoVE Visualize
 
PubMed Article
StochPy: A Comprehensive, User-Friendly Tool for Simulating Stochastic Biological Processes.
PLoS ONE
PUBLISHED: 01-01-2013
Single-cell and single-molecule measurements indicate the importance of stochastic phenomena in cell biology. Stochasticity creates spontaneous differences in the copy numbers of key macromolecules and the timing of reaction events between genetically identical cells. Mathematical models are indispensable for the study of phenotypic stochasticity in cellular decision-making and cell survival. There is a demand for versatile, stochastic modeling environments with extensive, preprogrammed statistics functions and plotting capabilities that hide the mathematics from novice users and offer low-level programming access to experienced users. Here we present StochPy (Stochastic modeling in Python), a flexible software tool for stochastic simulation in cell biology. It provides various stochastic simulation algorithms, SBML support, analyses of the probability distributions of molecule copy numbers and event waiting times, analyses of stochastic time series, and a range of additional statistical functions and plotting facilities for stochastic simulations. We illustrate the functionality of StochPy with stochastic models of gene expression, cell division, and single-molecule enzyme kinetics. StochPy has been tested against the SBML stochastic test suite, passing all tests. StochPy is a comprehensive software package for stochastic simulation of the molecular control networks of living cells. It allows novice and experienced users to study stochastic phenomena in cell biology. The integration with other Python software makes StochPy both a user-friendly and an easily extendible simulation tool.
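To make the described workflow concrete, here is a minimal sketch of a StochPy session. It assumes the high-level interface documented in the StochPy user guide (stochpy.SSA() with its built-in demo model); treat the exact method names as illustrative and check the current documentation before relying on them.

    # Minimal StochPy session (a sketch based on the documented high-level API)
    import stochpy

    smod = stochpy.SSA()                    # SSA module with a built-in demo model
    smod.DoStochSim(method="Direct",        # Gillespie's direct method
                    trajectories=1,
                    mode="time", end=100)   # simulate for 100 time units
    smod.PrintSpeciesMeans()                # copy-number statistics
    smod.PlotSpeciesTimeSeries()            # preprogrammed plotting facilities
    smod.PlotWaitingtimesDistributions()    # event waiting-time analysis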
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Published: 11-03-2011
ABSTRACT
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
24 Related JoVE Articles!
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
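For readers who want to see the algorithm itself, the following is a minimal numpy sketch of MSE: coarse-grain the signal by non-overlapping averaging at each scale, then compute the sample entropy of the coarse-grained series. The parameter choices (m = 2, r = 0.15 of the original signal's standard deviation) are common defaults used here for illustration, not prescriptions from the article.

    import numpy as np

    def _match_pairs(x, m, tol):
        """Count template pairs of length m within Chebyshev distance tol.
        (Builds an O(n^2) distance matrix; fine for short demo signals.)"""
        emb = np.array([x[i:len(x) - m + i + 1] for i in range(m)]).T
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return ((d <= tol).sum() - len(emb)) / 2.0     # exclude self-matches

    def sample_entropy(x, m, tol):
        """SampEn = -ln(A/B), with A and B the match counts at m+1 and m."""
        B, A = _match_pairs(x, m, tol), _match_pairs(x, m + 1, tol)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    def multiscale_entropy(x, scales=range(1, 21), m=2, r=0.15):
        """MSE curve: coarse-grain by non-overlapping means, then SampEn."""
        x = np.asarray(x, dtype=float)
        tol = r * x.std()            # tolerance fixed from the original signal
        curve = []
        for s in scales:
            n = len(x) // s
            coarse = x[:n * s].reshape(n, s).mean(axis=1)
            curve.append(sample_entropy(coarse, m, tol))
        return np.array(curve)

    # White noise loses entropy with increasing scale, whereas temporally
    # correlated signals do not; that contrast is what MSE detects.
    rng = np.random.default_rng(0)
    print(multiscale_entropy(rng.normal(size=3000), scales=[1, 5, 10]))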
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Single Cell Transcriptional Profiling of Adult Mouse Cardiomyocytes
Authors: James M. Flynn, Luis F. Santana, Simon Melov.
Institutions: Buck Institute for Research on Aging, University of Washington.
While numerous studies have examined gene expression changes in homogenates of heart tissue, this approach cannot capture the inherent stochastic variation between cells within a tissue. Isolation of pure cardiomyocyte populations through a collagenase perfusion of mouse hearts facilitates the generation of single-cell microarrays for whole-transcriptome gene expression, or qPCR of specific targets using nanofluidic arrays. We describe here a procedure to examine single-cell gene expression profiles of cardiomyocytes isolated from the heart. This paradigm allows for the evaluation of metrics that are not reliant on the mean (for example, variance between cells within a tissue), which is not possible with conventional whole-tissue workflows for the evaluation of gene expression (Figure 1). We have achieved robust amplification of the single-cell transcriptome, yielding micrograms of double-stranded cDNA that facilitates the use of microarrays on individual cells. In the procedure we describe the use of NimbleGen arrays, which were selected for their ease of use and customizable design. Alternatively, a reverse transcriptase-specific target amplification (RT-STA) reaction allows for qPCR of hundreds of targets by nanofluidic PCR. Using either of these approaches, it is possible to examine the variability of expression between cells, as well as the expression profiles of rare cell types within a tissue. Overall, the single-cell gene expression approach allows for the generation of data that can identify idiosyncratic expression profiles that are typically averaged out when examining the expression of millions of cells in typical homogenates generated from whole tissues.
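The point about mean-independent metrics is easy to make concrete in code: given a cells-by-genes expression matrix, between-cell variance is directly computable from single-cell profiles but invisible in a pooled homogenate. The numbers below are simulated purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    # Two simulated genes measured in 200 cells: both have the same mean,
    # but one is far noisier between cells (e.g., a bimodally expressed gene).
    stable = rng.normal(100, 5, size=200)
    noisy = np.where(rng.random(200) < 0.5,
                     rng.normal(60, 5, 200), rng.normal(140, 5, 200))
    cells = np.column_stack([stable, noisy])          # (200 cells, 2 genes)

    bulk_mean = cells.mean(axis=0)       # all a whole-tissue homogenate reports
    cv = cells.std(axis=0) / bulk_mean   # only measurable cell-by-cell
    print(bulk_mean)                     # ~[100, 100]: identical in bulk
    print(cv)                            # ~[0.05, 0.4]: variation revealed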
Molecular Biology, Issue 58, Single cell analysis, Microarray, Gene expression, Cardiomyocyte, Mouse heart perfusion, mice, qPCR
Single-molecule Imaging of Gene Regulation In vivo Using Cotranslational Activation by Cleavage (CoTrAC)
Authors: Zach Hensel, Xiaona Fang, Jie Xiao.
Institutions: Johns Hopkins University School of Medicine, Chinese Academy of Sciences, Jilin University.
We describe a fluorescence microscopy method, Co-Translational Activation by Cleavage (CoTrAC), to image the production of protein molecules in live cells with single-molecule precision without perturbing the protein's functionality. This method makes it possible to count the number of protein molecules produced in one cell during sequential, five-minute time windows. It requires a fluorescence microscope with a laser excitation power density of ~0.5-1 kW/cm², which is sufficiently sensitive to detect single fluorescent protein molecules in live cells. The fluorescent reporter used in this method consists of three parts: a membrane targeting sequence, a fast-maturing yellow fluorescent protein, and a protease recognition sequence. The reporter is translationally fused to the N-terminus of a protein of interest. Cells are grown on a temperature-controlled microscope stage. Every five minutes, fluorescent molecules within cells are imaged (and later counted by analyzing the fluorescence images) and subsequently photobleached so that only newly translated proteins are counted in the next measurement. The resulting fluorescence images are analyzed by detecting fluorescent spots in each image, assigning them to individual cells, and then assigning cells to cell lineages. The number of proteins produced within a time window in a given cell is calculated by dividing the integrated fluorescence intensity of the spots by the average intensity of a single fluorescent molecule. We used this method to measure expression levels in the range of 0-45 molecules per single 5-min time window. This method enabled us to measure noise in the expression of the λ repressor CI, and it has many other potential applications in systems biology.
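The counting step described above reduces to simple arithmetic once spots are detected and a single-molecule intensity has been calibrated; a sketch (all numbers and names are illustrative):

    import numpy as np

    def count_new_proteins(spot_intensities, single_molecule_intensity):
        """Proteins produced in one 5-min window in one cell: the summed
        (background-corrected) spot intensity divided by the mean intensity
        of one fluorescent molecule, calibrated separately."""
        return np.sum(spot_intensities) / single_molecule_intensity

    spots = np.array([1150.0, 2480.0, 1210.0, 3590.0])   # 4 detected spots
    print(round(count_new_proteins(spots, 1200.0)))      # ~7 molecules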
Biophysics, Issue 73, Biochemistry, Genetics, Chemistry, Molecular Biology, Cellular Biology, Microbiology, Proteins, Single molecule, fluorescence protein, protein expression, cotranslational activation, CoTrAC, cell culture, fluorescent microscopy, imaging, translational activation, systems biology
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
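A toy rendition of the virtual phylogenesis idea may help: the article's algorithms operate on simulated 3-D embryo-like shapes, but the category-generating principle (descent with modification; selection is omitted here for brevity) can be sketched with plain feature vectors standing in for shapes.

    import numpy as np

    rng = np.random.default_rng(5)

    def virtual_phylogenesis(parent, depth, n_children=2, step=0.3):
        """Recursively generate descendant 'shapes'; each subtree of the
        lineage is a naturalistic category of related shapes."""
        if depth == 0:
            return [parent]
        shapes = []
        for _ in range(n_children):
            child = parent + rng.normal(0, step, parent.shape)  # variation
            shapes += virtual_phylogenesis(child, depth - 1, n_children, step)
        return shapes

    ancestor = rng.normal(size=16)       # illustrative 16-D shape descriptor
    category_a = virtual_phylogenesis(ancestor + 1.0, depth=4)
    category_b = virtual_phylogenesis(ancestor - 1.0, depth=4)
    print(len(category_a), len(category_b))   # 16 shapes per category
    # Within-category variability arises from the branching process itself,
    # not from experimenter-imposed transformations.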
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Researchers across incredibly diverse fields are applying phylogenetics to their research questions. However, many of these researchers are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve as a practical training tool for researchers embarking on phylogenetic studies, and also as an educational resource that could be incorporated into a classroom or teaching lab.
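As a condensed stand-in for the pipeline, the sketch below strings the stages together with Biopython. The article itself walks through dedicated programs for each stage; here the file names are illustrative, the alignment is assumed to have been produced by an external tool (MUSCLE, MAFFT, etc.), and a quick neighbor-joining tree substitutes for the maximum likelihood and Bayesian reconstructions.

    from Bio.Blast import NCBIWWW                  # stage 1: similarity search
    from Bio import AlignIO, Phylo
    from Bio.Phylo.TreeConstruction import (DistanceCalculator,
                                            DistanceTreeConstructor)

    # 1. BLAST the query against nr via NCBI's web service (network call);
    #    homologs would then be parsed from blast_handle and saved to FASTA.
    with open("query.fasta") as fh:
        blast_handle = NCBIWWW.qblast("blastp", "nr", fh.read())

    # 2-3. Load the externally produced multiple sequence alignment.
    alignment = AlignIO.read("homologs_aligned.fasta", "fasta")

    # 4. Distance-based neighbor-joining tree (a stand-in for ML/Bayesian).
    calculator = DistanceCalculator("identity")
    tree = DistanceTreeConstructor(calculator, "nj").build_tree(alignment)

    # 5. Visualize.
    Phylo.draw_ascii(tree)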
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Tissue-simulating Phantoms for Assessing Potential Near-infrared Fluorescence Imaging Applications in Breast Cancer Surgery
Authors: Rick Pleijhuis, Arwin Timmermans, Johannes De Jong, Esther De Boer, Vasilis Ntziachristos, Gooitzen Van Dam.
Institutions: University Medical Center Groningen, Technical University of Munich.
Inaccuracies in intraoperative tumor localization and evaluation of surgical margin status result in suboptimal outcomes of breast-conserving surgery (BCS). Optical imaging, in particular near-infrared fluorescence (NIRF) imaging, might reduce the frequency of positive surgical margins following BCS by providing the surgeon with a tool for real-time pre- and intraoperative tumor localization. In the current study, the potential of NIRF-guided BCS is evaluated using tissue-simulating breast phantoms for standardization and training purposes. Breast phantoms with optical characteristics comparable to those of normal breast tissue were used to simulate breast-conserving surgery. Tumor-simulating inclusions containing the fluorescent dye indocyanine green (ICG) were incorporated in the phantoms at predefined locations and imaged for pre- and intraoperative tumor localization, real-time NIRF-guided tumor resection, NIRF-guided evaluation of the extent of surgery, and postoperative assessment of surgical margins. A customized NIRF camera was used as a clinical prototype for imaging purposes. Breast phantoms containing tumor-simulating inclusions offer a simple, inexpensive, and versatile tool to simulate and evaluate intraoperative tumor imaging. The gelatinous phantoms have elastic properties similar to human tissue and can be cut using conventional surgical instruments. Moreover, the phantoms contain hemoglobin and intralipid to mimic the absorption and scattering of photons, respectively, creating uniform optical properties similar to those of human breast tissue. The main drawback of NIRF imaging is the limited penetration depth of photons propagating through tissue, which hinders (noninvasive) imaging of deep-seated tumors with epi-illumination strategies.
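The penetration-depth limitation follows directly from diffuse light attenuation, which a back-of-the-envelope calculation makes tangible. The coefficients below are typical literature values for breast tissue in the NIR window, used here only for illustration.

    import numpy as np

    mu_a = 0.005        # absorption, mm^-1 (the role hemoglobin plays in the phantom)
    mu_s_prime = 1.0    # reduced scattering, mm^-1 (the role of intralipid)

    # Effective attenuation coefficient in the diffusion approximation
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))    # ~0.12 mm^-1

    print(f"1/e penetration depth: {1.0 / mu_eff:.1f} mm")
    for d in (5, 10, 20):                                 # inclusion depth, mm
        print(f"{d} mm deep: {np.exp(-mu_eff * d):.3f} of surface fluence")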
Medicine, Issue 91, Breast cancer, tissue-simulating phantoms, NIRF imaging, tumor-simulating inclusions, fluorescence, intraoperative imaging
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, because of the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans, or age-specific head models, to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
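This is where the head model enters the analysis: the MRI-derived (individual or age-specific) model yields the gain, or lead-field, matrix that the minimum-norm inverse named in the keywords operates on. A generic numpy sketch of the estimator follows; the dimensions and regularization value are illustrative.

    import numpy as np

    def minimum_norm_estimate(leadfield, eeg, lam=0.1):
        """L2 minimum-norm estimate: x = L.T @ (L @ L.T + lam^2 I)^-1 @ y.
        leadfield: (n_channels, n_sources) gain matrix from the head model;
        eeg: (n_channels,) or (n_channels, n_times) sensor data."""
        L = leadfield
        gram = L @ L.T + lam**2 * np.eye(L.shape[0])
        return L.T @ np.linalg.solve(gram, eeg)

    rng = np.random.default_rng(1)
    L = rng.normal(size=(128, 5000))    # in practice: from the MRI head model
    y = rng.normal(size=(128, 200))     # one epoch, 200 samples
    sources = minimum_norm_estimate(L, y)    # (5000, 200) source currents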
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Mapping Molecular Diffusion in the Plasma Membrane by Multiple-Target Tracing (MTT)
Authors: Vincent Rouger, Nicolas Bertaux, Tomasz Trombik, Sébastien Mailfert, Cyrille Billaudeau, Didier Marguet, Arnauld Sergé.
Institutions: Parc scientifique de Luminy, Parc scientifique de Luminy, Aix-Marseille University, Technopôle de Château-Gombert, Aix-Marseille University, Aix-Marseille University.
Our goal is to obtain a comprehensive description of the molecular processes occurring at cellular membranes in different biological functions. We aim to characterize the complex organization and dynamics of the plasma membrane at the single-molecule level by developing analytic tools dedicated to Single-Particle Tracking (SPT) at high density: Multiple-Target Tracing (MTT)1. Single-molecule videomicroscopy, offering millisecond and nanometric resolution1-11, allows a detailed representation of membrane organization12-14 by accurately mapping descriptors such as cell receptor localization, mobility, confinement, and interactions. We revisited SPT, both experimentally and algorithmically. Experimental aspects included optimizing the setup and cell labeling, with a particular emphasis on reaching the highest possible labeling density, in order to provide a dynamic snapshot of molecular dynamics as it occurs within the membrane. Algorithmic issues concerned each step used for rebuilding trajectories: peak detection, estimation, and reconnection, addressed with specific tools from image analysis15,16. Implementing deflation after detection allows rescuing peaks initially hidden by neighboring, stronger peaks. Of note, improving detection directly impacts reconnection by reducing gaps within trajectories. Performance was evaluated using Monte-Carlo simulations for various labeling density and noise values, which typically represent the two major limitations for parallel measurements at high spatiotemporal resolution. The nanometric accuracy17 obtained for single molecules, using either successive on/off photoswitching or non-linear optics, can deliver exhaustive observations. This is the basis of nanoscopy methods17 such as STORM18, PALM19,20, RESOLFT21 and STED22,23, which often require imaging fixed samples. The central task is the detection and estimation of diffraction-limited peaks emanating from single molecules. Hence, given adequate assumptions, such as assuming a constant positional accuracy instead of Brownian motion, MTT is straightforwardly suited for nanoscopic analyses. Furthermore, MTT can fundamentally be used at any scale: not only for molecules, but also for cells or animals, for instance. Hence, MTT is a powerful tracking algorithm that finds applications at molecular and cellular scales.
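Once trajectories are reconnected, diffusion mapping boils down to mean square displacement (MSD) analysis. The sketch below computes a time-averaged MSD per trajectory and a local diffusion coefficient from the 2-D Brownian relation MSD = 4Dτ; values are simulated for illustration.

    import numpy as np

    def time_averaged_msd(track, max_lag):
        """track: (n_frames, 2) x,y positions of one reconnected trajectory."""
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            disp = track[lag:] - track[:-lag]
            msd[lag - 1] = (disp ** 2).sum(axis=1).mean()
        return msd

    def local_diffusivity(track, dt, max_lag=4):
        """Fit MSD = 4*D*tau over the first few lags (2-D Brownian motion)."""
        lags = np.arange(1, max_lag + 1) * dt
        return np.polyfit(lags, time_averaged_msd(track, max_lag), 1)[0] / 4.0

    # Simulated Brownian track: D = 0.1 um^2/s, 36 ms frames
    dt, D = 0.036, 0.1
    steps = np.random.default_rng(2).normal(0, np.sqrt(2 * D * dt), (500, 2))
    print(local_diffusivity(np.cumsum(steps, axis=0), dt))   # ~0.1 um^2/s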
Physics, Issue 63, Single-particle tracking, single-molecule fluorescence microscopy, image analysis, tracking algorithm, high-resolution diffusion map, plasma membrane lateral organization
Measuring the Kinetics of mRNA Transcription in Single Living Cells
Authors: Yehuda Brody, Yaron Shav-Tal.
Institutions: Bar-Ilan University.
The transcriptional activity of RNA polymerase II (Pol II) is a dynamic process, and measuring the kinetics of transcription in vivo is therefore of importance. Pol II kinetics have been measured using biochemical or molecular methods1-3. In recent years, with the development of new visualization methods, it has become possible to follow transcription as it occurs in real time in single living cells4. Herein we describe how to analyze Pol II elongation kinetics on a specific gene in living cells5,6. Using a cell line in which a specific gene locus (DNA), its mRNA product, and the final protein product can be fluorescently labeled and visualized in vivo, it is possible to detect the actual transcription of mRNAs on the gene of interest7,8. The mRNA is fluorescently tagged using the MS2 system for tagging mRNAs in vivo, in which the 3'UTR of the mRNA transcripts contains 24 MS2 stem-loop repeats that provide highly specific binding sites for the YFP-MS2 coat protein, which labels the mRNA as it is transcribed9. To monitor the kinetics of transcription we use the Fluorescence Recovery After Photobleaching (FRAP) method. By photobleaching the YFP-MS2-tagged nascent transcripts at the site of transcription and then following the recovery of this signal over time, we obtain the synthesis rate of the newly made mRNAs5. In other words, YFP-MS2 fluorescence recovery reflects the generation of new MS2 stem-loops in the nascent transcripts and their binding by free fluorescent YFP-MS2 molecules entering from the surrounding nucleoplasm. The FRAP recovery curves are then analyzed using mechanistic mathematical models, formalized as a series of differential equations, in order to retrieve the kinetic time parameters of transcription.
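The fitting step can be illustrated with a deliberately simplified model. The article's analysis uses mechanistic ODE models of elongation; the single-exponential recovery below is only a stand-in to show how a rate constant is extracted from a FRAP curve with scipy (the data are simulated).

    import numpy as np
    from scipy.optimize import curve_fit

    def frap_recovery(t, A, k):
        """Single-exponential recovery: I(t) = A * (1 - exp(-k t))."""
        return A * (1.0 - np.exp(-k * t))

    t = np.linspace(0, 20, 60)                      # minutes after photobleach
    rng = np.random.default_rng(3)
    data = frap_recovery(t, 0.85, 0.35) + rng.normal(0, 0.02, t.size)

    (A, k), _ = curve_fit(frap_recovery, t, data, p0=(1.0, 0.1))
    print(f"plateau = {A:.2f}, recovery rate = {k:.2f} /min")
    # k reflects how quickly bleached YFP-MS2 on nascent stem-loops is
    # replaced by fluorescent molecules, i.e., the synthesis kinetics.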
Cell Biology, Issue 54, mRNA transcription, nucleus, live-cell imaging, cellular dynamics, FRAP
Monitoring the Wall Mechanics During Stent Deployment in a Vessel
Authors: Brian D. Steinert, Shijia Zhao, Linxia Gu.
Institutions: University of Nebraska-Lincoln.
Clinical trials have reported different restenosis rates for various stent designs1. It is speculated that stent-induced strain concentrations on the arterial wall lead to tissue injury, which initiates restenosis2-7. This hypothesis needs further investigation, including better quantification of the non-uniform strain distribution on the artery following stent implantation. A non-contact surface strain measurement method for the stented artery is presented in this work. The ARAMIS stereo optical surface strain measurement system uses two optical high-speed cameras to capture the motion of each reference point and resolve three-dimensional strains over the deforming surface8,9. As a mesh stent is deployed into a latex vessel with a random contrasting pattern sprayed or drawn on its outer surface, the surface strain is recorded at every instant of the deformation. The calculated strain distributions can then be used to understand the local lesion response, validate computational models, and formulate hypotheses for further in vivo study.
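At its core, the quantity being resolved is strain from the motion of tracked reference points; the sketch below is a toy reduction of what the stereo system measures over full fields (the marker coordinates are invented).

    import numpy as np

    def surface_strain(ref_points, deformed_points):
        """Engineering strain (L - L0) / L0 of each segment between tracked
        markers, from reference and deformed coordinates (mm)."""
        L0 = np.linalg.norm(np.diff(ref_points, axis=0), axis=1)
        L = np.linalg.norm(np.diff(deformed_points, axis=0), axis=1)
        return (L - L0) / L0

    ref = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
    deployed = ref * np.array([1.12, 1.0, 1.0])   # 12% stretch along x
    print(surface_strain(ref, deployed))          # ~[0.12, 0.12, 0.12]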
Biomedical Engineering, Issue 63, Stent, vessel, interaction, strain distribution, stereo optical surface strain measurement system, bioengineering
Designing a Bio-responsive Robot from DNA Origami
Authors: Eldad Ben-Ishay, Almogit Abu-Horowitz, Ido Bachelet.
Institutions: Bar-Ilan University.
Nucleic acids are astonishingly versatile. In addition to their natural role as a storage medium for biological information1, they can be utilized in parallel computing2,3, recognize and bind molecular or cellular targets4,5, catalyze chemical reactions6,7, and generate calculated responses in a biological system8,9. Importantly, nucleic acids can be programmed to self-assemble into 2D and 3D structures10-12, enabling the integration of all these remarkable features in a single robot linking the sensing of biological cues to a preset response in order to exert a desired effect. Creating shapes from nucleic acids was first proposed by Seeman13, and several variations on this theme have since been realized using various techniques11,12,14,15. However, the most significant is perhaps the one proposed by Rothemund, termed scaffolded DNA origami16. In this technique, the folding of a long (>7,000 bases) single-stranded DNA 'scaffold' is directed to a desired shape by hundreds of short complementary strands termed 'staples'. Folding is carried out by a temperature annealing ramp. This technique has been successfully demonstrated in the creation of a diverse array of 2D shapes with remarkable precision and robustness, and DNA origami was later extended to 3D as well17,18. The current paper focuses on the caDNAno 2.0 software19 developed by Douglas and colleagues. caDNAno is a robust, user-friendly CAD tool enabling the design of 2D and 3D DNA origami shapes with versatile features. The design process relies on a systematic and accurate abstraction scheme for DNA structures, making it relatively straightforward and efficient. In this paper we demonstrate the design of a DNA origami nanorobot that was recently described20. This robot is 'robotic' in the sense that it links sensing to actuation in order to perform a task. We explain how various sensing schemes can be integrated into the structure, and how this can be relayed to a desired effect. Finally, we use Cando21 to simulate the mechanical properties of the designed shape. The concept we discuss can be adapted to multiple tasks and settings.
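The staple principle itself fits in a few lines of Biopython: a staple is built from reverse complements of the scaffold regions it pins together (real designs also handle crossover geometry and strand orientation, which caDNAno manages; the sequences here are invented).

    from Bio.Seq import Seq

    scaffold_region_a = Seq("ATGGCTAACGTT")   # two distant scaffold segments
    scaffold_region_b = Seq("CCGGATTTACAG")   # brought together by one staple

    # The reverse complement of the joined regions hybridizes to both
    # segments, folding the scaffold at that junction.
    staple = (scaffold_region_a + scaffold_region_b).reverse_complement()
    print(staple)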
Bioengineering, Issue 77, Genetics, Biomedical Engineering, Molecular Biology, Medicine, Genomics, Nanotechnology, Nanomedicine, DNA origami, nanorobot, caDNAno, DNA, DNA Origami, nucleic acids, DNA structures, CAD, sequencing
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
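A useful companion to these empirical metrics is the theoretical localization precision, for which the formula of Thompson et al. (2002) is a widely used rule of thumb. The sketch below evaluates it for a few photon counts; the optical parameters are illustrative.

    import numpy as np

    def localization_precision(N, s=150.0, a=160.0, b=10.0):
        """Thompson et al. (2002) localization precision (nm):
        sigma^2 = (s^2 + a^2/12)/N + 8*pi*s^4*b^2/(a^2*N^2)
        N: photons; s: PSF std dev (nm); a: pixel size (nm);
        b: background noise (photons/pixel)."""
        var = (s**2 + a**2 / 12.0) / N + 8 * np.pi * s**4 * b**2 / (a**2 * N**2)
        return np.sqrt(var)

    for photons in (500, 2000, 8000):
        print(photons, f"-> {localization_precision(photons):.1f} nm")
    # More photons per switching event means better precision, which is why
    # dye choice and buffer conditions feed directly into image resolution.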
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
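For orientation, here is a minimal sketch of an approach (3)-style semi-automated step using global thresholding and connected components with scikit-image. The article's own toolchain differs, the file name is illustrative, and real EM volumes generally need considerably more than this.

    import numpy as np
    from skimage import filters, measure, morphology

    volume = np.load("em_stack.npy")             # illustrative (z, y, x) stack

    threshold = filters.threshold_otsu(volume)   # data-driven intensity cutoff
    foreground = volume > threshold              # stained structures
    foreground = morphology.remove_small_objects(foreground, min_size=500)

    labels = measure.label(foreground)           # 3-D connected components
    print(f"{labels.max()} candidate objects")
    for region in measure.regionprops(labels)[:5]:
        print(region.label, region.area)         # ready for rendering/stats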
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize the results. Here we describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice
Authors: Robert S. McNeill, Ralf S. Schmid, Ryan E. Bash, Mark Vitucci, Kristen K. White, Andrea M. Werneke, Brian H. Constance, Byron Huff, C. Ryan Miller.
Institutions: University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, Emory University School of Medicine, University of North Carolina School of Medicine.
Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro and in vivo and may be useful in preclinical drug development for these devastating diseases.
Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Authors: Carmine Di Rienzo, Enrico Gratton, Fabio Beltram, Francesco Cardarelli.
Institutions: Scuola Normale Superiore, Instituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, a very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not need to track each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region on the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating the acquired images at increasing time delays, for example every 2, 3, ..., n repetitions. It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions makes it possible to extract the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-millisecond time range.
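A schematic numpy rendition of the correlation-and-fit idea is given below. Real iMSD analyses fit full 2-D Gaussians to calibrated correlation functions; here a parabolic fit to the log of a radial cut stands in, and the image stack is assumed to be loaded already.

    import numpy as np

    def spatiotemporal_correlation(stack, lag):
        """Average spatial cross-correlation (via FFT) of frames separated
        by `lag`, for an image stack of shape (t, y, x)."""
        a = stack[:-lag] - stack[:-lag].mean()
        b = stack[lag:] - stack[lag:].mean()
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        g = np.fft.fftshift(np.real(np.fft.ifft2(F)).mean(axis=0))
        return g / g.max()

    def imsd(stack, lags, pixel_size, frame_time):
        """Peak variance vs. delay; its growth with delay is the iMSD."""
        sigma2 = []
        for lag in lags:
            g = spatiotemporal_correlation(stack, lag)
            cy, cx = g.shape[0] // 2, g.shape[1] // 2
            profile = g[cy, cx:cx + 6]               # radial cut through peak
            r = np.arange(profile.size) * pixel_size
            # ln g = const - r^2 / (2 sigma^2): fit slope of ln g vs r^2
            slope = np.polyfit(r**2,
                               np.log(np.clip(profile, 1e-6, None)), 1)[0]
            sigma2.append(-1.0 / (2.0 * slope))
        return np.asarray(lags) * frame_time, np.asarray(sigma2)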
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Super-resolution Imaging of the Cytokinetic Z Ring in Live Bacteria Using Fast 3D-Structured Illumination Microscopy (f3D-SIM)
Authors: Lynne Turnbull, Michael P. Strauss, Andrew T. F. Liew, Leigh G. Monahan, Cynthia B. Whitchurch, Elizabeth J. Harry.
Institutions: University of Technology, Sydney.
Imaging of biological samples using fluorescence microscopy has advanced substantially with new technologies that overcome the resolution barrier of the diffraction of light, allowing super-resolution imaging of live samples. There are currently three main types of super-resolution techniques: stimulated emission depletion (STED), single-molecule localization microscopy (including techniques such as PALM, STORM, and GSDIM), and structured illumination microscopy (SIM). While STED and single-molecule localization techniques show the largest increases in resolution, they have been slower to offer increased speeds of image acquisition. Three-dimensional SIM (3D-SIM) is a wide-field fluorescence microscopy technique that offers a number of advantages over both single-molecule localization and STED. Resolution is improved, with typical lateral and axial resolutions of 110 and 280 nm, respectively, and a depth of sampling of up to 30 µm from the coverslip, allowing for imaging of whole cells. Recent advancements in the technology (fast 3D-SIM) that increase the capture rate of raw images allow for fast capture of biological processes occurring in seconds, while significantly reducing phototoxicity and photobleaching. Here we describe the use of one such method to image bacterial cells harboring the fluorescently labelled cytokinetic FtsZ protein, to show how cells are analyzed and the type of unique information that this technique can provide.
Molecular Biology, Issue 91, super-resolution microscopy, fluorescence microscopy, OMX, 3D-SIM, Blaze, cell division, bacteria, Bacillus subtilis, Staphylococcus aureus, FtsZ, Z ring constriction
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Authors: Christopher J. Zopf, Narendra Maheshri.
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole-cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
Visualizing Single Molecular Complexes In Vivo Using Advanced Fluorescence Microscopy
Authors: Ian M. Dobbie, Alexander Robson, Nicolas Delalez, Mark C. Leake.
Institutions: University of Oxford, University of Oxford.
Full insight into the mechanisms of living cells can be achieved only by investigating the key processes that elicit and direct events at a cellular level. To date, the sheer complexity of biological systems has caused precise single-molecule experimentation to be far too demanding, with studies instead focusing on single systems using relatively crude bulk ensemble-average measurements. However, many important processes occur in the living cell at the level of just one or a few molecules; ensemble measurements generally mask the stochastic and heterogeneous nature of these events. Here, using advanced optical microscopy and analytical image analysis tools, we demonstrate how to monitor proteins within a single living bacterial cell to a precision of single molecules and how to observe dynamics within molecular complexes in functioning biological machines. The techniques are directly relevant physiologically. They are minimally perturbative and non-invasive to the biological sample under study and are fully attuned to investigations in living material, features not readily available to other single-molecule approaches of biophysics. In addition, the biological specimens studied all produce fluorescently tagged protein at levels which are almost identical to those of the unmodified cell strains ('genomic encoding'), as opposed to the more common but less ideal approach of generating significantly more protein than would occur naturally ('plasmid expression'). Thus, the biological samples investigated are significantly closer to the natural organisms, and the observations are therefore more relevant to real physiological processes.
Bioengineering, Issue 31, Single-molecule, fluorescence, microscopy, TIRF, FRAP, in vivo, membrane protein, GFP, diffusion, bacteria
Designing and Implementing Nervous System Simulations on LEGO Robots
Authors: Daniel Blustein, Nikolai Rosenthal, Joseph Ayers.
Institutions: Northeastern University, Bremen University of Applied Sciences.
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American Lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus as described by Braitenberg and are particularly well suited for investigation using the NXT platform.1 The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum.
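The decussating reflex at the heart of this approach is easy to state in code. The article implements it in LabVIEW on the NXT; the Python sketch below only illustrates the logic, with invented sensor readings.

    def crossed_taxis_step(left_sensor, right_sensor, gain=1.0):
        """Braitenberg-style crossed excitation: the left stimulus drives the
        right motor and vice versa, turning the robot toward the stimulus."""
        left_motor = gain * right_sensor
        right_motor = gain * left_sensor
        return left_motor, right_motor

    # One control tick with the stimulus stronger on the left:
    left, right = crossed_taxis_step(left_sensor=0.8, right_sensor=0.2)
    print(left, right)   # right motor faster -> robot veers left, toward it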
Neuroscience, Issue 75, Neurobiology, Bioengineering, Behavior, Mechanical Engineering, Computer Science, Marine Biology, Biomimetics, Marine Science, Neurosciences, Synthetic Biology, Robotics, robots, Modeling, models, Sensory Fusion, nervous system, Educational Tools, programming, software, lobster, Homarus americanus, animal model
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of the optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem while allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA2 (ref. 26) and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between the costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
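A toy version of the simulation-optimization loop shows the moving parts: candidate solutions are binary placements of practices, and selection keeps children that Pareto-dominate existing members on the two objectives. The per-field numbers stand in for SWAT model evaluations, and the dominance-replacement rule is a simplification of SPEA2's archive-based fitness assignment.

    import numpy as np

    rng = np.random.default_rng(4)
    N_FIELDS = 50
    COST = rng.uniform(1, 10, N_FIELDS)    # practice cost per field (toy)
    ABATE = rng.uniform(0, 5, N_FIELDS)    # pollution abated per field (toy
                                           # stand-in for a SWAT evaluation)
    BASELINE = ABATE.sum()

    def objectives(x):                     # x: binary placement of practices
        return np.array([COST[x].sum(), BASELINE - ABATE[x].sum()])

    def dominates(a, b):                   # Pareto dominance (minimize both)
        return np.all(a <= b) and np.any(a < b)

    pop = rng.random((40, N_FIELDS)) < 0.5
    for _ in range(2000):
        i, j = rng.integers(len(pop), size=2)            # recombination
        child = np.where(rng.random(N_FIELDS) < 0.5, pop[i], pop[j])
        child ^= rng.random(N_FIELDS) < 0.02             # mutation
        fc = objectives(child)                           # evaluate objectives
        for k in rng.permutation(len(pop)):              # selection
            if dominates(fc, objectives(pop[k])):
                pop[k] = child
                break

    front = sorted(map(tuple, (objectives(x) for x in pop)))
    for cost, pollution in front[:5]:                    # tradeoff frontier
        print(f"cost {cost:6.1f} -> residual pollution {pollution:6.1f}")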
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Authors: John Marshall, Koji Morikawa, Nicholas Manoukis, Charles Taylor.
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library relevant to the topic of a given abstract. In these cases, our algorithms display the closest available videos, which may be only loosely related to the abstract.