JoVE Visualize
Pubmed Article
Extending birthday paradox theory to estimate the number of tags in RFID systems.
PUBLISHED: 01-01-2014
The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes.
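As a concrete illustration of the estimation problem, the sketch below uses the standard slot-occupancy model behind the birthday paradox: each of n tags picks one of N frame slots uniformly at random, and n is chosen so that the expected numbers of empty, singly-occupied, and collision slots best match the observed frame outcome. This is a minimal stand-in, not the authors' estimator, whose exact birthday-paradox formulation is given in the paper; all names and numbers are illustrative.

```python
def expected_slot_counts(n_tags, frame_size):
    """Expected empty, success (single-tag) and collision slot counts when
    n_tags each pick one of frame_size slots uniformly at random -- the same
    occupancy model that underlies the birthday paradox."""
    p_empty = (1.0 - 1.0 / frame_size) ** n_tags
    p_success = (n_tags / frame_size) * (1.0 - 1.0 / frame_size) ** (n_tags - 1)
    empty = frame_size * p_empty
    success = frame_size * p_success
    return empty, success, frame_size - empty - success

def estimate_tags(obs_empty, obs_success, obs_collision, frame_size, n_max=4096):
    """Return the tag count whose expected slot statistics are closest
    (in squared error) to the observed frame outcome."""
    n_min = obs_success + 2 * obs_collision  # every collision slot holds >= 2 tags
    best_n, best_err = n_min, float("inf")
    for n in range(n_min, n_max):
        e, s, c = expected_slot_counts(n, frame_size)
        err = (e - obs_empty) ** 2 + (s - obs_success) ** 2 + (c - obs_collision) ** 2
        if err < best_err:
            best_n, best_err = n, err
    return best_n

# Example frame: 128 slots, of which 40 were empty, 50 held one tag, 38 collided.
print(estimate_tags(40, 50, 38, 128))
```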
Related JoVE Video
We present two methods for observing bumblebee choice behavior in an enclosed testing space. The first method consists of Radio Frequency Identification (RFID) readers built into artificial flowers that display various visual cues, and RFID tags (i.e., passive transponders) glued to the thorax of bumblebee workers. The novelty in our implementation is that the RFID readers are built directly into artificial flowers that can display several distinct visual properties, such as color, pattern type, spatial frequency (i.e., “busyness” of the pattern), and symmetry (spatial frequency and symmetry were not manipulated in this experiment). Additionally, these visual displays, in conjunction with the automated systems, can record unrewarded and untrained choice behavior. The second method consists of recording choice behavior at artificial flowers using motion-sensitive high-definition camcorders. Bumblebees have number tags glued to their thoraces for unique identification. The advantage of this implementation over RFID is that, in addition to landing behavior, alternate measures of preference such as hovering and antennation may also be observed. Both automation methods increase experimental control and internal validity by allowing larger-scale studies that take into account individual differences. External validity is also improved because bees can freely enter and exit the testing environment without constraints such as the availability of a research assistant on-site. Compared to human observation in real time, the automated methods are more cost-effective and possibly less error-prone.
26 Related JoVE Articles!
Laboratory Drop Towers for the Experimental Simulation of Dust-aggregate Collisions in the Early Solar System
Authors: Jürgen Blum, Eike Beitz, Mohtashim Bukhari, Bastian Gundlach, Jan-Hendrik Hagemann, Daniel Heißelmann, Stefan Kothe, Rainer Schräpler, Ingo von Borstel, René Weidling.
Institutions: Technische Universität Braunschweig.
For the purpose of investigating the evolution of dust aggregates in the early Solar System, we developed two vacuum drop towers in which fragile dust aggregates with sizes up to ~10 cm and porosities up to 70% can be collided. One of the drop towers is primarily used for very low impact speeds, down to less than 0.01 m/sec, and makes use of a double release mechanism. Collisions are recorded in stereo-view by two high-speed cameras, which fall along the glass vacuum tube in the center-of-mass frame of the two dust aggregates. The other free-fall tower makes use of an electromagnetic accelerator that is capable of gently accelerating dust aggregates to up to 5 m/sec. In combination with the release of another dust aggregate to free fall, collision speeds of up to ~10 m/sec can be achieved. Here, two fixed high-speed cameras record the collision events. In both drop towers, the dust aggregates are in free fall during the collision so that they are weightless and match the conditions in the early Solar System.
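For orientation, the quoted collision speeds follow from elementary free-fall kinematics (v = sqrt(2gh), ignoring drag); a small sketch with a hypothetical drop height:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def free_fall_speed(drop_height_m):
    """Speed after falling from rest through drop_height_m, ignoring drag."""
    return math.sqrt(2.0 * G * drop_height_m)

# A 1.5 m free fall (hypothetical height) gives ~5.4 m/s; combined with an
# aggregate electromagnetically accelerated to 5 m/s, the closing speed
# approaches the ~10 m/s regime mentioned above.
v_free = free_fall_speed(1.5)
print(f"{v_free:.1f} m/s free fall, {v_free + 5.0:.1f} m/s closing speed")
```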
Physics, Issue 88, astrophysics, planet formation, collisions, granular matter, high-speed imaging, microgravity drop tower
ReAsH/FlAsH Labeling and Image Analysis of Tetracysteine Sensor Proteins in Cells
Authors: Sevgi Irtegun, Yasmin M. Ramdzan, Terrence D. Mulhern, Danny M. Hatters.
Institutions: Bio21 Molecular Science and Biotechnology Institute.
Fluorescent proteins and dyes are essential tools for the study of protein trafficking, localization and function in cells. While fluorescent proteins such as green fluorescent protein (GFP) have been extensively used as fusion partners to track the properties of a protein of interest1, recent developments with smaller tags enable new functionalities of proteins to be examined in cells, such as conformational change and protein association2,3. One small tag system involves a tetracysteine (TC) motif (CCXXCC) genetically inserted into a target protein, which binds to biarsenical dyes, ReAsH (red fluorescent) and FlAsH (green fluorescent), with high specificity even in live cells2. The TC/biarsenical dye system imposes far fewer steric constraints on the host protein than fluorescent proteins, which has enabled several new approaches to measuring conformational change and protein-protein interactions4-7. We recently developed a novel application of TC tags as sensors of oligomerization in cells expressing mutant huntingtin, which aggregates in neurons in Huntington disease7. Huntingtin was tagged with two fluorescent dyes, one a fluorescent protein to track protein location, and the second a TC tag, which binds biarsenical dyes only in monomers. Hence, changes in colocalization between protein and biarsenical dye reactivity enabled submicroscopic oligomer content to be spatially mapped within cells. Here, we describe how to label TC-tagged proteins fused to a fluorescent protein (Cherry, GFP or CFP) with FlAsH or ReAsH in live mammalian cells and how to quantify the two-color fluorescence (Cherry/FlAsH, CFP/FlAsH or GFP/ReAsH combinations).
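Since the oligomerization readout rests on quantifying colocalization between the fluorescent-protein and biarsenical-dye channels, here is a minimal sketch of one common quantification, the Pearson correlation over a foreground mask. The protocol itself describes an ImageJ workflow; the threshold and synthetic images below are assumptions for illustration.

```python
import numpy as np

def pearson_colocalization(ch1, ch2, threshold=0.0):
    """Pearson correlation between two channels, restricted to pixels above
    an intensity threshold in either channel."""
    a = np.asarray(ch1, dtype=float).ravel()
    b = np.asarray(ch2, dtype=float).ravel()
    mask = (a > threshold) | (b > threshold)
    a, b = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
cherry = rng.uniform(0, 100, (128, 128))                 # synthetic protein channel
flash = 0.8 * cherry + rng.normal(0, 10, (128, 128))     # partially colocalized dye
print(f"{pearson_colocalization(cherry, flash, threshold=5):.2f}")
```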
Cell Biology, Issue 54, tetracysteine, TC, ReAsH, FlAsH, biarsenical dyes, fluorescence, imaging, confocal microscopy, ImageJ, GFP
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Authors: Fijoy Vadakkumpadan, Hermenegild Arevalo, Natalia A. Trayanova.
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
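The final step, carrying the atlas fiber orientations through the deformation, amounts to multiplying each fiber vector by the local Jacobian of the deformation field and renormalizing. The sketch below shows only that reorientation step, under an assumed convention in which the deformation is stored as a 3 x nz x ny x nx array of mapped coordinates; the registration pipeline in the paper is far more involved.

```python
import numpy as np

def reorient_fibers(fibers, deformation, spacing=(1.0, 1.0, 1.0)):
    """Reorient unit fiber vectors (3, nz, ny, nx) with the local Jacobian
    of a deformation field phi (3, nz, ny, nx), then renormalize."""
    grads = [np.gradient(deformation[i], *spacing) for i in range(3)]  # dphi_i/d(z,y,x)
    J = np.stack([np.stack(g) for g in grads])        # (3, 3, nz, ny, nx)
    out = np.einsum('ij...,j...->i...', J, fibers)    # v' = J v at every voxel
    norm = np.linalg.norm(out, axis=0, keepdims=True)
    return out / np.clip(norm, 1e-12, None)

# Toy check: a simple shear phi = (z, y + 0.2 x, x) tilts x-aligned fibers.
nz = ny = nx = 8
zz, yy, xx = np.meshgrid(*[np.arange(8.0)] * 3, indexing='ij')
phi = np.stack([zz, yy + 0.2 * xx, xx])
fib = np.zeros((3, nz, ny, nx)); fib[2] = 1.0         # fibers along x
print(reorient_fibers(fib, phi)[:, 4, 4, 4])          # ~ (0, 0.196, 0.981)
```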
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to avoid the elaborate parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorizing the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9 that uses only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrates both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
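As a concrete, much simpler illustration of change-point segmentation of a read-density profile, the sketch below uses plain binary segmentation with a variance-scaled stopping rule. It is not the authors' Bayesian BCP algorithm; it only shows what "change points in read density defining segments of enrichment" means operationally.

```python
import numpy as np

def binary_segmentation(y, min_size=50, threshold=15.0):
    """Recursively split a 1D read-density profile at the point that most
    reduces squared error; stop when the variance-scaled gain is small."""
    breaks = []

    def split(lo, hi):
        seg = y[lo:hi]
        n = len(seg)
        if n < 2 * min_size:
            return
        csum = np.cumsum(seg)
        best_k, best_gain = None, 0.0
        for k in range(min_size, n - min_size):
            mu1 = csum[k - 1] / k
            mu2 = (csum[-1] - csum[k - 1]) / (n - k)
            gain = k * (n - k) / n * (mu1 - mu2) ** 2
            if gain > best_gain:
                best_k, best_gain = k, gain
        if best_k is not None and best_gain > threshold * max(seg.var(), 1e-12):
            breaks.append(lo + best_k)
            split(lo, lo + best_k)
            split(lo + best_k, hi)

    split(0, len(y))
    return sorted(breaks)

# Synthetic profile: background (lam=2), an enriched island (lam=10), background.
rng = np.random.default_rng(1)
y = np.concatenate([rng.poisson(lam, 300) for lam in (2, 10, 3)]).astype(float)
print(binary_segmentation(y))  # expected: breaks near 300 and 600
```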
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Measurement of Lifespan in Drosophila melanogaster
Authors: Nancy J. Linford, Ceyda Bilgir, Jennifer Ro, Scott D. Pletcher.
Institutions: University of Michigan.
Aging is a phenomenon that results in steady physiological deterioration in nearly all organisms in which it has been examined, leading to reduced physical performance and increased risk of disease. Individual aging is manifest at the population level as an increase in age-dependent mortality, which is often measured in the laboratory by observing lifespan in large cohorts of age-matched individuals. Experiments that seek to quantify the extent to which genetic or environmental manipulations impact lifespan in simple model organisms have been remarkably successful for understanding the aspects of aging that are conserved across taxa and for inspiring new strategies for extending lifespan and preventing age-associated disease in mammals. The vinegar fly, Drosophila melanogaster, is an attractive model organism for studying the mechanisms of aging due to its relatively short lifespan, convenient husbandry, and facile genetics. However, demographic measures of aging, including age-specific survival and mortality, are extraordinarily susceptible to even minor variations in experimental design and environment, and the maintenance of strict laboratory practices for the duration of aging experiments is required. These considerations, together with the need to practice careful control of genetic background, are essential for generating robust measurements. Indeed, there are many notable controversies surrounding inference from longevity experiments in yeast, worms, flies and mice that have been traced to environmental or genetic artifacts1-4. In this protocol, we describe a set of procedures that have been optimized over many years of measuring longevity in Drosophila using laboratory vials. We also describe the use of the dLife software, which was developed by our laboratory and is available for download. dLife accelerates throughput and promotes good practices by incorporating optimal experimental design, simplifying fly handling and data collection, and standardizing data analysis. We also discuss the many potential pitfalls in the design, collection, and interpretation of lifespan data, and we provide steps to avoid these dangers.
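The demographic quantities discussed here, age-specific survival and mortality, are computed from raw death records in a straightforward way. A minimal sketch assuming a fixed starting cohort and no censoring (dLife itself handles the full bookkeeping):

```python
import numpy as np

def survivorship_and_mortality(death_ages, n_start):
    """Daily survivorship l(x) and age-specific mortality q(x) from a list of
    ages at death (in days) for a cohort of n_start flies, no censoring."""
    death_ages = np.asarray(death_ages)
    days = np.arange(1, death_ages.max() + 1)
    deaths = np.array([(death_ages == d).sum() for d in days])
    alive_entering = n_start - np.concatenate(([0], np.cumsum(deaths)[:-1]))
    survivorship = 1.0 - np.cumsum(deaths) / n_start
    with np.errstate(divide="ignore", invalid="ignore"):
        mortality = np.where(alive_entering > 0, deaths / alive_entering, np.nan)
    return days, survivorship, mortality

rng = np.random.default_rng(2)
ages = rng.gamma(shape=9.0, scale=6.0, size=200).astype(int) + 1  # synthetic cohort
days, surv, mort = survivorship_and_mortality(ages, n_start=200)
print("median lifespan (days):", days[surv <= 0.5][0])
```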
Developmental Biology, Issue 71, Cellular Biology, Molecular Biology, Anatomy, Physiology, Entomology, longevity, lifespan, aging, Drosophila melanogaster, fruit fly, Drosophila, mortality, animal model
Massively Parallel Reporter Assays in Cultured Mammalian Cells
Authors: Alexandre Melnikov, Xiaolan Zhang, Peter Rogov, Li Wang, Tarjei S. Mikkelsen.
Institutions: Broad Institute.
The genetic reporter assay is a well-established and powerful tool for dissecting the relationship between DNA sequences and their gene regulatory activities. The potential throughput of this assay has, however, been limited by the need to individually clone and assay the activity of each sequence of interest, using protein fluorescence or enzymatic activity as a proxy for regulatory activity. Advances in high-throughput DNA synthesis and sequencing technologies have recently made it possible to overcome these limitations by multiplexing the construction and interrogation of large libraries of reporter constructs. This protocol describes the implementation of a Massively Parallel Reporter Assay (MPRA) that allows direct comparison of hundreds of thousands of putative regulatory sequences in a single cell culture dish.
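The readout of such an assay reduces to counting tags: each regulatory sequence is linked to one or more unique tags, and activity is commonly summarized as the RNA-to-plasmid (DNA) count ratio per sequence. A minimal sketch with hypothetical tag names; the published protocol defines the actual analysis pipeline.

```python
import math
from collections import defaultdict

def mpra_activity(rna_counts, dna_counts, tag_to_sequence, pseudocount=1.0):
    """Per-sequence activity as log2(RNA/DNA) of tag counts summed over all
    tags linked to that sequence; dicts map tag -> count."""
    rna_sum, dna_sum = defaultdict(float), defaultdict(float)
    for tag, seq in tag_to_sequence.items():
        rna_sum[seq] += rna_counts.get(tag, 0)
        dna_sum[seq] += dna_counts.get(tag, 0)
    return {seq: math.log2((rna_sum[seq] + pseudocount) / (dna_sum[seq] + pseudocount))
            for seq in dna_sum}

tags = {"TAG1": "enhancerA", "TAG2": "enhancerA", "TAG3": "enhancerB"}
print(mpra_activity({"TAG1": 500, "TAG2": 300, "TAG3": 40},
                    {"TAG1": 100, "TAG2": 120, "TAG3": 110}, tags))
```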
Genetics, Issue 90, gene regulation, transcriptional regulation, sequence-activity mapping, reporter assay, library cloning, transfection, tag sequencing, mammalian cells
In vivo Quantification of G Protein Coupled Receptor Interactions using Spectrally Resolved Two-photon Microscopy
Authors: Michael Stoneman, Deo Singh, Valerica Raicu.
Institutions: University of Wisconsin - Milwaukee.
The study of protein interactions in living cells is an important area of research because the information accumulated benefits industrial applications and increases fundamental biological knowledge. Förster (Fluorescence) Resonance Energy Transfer (FRET) between a donor molecule in an electronically excited state and a nearby acceptor molecule has been frequently utilized for studies of protein-protein interactions in living cells. The proteins of interest are tagged with two different types of fluorescent probes and expressed in biological cells. The fluorescent probes are then excited, typically using laser light, and the spectral properties of the fluorescence emission emanating from the fluorescent probes are collected and analyzed. Information regarding the degree of protein interaction is embedded in the spectral emission data. Typically, the cell must be scanned a number of times in order to accumulate enough spectral information to accurately quantify the extent of the protein interactions for each region of interest within the cell. However, the molecular composition of these regions may change during the course of the acquisition process, limiting the spatial determination of the quantitative values of the apparent FRET efficiencies to an average over entire cells. By means of a spectrally resolved two-photon microscope, we are able to obtain a full set of spectrally resolved images after only one complete excitation scan of the sample of interest. From this pixel-level spectral data, a map of FRET efficiencies throughout the cell is calculated. By applying a simple theory of FRET in oligomeric complexes to the experimentally obtained distribution of FRET efficiencies throughout the cell, a single spectrally resolved scan reveals stoichiometric and structural information about the oligomer complex under study. Here we describe the procedure for preparing biological cells (the yeast Saccharomyces cerevisiae) expressing membrane receptors (sterile 2 α-factor receptors) tagged with two different types of fluorescent probes. Furthermore, we illustrate critical factors involved in collecting fluorescence data using the spectrally resolved two-photon microscopy imaging system. This protocol may be extended to study any type of protein that can be expressed in a living cell with a fluorescent marker attached to it.
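For reference, the quantity being mapped, the apparent FRET efficiency, is conventionally defined through donor quenching as E_app = 1 - F_DA/F_D. The sketch below computes a per-pixel map under that textbook definition on synthetic images; the spectrally resolved unmixing used in the actual protocol is more elaborate.

```python
import numpy as np

def apparent_fret_map(donor_with_acceptor, donor_only, min_intensity=10.0):
    """Pixel-wise apparent FRET efficiency E = 1 - F_DA / F_D, with dim
    pixels masked out as NaN."""
    F_DA = np.asarray(donor_with_acceptor, dtype=float)
    F_D = np.asarray(donor_only, dtype=float)
    E = np.full(F_D.shape, np.nan)
    mask = F_D > min_intensity
    E[mask] = 1.0 - F_DA[mask] / F_D[mask]
    return E

rng = np.random.default_rng(3)
F_D = rng.uniform(50, 200, (64, 64))                # unquenched donor image
F_DA = 0.7 * F_D + rng.normal(0, 2, (64, 64))       # ~30% energy transfer
print(f"mean E_app = {np.nanmean(apparent_fret_map(F_DA, F_D)):.2f}")  # ~0.30
```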
Cellular Biology, Issue 47, Forster (Fluorescence) Resonance Energy Transfer (FRET), protein-protein interactions, protein complex, in vivo determinations, spectral resolution, two-photon microscopy, G protein-coupled receptor (GPCR), sterile 2 alpha-factor protein (Ste2p)
Isolation of Human Atrial Myocytes for Simultaneous Measurements of Ca2+ Transients and Membrane Currents
Authors: Niels Voigt, Xiao-Bo Zhou, Dobromir Dobrev.
Institutions: University of Duisburg-Essen, University of Heidelberg.
The study of electrophysiological properties of cardiac ion channels with the patch-clamp technique and the exploration of cardiac cellular Ca2+ handling abnormalities requires isolated cardiomyocytes. In addition, the possibility to investigate myocytes from patients using these techniques is an invaluable requirement to elucidate the molecular basis of cardiac diseases such as atrial fibrillation (AF).1 Here we describe a method for isolation of human atrial myocytes which are suitable for both patch-clamp studies and simultaneous measurements of intracellular Ca2+ concentrations. First, right atrial appendages obtained from patients undergoing open heart surgery are chopped into small tissue chunks ("chunk method") and washed in Ca2+-free solution. Then the tissue chunks are digested in collagenase and protease containing solutions with 20 μM Ca2+. Thereafter, the isolated myocytes are harvested by filtration and centrifugation of the tissue suspension. Finally, the Ca2+ concentration in the cell storage solution is adjusted stepwise to 0.2 mM. We briefly discuss the meaning of Ca2+ and Ca2+ buffering during the isolation process and also provide representative recordings of action potentials and membrane currents, both together with simultaneous Ca2+ transient measurements, performed in these isolated myocytes.
Cellular Biology, Issue 77, Medicine, Molecular Biology, Physiology, Anatomy, Cardiology, Pharmacology, human atrial myocytes, cell isolation, collagenase, calcium transient, calcium current, patch-clamp, ion currents, isolation, cell culture, myocytes, cardiomyocytes, electrophysiology, patch clamp
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Authors: Carmine Di Rienzo, Enrico Gratton, Fabio Beltram, Francesco Cardarelli.
Institutions: Scuola Normale Superiore, Istituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, a very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present an experimental protocol for studying the dynamics of fluorescently-labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not require tracking each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region of the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating acquired images at increasing time delays (for example, every 2, 3, ..., n repetitions). The width of the peak of the spatial autocorrelation function increases with increasing time delay as particles move by diffusion. Therefore, fitting the series of autocorrelation functions enables extraction of the actual protein mean square displacement from imaging (iMSD), presented here in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-milli-second time range.
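In code, the core of this analysis is compact: correlate image pairs at each time lag with FFTs, then track how the width of the central correlation peak grows with lag. The sketch below substitutes a windowed second moment for the Gaussian fit used in the protocol and simulates free diffusion to show the expected broadening (iMSD(tau) ~ 4*D*tau):

```python
import numpy as np

def spatiotemporal_correlation(stack, lag):
    """Mean spatial cross-correlation between frames `lag` apart (stack: t, y, x)."""
    acc = []
    for t in range(stack.shape[0] - lag):
        a = stack[t] - stack[t].mean()
        b = stack[t + lag] - stack[t + lag].mean()
        acc.append(np.fft.fftshift(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real))
    return np.mean(acc, axis=0)

def peak_variance(corr, pixel_size=1.0, window=15):
    """Second moment of the central correlation peak -- a stand-in for the
    fitted Gaussian variance sigma^2(tau) of the iMSD analysis."""
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    h = window // 2
    peak = np.clip(corr[cy - h:cy + h + 1, cx - h:cx + h + 1], 0.0, None)
    y, x = np.mgrid[-h:h + 1, -h:h + 1]
    w = peak / peak.sum()
    return float((w * ((x * pixel_size) ** 2 + (y * pixel_size) ** 2)).sum() / 2.0)

# Simulated free diffusion: the peak variance grows roughly linearly with lag.
rng = np.random.default_rng(4)
n_part, n_frames, size, D = 200, 60, 64, 0.5
pos = rng.uniform(0, size, (n_part, 2))
stack = np.zeros((n_frames, size, size))
for t in range(n_frames):
    pos = (pos + rng.normal(0, np.sqrt(2 * D), pos.shape)) % size
    idx = pos.astype(int)
    np.add.at(stack[t], (idx[:, 0], idx[:, 1]), 1.0)
for lag in (1, 5, 10):
    print(lag, round(peak_variance(spatiotemporal_correlation(stack, lag)), 2))
```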
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
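The first stage described above, estimating a per-pixel orientation field with a bank of Gabor filters, can be sketched compactly. The filter parameters here are arbitrary illustrations (the paper specifies its own bank), and the subsequent phase-portrait modeling is not shown:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Zero-mean real Gabor kernel oscillating along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()

def orientation_field(image, n_angles=12):
    """Dominant orientation (radians) and response magnitude per pixel."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    resp = np.stack([np.abs(fftconvolve(image, gabor_kernel(t), mode="same"))
                     for t in angles])
    return angles[resp.argmax(axis=0)], resp.max(axis=0)

# Synthetic stripes oscillating along 0.5 rad pick the nearest bank angle.
y, x = np.mgrid[:64, :64]
stripes = np.sin(2 * np.pi * (x * np.cos(0.5) + y * np.sin(0.5)) / 8.0)
theta_map, _ = orientation_field(stripes)
print(round(float(np.median(theta_map)), 3))  # ~0.524 = pi/6, nearest to 0.5 rad
```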
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Orthogonal Protein Purification Facilitated by a Small Bispecific Affinity Tag
Authors: Johan Nilvebrant, Tove Alm, Sophia Hober.
Institutions: Royal Institute of Technology.
Due to the high costs associated with purification of recombinant proteins, purification protocols need to be rationalized. For high-throughput efforts there is a demand for general methods that do not require target-protein-specific optimization1. To achieve this, purification tags that can be genetically fused to the gene of interest are commonly used2. The most widely used affinity handle is the hexa-histidine tag, which is suitable for purification under both native and denaturing conditions3. The metabolic burden for producing the tag is low, but it does not provide as high specificity as competing affinity-chromatography-based strategies1,2. Here, a bispecific purification tag with two different binding sites on a small, 46-amino-acid protein domain has been developed. The albumin-binding domain is derived from Streptococcal protein G and has a strong inherent affinity for human serum albumin (HSA). Eleven surface-exposed amino acids, not involved in albumin-binding4, were genetically randomized to produce a combinatorial library. The protein library with the novel randomly arranged binding surface (Figure 1) was expressed on phage particles to facilitate selection of binders by phage display technology. Through several rounds of biopanning against a dimeric Z-domain derived from Staphylococcal protein A5, a small, bispecific molecule with affinity for both HSA and the novel target was identified6. The novel protein domain, referred to as ABDz1, was evaluated as a purification tag for a selection of target proteins with different molecular weights, solubilities and isoelectric points. Three target proteins were expressed in Escherichia coli with the novel tag fused to their N-termini and thereafter affinity purified. Initial purification on either a column with immobilized HSA or one with the Z-domain resulted in relatively pure products. Two-step affinity purification with the bispecific tag resulted in substantial improvement of protein purity. Chromatographic media with the Z-domain immobilized, for example MabSelect SuRe, are readily available for purification of antibodies, and HSA can easily be chemically coupled to media to provide the second matrix. This method is especially advantageous when there is a high demand on purity of the recovered target protein. The bifunctionality of the tag allows two different chromatographic steps to be used while the metabolic burden on the expression host is limited due to the small size of the tag. It provides a competitive alternative to so-called combinatorial tagging, where multiple tags are used in combination1,7.
Molecular Biology, Issue 59, Affinity chromatography, albumin-binding domain, human serum albumin, Z-domain
GST-His purification: A Two-step Affinity Purification Protocol Yielding Full-length Purified Proteins
Authors: Ranjan Maity, Joris Pauty, Jana Krietsch, Rémi Buisson, Marie-Michelle Genois, Jean-Yves Masson.
Institutions: Hôtel-Dieu de Québec.
Key assays in enzymology for the biochemical characterization of proteins in vitro necessitate high concentrations of the purified protein of interest. Protein purification protocols should combine efficiency, simplicity and cost-effectiveness1. Here, we describe the GST-His method as a new small-scale affinity purification system for recombinant proteins, based on an N-terminal glutathione S-transferase (GST) tag2,3 and a C-terminal 10xHis tag4, which are both fused to the protein of interest. This construct is used to generate baculoviruses for infection of Sf9 cells for protein expression5. GST is a rather long tag (29 kDa), which serves to ensure purification efficiency. However, it might influence physiological properties of the protein. Hence, it is subsequently cleaved off the protein using the PreScission protease6. In order to ensure maximum purity and to remove the cleaved GST, we added a second affinity purification step based on the comparatively small His-tag. Importantly, our technique is based on two different tags flanking the two ends of the protein, which is an efficient way to remove degraded proteins and, therefore, enrich full-length protein. The method presented here does not require an expensive instrumental setup, such as FPLC. Additionally, we incorporated MgCl2 and ATP washes to remove heat shock protein impurities and nuclease treatment to abolish contaminating nucleic acids. In summary, the combination of two different tags flanking the N- and C-termini and the capability to cleave off one of the tags guarantees the recovery of a highly purified, full-length protein of interest.
Biochemistry, Issue 80, Genetics, Molecular Biology, Proteins, Proteomics, recombinant protein, affinity purification, Glutathione Sepharose Tag, Talon metal affinity resin
Direct Restart of a Replication Fork Stalled by a Head-On RNA Polymerase
Authors: Richard T. Pomerantz, Mike O'Donnell.
Institutions: Rockefeller University.
In vivo studies suggest that replication forks are arrested due to encounters with head-on transcription complexes. Yet the fate of the replisome and RNA polymerase (RNAP) following a head-on collision is unknown. Here, we find that the E. coli replisome stalls upon collision with a head-on transcription complex, but instead of collapsing, the replication fork remains highly stable and eventually resumes elongation after displacing the RNAP from DNA. We also find that the transcription-repair coupling factor, Mfd, promotes direct restart of the fork following the collision by facilitating displacement of the RNAP. These findings demonstrate the intrinsic stability of the replication apparatus and a novel role for the transcription-coupled repair pathway in promoting replication past an RNAP block.
Cellular Biology, Issue 38, replication, transcription, transcription-coupled repair, replisome, RNA polymerase, collision
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Authors: Hans Maaswinkel, Liqun Zhu, Wei Weng.
Institutions: xyZfish.
Like many aquatic animals, zebrafish (Danio rerio) move in 3D space. It is thus preferable to use a 3D recording system to study their behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. Step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool for studying the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
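Two small computations sit at the heart of such a system: correcting for refraction at the water-air transition, and deriving xyz kinematic parameters from the reconstructed tracks. A minimal sketch (paraxial refraction only; the published calibration uses a full model of the mirror geometry):

```python
import numpy as np

def refraction_corrected_depth(apparent_depth, n_water=1.33):
    """Paraxial correction for viewing through a water surface from air:
    an object at true depth d appears at d / n_water."""
    return apparent_depth * n_water

def kinematics(xyz, dt):
    """Speed, acceleration magnitude, and turning angle (deg) from a (T, 3) track."""
    v = np.diff(xyz, axis=0) / dt
    speed = np.linalg.norm(v, axis=1)
    accel = np.linalg.norm(np.diff(v, axis=0), axis=1) / dt
    u = v / np.clip(speed[:, None], 1e-12, None)           # unit headings
    cosang = np.clip((u[:-1] * u[1:]).sum(axis=1), -1.0, 1.0)
    return speed, accel, np.degrees(np.arccos(cosang))

track = np.cumsum(np.random.default_rng(5).normal(0, 0.5, (300, 3)), axis=0)
speed, accel, turning = kinematics(track, dt=1.0 / 30.0)
print(f"mean speed {speed.mean():.1f}, mean turn {turning.mean():.0f} deg")
```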
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
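The source analysis referred to here, minimum-norm estimation, has a compact linear-algebra core: with a leadfield L mapping source amplitudes s to sensor data x, the regularized minimum-norm inverse is s_hat = L^T (L L^T + lambda^2 I)^{-1} x. A self-contained sketch with a random toy leadfield; real pipelines build L from the individual or age-appropriate head model:

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, snr=3.0):
    """L2 minimum-norm inverse s = L^T (L L^T + lam2 I)^-1 x, with lam2 set
    from an assumed SNR (whitened-data convention)."""
    n_sensors = leadfield.shape[0]
    lam2 = np.trace(leadfield @ leadfield.T) / (n_sensors * snr ** 2)
    gram = leadfield @ leadfield.T + lam2 * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, data)

rng = np.random.default_rng(6)
L = rng.normal(size=(64, 500))            # 64 channels, 500 cortical sources (toy)
s_true = np.zeros((500, 1)); s_true[123] = 1.0
x = L @ s_true + 0.01 * rng.normal(size=(64, 1))
s_hat = minimum_norm_estimate(L, x)
print(int(np.abs(s_hat).argmax()))        # should land at (or near) 123
```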
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
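At its simplest, the recasting recipe in the last sentence is a counting exercise: the expected signal yield in a signal region is sigma x integrated luminosity x acceptance x efficiency, and a model point is excluded if that yield exceeds the published model-independent 95% CL upper limit on signal events. A sketch with hypothetical numbers (real recasts also propagate uncertainties and combine signal regions):

```python
def excluded(cross_section_pb, luminosity_ifb, acceptance, efficiency, s95_obs):
    """True if the expected signal yield in a signal region exceeds the
    observed model-independent 95% CL upper limit S95 on signal events."""
    # 1 pb = 1000 fb, so sigma[pb] * 1000 * L[fb^-1] gives an event count.
    expected_signal = cross_section_pb * 1000.0 * luminosity_ifb * acceptance * efficiency
    return expected_signal > s95_obs

# Hypothetical point: sigma = 0.05 pb, 20 fb^-1, A = 0.12, eps = 0.7, S95 = 30.
print(excluded(0.05, 20.0, 0.12, 0.7, 30.0))  # 84 expected events > 30 -> True
```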
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from normal, by building up an image molecule by molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
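A useful companion to such test samples is the standard Thompson-Larson-Webb estimate of single-molecule localization precision, which ties achievable resolution to photon count, PSF width, pixel size, and background. A sketch with illustrative numbers:

```python
import math

def thompson_precision(sigma_psf_nm, n_photons, pixel_nm, bg_photons):
    """Thompson-Larson-Webb 2D localization precision (nm):
    sigma^2 = s^2/N + a^2/(12 N) + 8 pi s^4 b^2 / (a^2 N^2),
    with s = PSF sigma, N = photons, a = pixel size, b = background."""
    s2 = sigma_psf_nm ** 2
    return math.sqrt(s2 / n_photons
                     + pixel_nm ** 2 / (12 * n_photons)
                     + 8 * math.pi * s2 ** 2 * bg_photons ** 2 / (pixel_nm ** 2 * n_photons ** 2))

# e.g. PSF sigma 130 nm, 1,000 photons, 100 nm pixels, background 10 photons/pixel
print(f"{thompson_precision(130, 1000, 100, 10):.1f} nm")  # ~9.5 nm
```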
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Visualizing Clathrin-mediated Endocytosis of G Protein-coupled Receptors at Single-event Resolution via TIRF Microscopy
Authors: Amanda L. Soohoo, Shanna L. Bowersox, Manojkumar A. Puthenveedu.
Institutions: Carnegie Mellon University.
Many important signaling receptors are internalized through the well-studied process of clathrin-mediated endocytosis (CME). Traditional cell biological assays, measuring global changes in endocytosis, have identified over 30 known components participating in CME, and biochemical studies have generated an interaction map of many of these components. It is becoming increasingly clear, however, that CME is a highly dynamic process whose regulation is complex and delicate. In this manuscript, we describe the use of Total Internal Reflection Fluorescence (TIRF) microscopy to directly visualize the dynamics of components of the clathrin-mediated endocytic machinery, in real time in living cells, at the level of individual events that mediate this process. This approach is essential to elucidate the subtle changes that can alter endocytosis without globally blocking it, as is seen with physiological regulation. We will focus on using this technique to analyze an area of emerging interest, the role of cargo composition in modulating the dynamics of distinct clathrin-coated pits (CCPs). This protocol is compatible with a variety of widely available fluorescence probes, and may be applied to visualizing the dynamics of many cargo molecules that are internalized from the cell surface.
Cellular Biology, Issue 92, Endocytosis, TIRF, total internal reflection fluorescence microscopy, clathrin, arrestin, receptors, live-cell microscopy, clathrin-mediated endocytosis
In Situ SIMS and IR Spectroscopy of Well-defined Surfaces Prepared by Soft Landing of Mass-selected Ions
Authors: Grant E. Johnson, K. Don Dasitha Gunaratne, Julia Laskin.
Institutions: Pacific Northwest National Laboratory.
Soft landing of mass-selected ions onto surfaces is a powerful approach for the highly-controlled preparation of materials that are inaccessible using conventional synthesis techniques. Coupling soft landing with in situ characterization using secondary ion mass spectrometry (SIMS) and infrared reflection absorption spectroscopy (IRRAS) enables analysis of well-defined surfaces under clean vacuum conditions. The capabilities of three soft-landing instruments constructed in our laboratory are illustrated for the representative system of surface-bound organometallics prepared by soft landing of mass-selected ruthenium tris(bipyridine) dications, [Ru(bpy)3]2+ (bpy = bipyridine), onto carboxylic acid terminated self-assembled monolayer surfaces on gold (COOH-SAMs). In situ time-of-flight (TOF)-SIMS provides insight into the reactivity of the soft-landed ions. In addition, the kinetics of charge reduction, neutralization and desorption occurring on the COOH-SAM both during and after ion soft landing are studied using in situ Fourier transform ion cyclotron resonance (FT-ICR)-SIMS measurements. In situ IRRAS experiments provide insight into how the structure of organic ligands surrounding metal centers is perturbed through immobilization of organometallic ions on COOH-SAM surfaces by soft landing. Collectively, the three instruments provide complementary information about the chemical composition, reactivity and structure of well-defined species supported on surfaces.
Chemistry, Issue 88, soft landing, mass selected ions, electrospray, secondary ion mass spectrometry, infrared spectroscopy, organometallic, catalysis
Profiling Thiol Redox Proteome Using Isotope Tagging Mass Spectrometry
Authors: Jennifer Parker, Ning Zhu, Mengmeng Zhu, Sixue Chen.
Institutions: University of Florida.
Pseudomonas syringae pv. tomato strain DC3000 causes bacterial speck disease not only in Solanum lycopersicum but also in Brassica species, as well as in Arabidopsis thaliana, a genetically tractable host plant1,2. The accumulation of reactive oxygen species (ROS) in cotyledons inoculated with DC3000 indicates a role of ROS in modulating necrotic cell death during bacterial speck disease of tomato3. Hydrogen peroxide, a component of ROS, is produced after inoculation of tomato plants with Pseudomonas3. Hydrogen peroxide can be detected using the histochemical stain 3,3'-diaminobenzidine (DAB)4. DAB reacts with hydrogen peroxide to produce a brown stain on the leaf tissue4. ROS play a regulatory role in the cellular redox environment, which can change the redox status of certain proteins5. Cysteine is an important amino acid sensitive to redox changes. Under mild oxidation, reversible oxidation of cysteine sulfhydryl groups serves as a redox sensor and signal transducer that regulates a variety of physiological processes6,7. Tandem mass tag (TMT) reagents enable concurrent identification and multiplexed quantitation of proteins in different samples using tandem mass spectrometry8,9. The cysteine-reactive TMT (cysTMT) reagents enable selective labeling and relative quantitation of cysteine-containing peptides from up to six biological samples. Each isobaric cysTMT tag has the same nominal parent mass and is composed of a sulfhydryl-reactive group, an MS-neutral spacer arm and an MS/MS reporter10. After labeling, the samples were subjected to protease digestion. The cysteine-labeled peptides were enriched using a resin containing an anti-TMT antibody. During MS/MS analysis, a series of reporter ions (i.e., 126-131 Da) emerges in the low-mass region, providing information on relative quantitation. The workflow is effective for reducing sample complexity, improving dynamic range and studying cysteine modifications. Here we present a redox proteomic analysis of Pst DC3000-treated tomato (Rio Grande) leaves using cysTMT technology. This high-throughput method has the potential to be applied to studying other redox-regulated physiological processes.
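The relative quantitation step reduces to arithmetic on the six reporter-ion intensities: correct for loading differences between channels, then express each channel relative to a reference. A minimal sketch with made-up intensities; real pipelines add isotope-impurity correction and peptide-to-protein rollup.

```python
import numpy as np

def reporter_ratios(intensities, reference_channel=0):
    """Relative quantitation from reporter-ion intensities (n_psms x 6, e.g.
    the 126-131 Da cysTMT channels): normalize each channel by its column
    total, then take ratios to the reference channel per PSM."""
    I = np.asarray(intensities, dtype=float)
    I = I / I.sum(axis=0, keepdims=True)      # equalize total loading per channel
    return I / I[:, [reference_channel]]      # fold change vs. reference channel

psms = np.array([[1.0e5, 1.1e5, 0.9e5, 2.0e5, 1.0e5, 1.0e5],
                 [5.0e4, 5.2e4, 4.8e4, 9.9e4, 5.1e4, 5.0e4]])
print(np.round(reporter_ratios(psms), 2))
```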
Genetics, Issue 61, Pseudomonas syringae pv. tomato (Pst), redox proteome, cysteine-reactive tandem mass tag (cysTMT), LTQ-Orbitrap mass spectrometry
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
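Multiscale entropy is short enough to sketch in full: coarse-grain the signal by non-overlapping averaging at each scale, then compute sample entropy with a tolerance fixed from the original signal's standard deviation. This follows the standard formulation; match-counting conventions and parameter choices vary across toolboxes.

```python
import numpy as np

def sample_entropy(x, m, r):
    """SampEn(m, r): -log of the chance that sequences matching for m points
    (Chebyshev distance <= r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)

    def match_count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int((d <= r).sum())
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10, m=2, r_factor=0.15):
    """Sample entropy of the coarse-grained signal at scales 1..max_scale,
    with r fixed from the original (scale-1) standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    curve = []
    for scale in range(1, max_scale + 1):
        n = len(x) // scale
        coarse = x[:n * scale].reshape(n, scale).mean(axis=1)
        curve.append(sample_entropy(coarse, m, r))
    return curve

rng = np.random.default_rng(7)
mse = multiscale_entropy(rng.normal(size=2000), max_scale=5)
print([round(v, 2) for v in mse])  # white noise: entropy decreases with scale
```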
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, more features related to ICP, such as texture information and blood amount, are extracted from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification with methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step helping physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
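To make the ideal-midline-from-symmetry step concrete, here is a crude stand-in: scan candidate columns of an axial slice and keep the one about which the image is most mirror-symmetric. The actual system combines skull symmetry with anatomical features, ventricle segmentation, and shape matching.

```python
import numpy as np

def ideal_midline_column(slice2d, search_fraction=0.25):
    """Column about which left-right mirror symmetry (negative MSE between
    mirrored halves) is highest; searched over the central portion."""
    ny, nx = slice2d.shape
    lo, hi = int(nx * search_fraction), int(nx * (1 - search_fraction))
    best_col, best_score = nx // 2, -np.inf
    for c in range(lo, hi):
        w = min(c, nx - 1 - c)
        left = slice2d[:, c - w:c]
        right = slice2d[:, c + 1:c + 1 + w][:, ::-1]
        score = -np.mean((left - right) ** 2)
        if score > best_score:
            best_col, best_score = c, score
    return best_col

# Toy "skull": a bright ellipse centered off-middle; the estimate finds its axis.
yy, xx = np.mgrid[:128, :128]
phantom = (((xx - 70) / 40.0) ** 2 + ((yy - 64) / 55.0) ** 2 < 1).astype(float)
print(ideal_midline_column(phantom))  # ~70
```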
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library contains no content relevant to the topic of a given abstract. In these cases, our algorithms display the most relevant videos available, which can sometimes result in matched videos with only a slight relation.