JoVE Visualize

Pubmed Article
openPDS: protecting the privacy of metadata through SafeAnswers.
PUBLISHED: 01-01-2014
The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' locations, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is twofold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties; it has been implemented in two field studies. (2) We introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be shared directly, individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.
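The SafeAnswers model is easiest to see in miniature. The sketch below is illustrative only (the record layout, function names, and threshold are invented, not openPDS's API): a "question" is vetted code that executes inside the personal data store, and only its low-dimensional answer ever leaves.

```python
# Minimal sketch of the SafeAnswers idea (illustrative, not the openPDS code):
# a service submits a question that is evaluated inside the user's personal
# data store (PDS); only the low-dimensional answer leaves the store.
from datetime import datetime

# Toy metadata held inside the PDS: (timestamp, cell-tower id) location records.
metadata = [
    (datetime(2014, 1, 6, 8, 30), "tower_12"),
    (datetime(2014, 1, 6, 19, 5), "tower_47"),
    (datetime(2014, 1, 7, 8, 40), "tower_12"),
]

def q_nights_in_home_area(records, home_towers):
    """A vetted 'question': does the user mostly spend evenings in their home
    area? Returns a single boolean rather than the raw location trace."""
    evening = [tower for ts, tower in records if ts.hour >= 19 or ts.hour <= 6]
    if not evening:
        return False
    return sum(t in home_towers for t in evening) / len(evening) > 0.5

def safe_answer(question, records, **params):
    # The PDS executes the question locally; only the answer is shared.
    return question(records, **params)

print(safe_answer(q_nights_in_home_area, metadata, home_towers={"tower_47"}))
```

The privacy gain is that the service never sees the high-dimensional location trace, only a single low-dimensional answer that is far harder to re-identify.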
Related JoVE Video

Authors: Benjamin Pavie, Satwik Rajaram, Austin Ouyang, Jason M. Altschuler, Robert J. Steininger III, Lani F. Wu, Steven J. Altschuler.
Published: 03-19-2014
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine-tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, confirm responses to perturbations, and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass quality control analysis, but also as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
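To make the segmentation-free idea concrete, here is a minimal numpy-only sketch. It is not PhenoRipper's published algorithm, only an assumed illustration of block-based profiling: tile each image into blocks, keep the brighter foreground blocks, and summarize the image as a normalized histogram of block intensities that can be compared across wells.

```python
# Illustrative segmentation-free profiling in the spirit of PhenoRipper
# (not the actual PhenoRipper algorithm): tile each image into fixed-size
# blocks, discard background blocks, and describe the image by the
# distribution of block-level intensities.
import numpy as np

def block_profile(img, block=20, n_bins=8):
    h, w = img.shape
    means = np.array([
        img[y:y + block, x:x + block].mean()
        for y in range(0, h - block + 1, block)
        for x in range(0, w - block + 1, block)
    ])
    fg = means[means > means.mean()]          # crude foreground selection
    hist, _ = np.histogram(fg, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)          # normalized block-type profile

rng = np.random.default_rng(0)
well_a = rng.random((200, 200))               # stand-ins for fluorescence images
well_b = rng.random((200, 200)) ** 2          # a different "phenotype"
# Wells whose profiles are close would be grouped as similar phenotypes.
print(np.abs(block_profile(well_a) - block_profile(well_b)).sum())
```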
21 Related JoVE Articles!
Using Flatbed Scanners to Collect High-resolution Time-lapsed Images of the Arabidopsis Root Gravitropic Response
Authors: Halie C Smith, Devon J Niewohner, Grant D Dewey, Autumn M Longo, Tracy L Guy, Bradley R Higgins, Sarah B Daehling, Sarah C. Genrich, Christopher D Wentworth, Tessa L Durham Brooks.
Institutions: Doane College.
Research efforts in biology increasingly require the use of methodologies that enable high-volume collection of high-resolution data. A challenge laboratories can face is the development and acquisition of these methods. Observation of phenotypes in a process of interest is a typical objective of research labs studying gene function, and this is often achieved through image capture. A particular process that is amenable to observation using imaging approaches is the corrective growth of a seedling root that has been displaced from alignment with the gravity vector. Imaging platforms used to measure the root gravitropic response can be expensive, relatively low in throughput, and/or labor intensive. These issues have been addressed by developing a high-throughput image capture method using inexpensive, yet high-resolution, flatbed scanners. Using this method, images can be captured every few minutes at 4,800 dpi. The current setup enables collection of 216 individual responses per day. The image data collected is of ample quality for image analysis applications.
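In practice, a capture loop of this kind can be driven by a short script. The sketch below assumes a SANE-compatible scanner and the scanimage command-line tool; the 3-minute interval, frame count, and file names are illustrative, not the authors' exact settings.

```python
# Minimal time-lapse capture loop for a flatbed scanner, assuming the SANE
# scanimage CLI is installed and a scanner is attached (settings illustrative).
import subprocess
import time

INTERVAL_S = 180      # capture every 3 minutes
N_FRAMES = 20

for i in range(N_FRAMES):
    outfile = f"root_tl_{i:04d}.tiff"
    with open(outfile, "wb") as f:
        subprocess.run(
            ["scanimage", "--resolution", "4800", "--format=tiff"],
            stdout=f,
            check=True,
        )
    time.sleep(INTERVAL_S)
```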
Basic Protocol, Issue 83, root gravitropism, Arabidopsis, high-throughput phenotyping, flatbed scanners, image analysis, undergraduate research
A Quantitative Fitness Analysis Workflow
Authors: A.P. Banks, C. Lawless, D.A. Lydall.
Institutions: Newcastle University Medical School.
Quantitative Fitness Analysis (QFA) is an experimental and computational workflow for comparing fitnesses of microbial cultures grown in parallel1,2,3,4. QFA can be applied to focused observations of single cultures but is most useful for genome-wide genetic interaction or drug screens investigating up to thousands of independent cultures. The central experimental method is the inoculation of independent, dilute liquid microbial cultures onto solid agar plates which are incubated and regularly photographed. Photographs from each time-point are analyzed, producing quantitative cell density estimates, which are used to construct growth curves, allowing quantitative fitness measures to be derived. Culture fitnesses can be compared to quantify and rank genetic interaction strengths or drug sensitivities. The effect on culture fitness of any treatments added into substrate agar (e.g. small molecules, antibiotics or nutrients) or applied to plates externally (e.g. UV irradiation, temperature) can be quantified by QFA. The QFA workflow produces growth rate estimates analogous to those obtained by spectrophotometric measurement of parallel liquid cultures in 96-well or 200-well plate readers. Importantly, QFA has significantly higher throughput compared with such methods. QFA cultures grow on a solid agar surface and are therefore well aerated during growth without the need for stirring or shaking. QFA throughput is not as high as that of some Synthetic Genetic Array (SGA) screening methods5,6. However, since QFA cultures are heavily diluted before being inoculated onto agar, QFA can capture more complete growth curves, including exponential and saturation phases3. For example, growth curve observations allow culture doubling times to be estimated directly with high precision, as discussed previously1. Here we present a specific QFA protocol applied to thousands of S. cerevisiae cultures which are automatically handled by robots during inoculation, incubation and imaging. Any of these automated steps can be replaced by an equivalent, manual procedure, with an associated reduction in throughput, and we also present a lower throughput manual protocol. The same QFA software tools can be applied to images captured in either workflow. We have extensive experience applying QFA to cultures of the budding yeast S. cerevisiae but we expect that QFA will prove equally useful for examining cultures of the fission yeast S. pombe and bacterial cultures.
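The growth-curve step can be illustrated with a short sketch: fit a logistic model to cell-density estimates from the timecourse and derive a doubling time as ln(2)/r. The data below are synthetic and the exact model is an assumption; the QFA software implements its own fitting routines.

```python
# Sketch of the QFA-style analysis step: fit a logistic growth model to
# cell-density estimates from timecourse photographs and derive a doubling
# time. Synthetic data here; the real pipeline estimates density from images.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, g0):
    # K: carrying capacity, r: intrinsic growth rate, g0: inoculum density
    return K * g0 * np.exp(r * t) / (K + g0 * (np.exp(r * t) - 1.0))

t = np.linspace(0, 48, 25)                              # hours
noise = 1 + 0.03 * np.random.default_rng(1).standard_normal(t.size)
obs = logistic(t, 1.0, 0.35, 0.001) * noise             # noisy density estimates

(K, r, g0), _ = curve_fit(logistic, t, obs, p0=(1.0, 0.2, 0.01))
print(f"doubling time ≈ {np.log(2) / r:.2f} h")         # ln(2)/r in exponential phase
```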
Physiology, Issue 66, Medicine, Robotic, microbial, culture, yeast, array, library, high-throughput, analysis, fitness, growth rate, quantitative, solid agar
Microvascular Decompression: Salient Surgical Principles and Technical Nuances
Authors: Jonathan Forbes, Calvin Cooper, Walter Jermakowicz, Joseph Neimat, Peter Konrad.
Institutions: Vanderbilt University Medical Center.
Trigeminal neuralgia is a disorder associated with severe episodes of lancinating pain in the distribution of the trigeminal nerve. Previous reports indicate that 80-90% of cases are related to compression of the trigeminal nerve by an adjacent vessel. The majority of patients with trigeminal neuralgia eventually require surgical management in order to achieve remission of symptoms. Surgical options for management include ablative procedures (e.g., radiosurgery, percutaneous radiofrequency lesioning, balloon compression, glycerol rhizolysis, etc.) and microvascular decompression. Ablative procedures fail to address the root cause of the disorder and are less effective at preventing recurrence of symptoms over the long term than microvascular decompression. However, microvascular decompression is inherently more invasive than ablative procedures and is associated with increased surgical risks. Previous studies have demonstrated a correlation between surgeon experience and patient outcome in microvascular decompression. In this series of 59 patients operated on by two neurosurgeons (JSN and PEK) since 2006, 93% of patients demonstrated substantial improvement in their trigeminal neuralgia following the procedure, with follow-up ranging from 6 weeks to 2 years. Moreover, 41 of 66 patients (approximately 62%) have been entirely pain-free following the operation. In this publication, video format is utilized to review the microsurgical pathology of this disorder. Steps of the operative procedure are reviewed and salient principles and technical nuances useful in minimizing complications and maximizing efficacy are discussed.
Medicine, Issue 53, microvascular, decompression, trigeminal, neuralgia, operation, video
Generation of Comprehensive Thoracic Oncology Database - Tool for Translational Research
Authors: Mosmi Surati, Matthew Robinson, Suvobroto Nandi, Leonardo Faoro, Carley Demchuk, Rajani Kanteti, Benjamin Ferguson, Tara Gangadhar, Thomas Hensing, Rifat Hasina, Aliya Husain, Mark Ferguson, Theodore Karrison, Ravi Salgia.
Institutions: University of Chicago, Northshore University Health Systems.
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using a protocol for prospective tissue banking and another protocol for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
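The relational linking described above can be sketched with Python's sqlite3 standing in for Microsoft Access (the table and column names here are invented for illustration): one query joins clinical annotation to laboratory measurements through shared keys.

```python
# Minimal relational sketch of the database design: clinical and laboratory
# tables linked by keys, queried together (sqlite3 stands in for MS Access;
# schema and values are invented for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patients (patient_id INTEGER PRIMARY KEY, diagnosis TEXT, stage TEXT);
    CREATE TABLE samples  (sample_id INTEGER PRIMARY KEY, patient_id INTEGER,
                           tissue TEXT, FOREIGN KEY(patient_id) REFERENCES patients);
    CREATE TABLE proteomics (sample_id INTEGER, protein TEXT, expression REAL);
    INSERT INTO patients VALUES (1, 'NSCLC', 'IIIA');
    INSERT INTO samples  VALUES (10, 1, 'tumor');
    INSERT INTO proteomics VALUES (10, 'MET', 2.4);
""")
# One query connects clinical annotation to laboratory measurements.
for row in con.execute("""
        SELECT p.diagnosis, p.stage, s.tissue, pr.protein, pr.expression
        FROM patients p
        JOIN samples s    ON s.patient_id = p.patient_id
        JOIN proteomics pr ON pr.sample_id = s.sample_id"""):
    print(row)
```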
Medicine, Issue 47, Database, Thoracic oncology, Bioinformatics, Biorepository, Microsoft Access, Proteomics, Genomics
The Trier Social Stress Test Protocol for Inducing Psychological Stress
Authors: Melissa A. Birkett.
Institutions: Northern Arizona University.
This article demonstrates a psychological stress protocol for use in a laboratory setting. Protocols that allow researchers to study the biological pathways of the stress response in health and disease are fundamental to the progress of research in stress and anxiety.1 Although numerous protocols exist for inducing a stress response in the laboratory, many neglect to provide a naturalistic context or to incorporate aspects of social and psychological stress. Of psychological stress protocols, meta-analysis suggests that the Trier Social Stress Test (TSST) is the most useful and appropriate standardized protocol for studies of stress hormone reactivity.2 In the original description of the TSST, researchers sought to design and evaluate a procedure capable of inducing a reliable stress response in the majority of healthy volunteers.3 These researchers found elevations in heart rate, blood pressure and several endocrine stress markers in response to the TSST (a psychological stressor) compared to a saline injection (a physical stressor).3 Although the TSST has been modified to meet the needs of various research groups, it generally consists of a waiting period upon arrival, anticipatory speech preparation, speech performance, and verbal arithmetic performance periods, followed by one or more recovery periods. The TSST requires participants to prepare and deliver a speech, and verbally respond to a challenging arithmetic problem in the presence of a socially evaluative audience.3 Social evaluation and uncontrollability have been identified as key components of stress induction by the TSST.4 In use for over a decade, the TSST is designed to systematically induce a stress response in order to measure differences in reactivity, anxiety and activation of the hypothalamic-pituitary-adrenal (HPA) or sympathetic-adrenal-medullary (SAM) axis during the task.1 Researchers generally assess changes in self-reported anxiety, physiological measures (e.g. heart rate), and/or neuroendocrine indices (e.g. the stress hormone cortisol) in response to the TSST. Many investigators have adopted salivary sampling for stress markers such as cortisol and alpha-amylase (a marker of autonomic nervous system activation) as an alternative to blood sampling to reduce the confounding stress of blood-collection techniques. In addition to changes experienced by an individual completing the TSST, researchers can compare changes between different treatment groups (e.g. clinical versus healthy control samples) or the effectiveness of stress-reducing interventions.1
Medicine, Issue 56, Stress, anxiety, laboratory stressor, cortisol, physiological response, psychological stressor
Microwave-assisted Functionalization of Poly(ethylene glycol) and On-resin Peptides for Use in Chain Polymerizations and Hydrogel Formation
Authors: Amy H. Van Hove, Brandon D. Wilson, Danielle S. W. Benoit.
Institutions: University of Rochester, University of Rochester Medical Center.
One of the main benefits to using poly(ethylene glycol) (PEG) macromers in hydrogel formation is synthetic versatility. The ability to draw from a large variety of PEG molecular weights and configurations (arm number, arm length, and branching pattern) affords researchers tight control over resulting hydrogel structures and properties, including Young’s modulus and mesh size. This video will illustrate a rapid, efficient, solvent-free, microwave-assisted method to methacrylate PEG precursors into poly(ethylene glycol) dimethacrylate (PEGDM). This synthetic method provides much-needed starting materials for applications in drug delivery and regenerative medicine. The demonstrated method is superior to traditional methacrylation methods as it is significantly faster and simpler, as well as more economical and environmentally friendly, using smaller amounts of reagents and solvents. We will also demonstrate an adaptation of this technique for on-resin methacrylamide functionalization of peptides. This on-resin method allows the N-terminus of peptides to be functionalized with methacrylamide groups prior to deprotection and cleavage from resin. This allows for selective addition of methacrylamide groups to the N-termini of the peptides while amino acids with reactive side groups (e.g. primary amine of lysine, primary alcohol of serine, secondary alcohols of threonine, and phenol of tyrosine) remain protected, preventing functionalization at multiple sites. This article will detail common analytical methods (proton Nuclear Magnetic Resonance spectroscopy (1H-NMR) and Matrix Assisted Laser Desorption Ionization Time of Flight mass spectrometry (MALDI-ToF)) to assess the efficiency of the functionalizations. Common pitfalls and suggested troubleshooting methods will be addressed, as will modifications of the technique which can be used to further tune macromer functionality and resulting hydrogel physical and chemical properties. Use of synthesized products for the formation of hydrogels for drug delivery and cell-material interaction studies will be demonstrated, with particular attention paid to modifying hydrogel composition to affect mesh size, controlling hydrogel stiffness and drug release.
Chemistry, Issue 80, Poly(ethylene glycol), peptides, polymerization, polymers, methacrylation, peptide functionalization, 1H-NMR, MALDI-ToF, hydrogels, macromer synthesis
Performing Behavioral Tasks in Subjects with Intracranial Electrodes
Authors: Matthew A. Johnson, Susan Thompson, Jorge Gonzalez-Martinez, Hyun-Joo Park, Juan Bulacio, Imad Najm, Kevin Kahn, Matthew Kerr, Sridevi V. Sarma, John T. Gale.
Institutions: Cleveland Clinic Foundation, Johns Hopkins University.
Patients having stereo-electroencephalography (SEEG) electrode, subdural grid or depth electrode implants have a multitude of electrodes implanted in different areas of their brain for the localization of their seizure focus and eloquent areas. After implantation, the patient must remain in the hospital until the pathological area of brain is found and possibly resected. During this time, these patients offer a unique opportunity to the research community because any number of behavioral paradigms can be performed to uncover the neural correlates that guide behavior. Here we present a method for recording brain activity from intracranial implants as subjects perform a behavioral task designed to assess decision-making and reward encoding. All electrophysiological data from the intracranial electrodes are recorded during the behavioral task, allowing for the examination of the many brain areas involved in a single function at time scales relevant to behavior. Moreover, and unlike animal studies, human patients can learn a wide variety of behavioral tasks quickly, making it possible to perform more than one task in the same subject or to run controls. Despite the many advantages of this technique for understanding human brain function, there are also methodological limitations that we discuss, including environmental factors, analgesic effects, time constraints and recordings from diseased tissue. This method may be easily implemented by any institution that performs intracranial assessments, providing the opportunity to directly examine human brain function during behavior.
Behavior, Issue 92, Cognitive neuroscience, Epilepsy, Stereo-electroencephalography, Subdural grids, Behavioral method, Electrophysiology
Solid-phase Submonomer Synthesis of Peptoid Polymers and their Self-Assembly into Highly-Ordered Nanosheets
Authors: Helen Tran, Sarah L. Gael, Michael D. Connolly, Ronald N. Zuckermann.
Institutions: Lawrence Berkeley National Laboratory.
Peptoids are a novel class of biomimetic, non-natural, sequence-specific heteropolymers that resist proteolysis, exhibit potent biological activity, and fold into higher order nanostructures. Structurally similar to peptides, peptoids are poly N-substituted glycines, where the side chains are attached to the nitrogen rather than the alpha-carbon. Their ease of synthesis and structural diversity allows testing of basic design principles to drive de novo design and engineering of new biologically-active and nanostructured materials. Here, a simple manual peptoid synthesis protocol is presented that allows the synthesis of long chain polypeptoids (up to 50-mers) in excellent yields. Only basic equipment, simple techniques (e.g. liquid transfer, filtration), and commercially available reagents are required, making peptoids an accessible addition to many researchers' toolkits. The peptoid backbone is grown one monomer at a time via the submonomer method which consists of a two-step monomer addition cycle: acylation and displacement. First, bromoacetic acid activated in situ with N,N'-diisopropylcarbodiimide acylates a resin-bound secondary amine. Second, nucleophilic displacement of the bromide by a primary amine follows to introduce the side chain. The two-step cycle is iterated until the desired chain length is reached. The coupling efficiency of this two-step cycle routinely exceeds 98% and enables the synthesis of peptoids as long as 50 residues. Highly tunable, precise and chemically diverse sequences are achievable with the submonomer method as hundreds of readily available primary amines can be directly incorporated. Peptoids are emerging as a versatile biomimetic material for nanobioscience research because of their synthetic flexibility, robustness, and ordering at the atomic level. The folding of a single-chain, amphiphilic, information-rich polypeptoid into a highly-ordered nanosheet was recently demonstrated. This peptoid is a 36-mer that consists of only three different commercially available monomers: hydrophobic, cationic and anionic. The hydrophobic phenylethyl side chains are buried in the nanosheet core whereas the ionic amine and carboxyl side chains align on the hydrophilic faces. The peptoid nanosheets serve as a potential platform for membrane mimetics, protein mimetics, device fabrication, and sensors. Methods for peptoid synthesis, sheet formation, and microscopy imaging are described and provide a simple method to enable future peptoid nanosheet designs.
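A quick back-of-the-envelope calculation shows why per-cycle efficiency is the crucial number for long peptoids: full-length yield compounds multiplicatively over the monomer-addition cycles. Assuming, for illustration, exactly 98% per cycle:

```python
# Full-length yield compounds multiplicatively over monomer-addition cycles.
per_cycle = 0.98
for length in (10, 36, 50):
    print(f"{length}-mer: {per_cycle ** length:.1%} full-length")  # ~81.7%, 48.3%, 36.4%
```

Even a small drop in per-cycle efficiency therefore quickly makes 36- to 50-mers impractical.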
Bioengineering, Issue 57, Biomimetic polymer, peptoid, nanosheet, solid-phase synthesis, self-assembly, bilayer
High-throughput Synthesis of Carbohydrates and Functionalization of Polyanhydride Nanoparticles
Authors: Brenda R. Carrillo-Conde, Rajarshi Roychoudhury, Ana V. Chavez-Santoscoy, Balaji Narasimhan, Nicola L.B. Pohl.
Institutions: Iowa State University.
Transdisciplinary approaches involving areas such as material design, nanotechnology, chemistry, and immunology have to be utilized to rationally design efficacious vaccine carriers. Nanoparticle-based platforms can prolong the persistence of vaccine antigens, which could improve vaccine immunogenicity1. Several biodegradable polymers have been studied as vaccine delivery vehicles1; in particular, polyanhydride particles have demonstrated the ability to provide sustained release of stable protein antigens and to activate antigen presenting cells and modulate immune responses2-12. The molecular design of these vaccine carriers needs to integrate the rational selection of polymer properties as well as the incorporation of appropriate targeting agents. High throughput automated fabrication of targeting ligands and functionalized particles is a powerful tool that will enhance the ability to study a wide range of properties and will lead to the design of reproducible vaccine delivery devices. The addition of targeting ligands capable of being recognized by specific receptors on immune cells has been shown to modulate and tailor immune responses10,11,13. C-type lectin receptors (CLRs) are pattern recognition receptors (PRRs) that recognize carbohydrates present on the surface of pathogens. The stimulation of immune cells via CLRs allows for enhanced internalization of antigen and subsequent presentation for further T cell activation14,15. Therefore, carbohydrate molecules play an important role in the study of immune responses; however, the use of these biomolecules often suffers from the lack of availability of structurally well-defined and pure carbohydrates. An automation platform based on iterative solution-phase reactions can enable rapid and controlled synthesis of these synthetically challenging molecules using significantly lower building block quantities than traditional solid-phase methods16,17. Herein we report a protocol for the automated solution-phase synthesis of oligosaccharides such as mannose-based targeting ligands with fluorous solid-phase extraction for intermediate purification. After development of automated methods to make the carbohydrate-based targeting agent, we describe methods for their attachment on the surface of polyanhydride nanoparticles employing an automated robotic set up operated by LabVIEW as previously described10. Surface functionalization with carbohydrates has shown efficacy in targeting CLRs10,11 and increasing the throughput of the fabrication method to unearth the complexities associated with a multi-parametric system will be of great value (Figure 1a).
Bioengineering, Issue 65, Chemical Engineering, High-throughput, Automation, Carbohydrates, Synthesis, Polyanhydrides, Nanoparticles, Functionalization, Targeting, Fluorous Solid Phase Extraction
Sample Drift Correction Following 4D Confocal Time-lapse Imaging
Authors: Adam Parslow, Albert Cardona, Robert J. Bryson-Richardson.
Institutions: Monash University, Howard Hughes Medical Institute.
The generation of four-dimensional (4D) confocal datasets, consisting of 3D image sequences over time, provides an excellent methodology to capture cellular behaviors involved in developmental processes. The ability to track and follow cell movements is limited by sample movements that occur due to drift of the sample or, in some cases, growth during image acquisition. Tracking cells in datasets affected by drift and/or growth will incorporate these movements into any analysis of cell position. This may result in the apparent movement of static structures within the sample. Therefore, prior to cell tracking, any sample drift should be corrected. Using the open source Fiji distribution1 of ImageJ2,3 and the incorporated LOCI tools4, we developed the Correct 3D Drift plug-in to remove erroneous sample movement in confocal datasets. This protocol effectively compensates for sample translation or alterations in focal position by utilizing phase correlation to register each time-point of a four-dimensional confocal dataset while maintaining the ability to visualize and measure cell movements over extended time-lapse experiments.
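A minimal sketch of the registration step, using scikit-image's phase correlation in Python rather than the Fiji plug-in itself (the (t, z, y, x) layout and the max-projection strategy are simplifying assumptions; the plug-in registers the full volumes):

```python
# Sketch of drift correction by phase correlation: register each time-point
# to the first one and shift it back. Assumes a 4D (t, z, y, x) numpy array.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def correct_3d_drift(stack):
    ref = stack[0].max(axis=0)                  # z max-projection as 2D reference
    corrected = [stack[0]]
    for vol in stack[1:]:
        drift, _, _ = phase_cross_correlation(ref, vol.max(axis=0))
        # Apply the (dy, dx) correction to every z-slice of this time-point.
        corrected.append(nd_shift(vol, (0.0, drift[0], drift[1]), order=1))
    return np.stack(corrected)

stack = np.random.default_rng(2).random((4, 8, 64, 64))   # toy 4D dataset
print(correct_3d_drift(stack).shape)
```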
Bioengineering, Issue 86, Image Processing, Computer-Assisted, Zebrafish, Microscopy, Confocal, Time-Lapse Imaging, imaging, fiji, three-dimensional, four-dimensional, registration
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
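The harvest-and-summarize step might look like the toy sketch below, written in Python rather than the authors' MATLAB-based toolchain; the event names and record structure are invented for illustration.

```python
# Toy version of the harvest-and-summarize step: daily counts of head entries
# per mouse from a time-stamped event record (structure is illustrative).
from collections import Counter
from datetime import datetime

events = [  # (timestamp, mouse_id, event)
    (datetime(2013, 5, 1, 9, 2), "m1", "head_entry"),
    (datetime(2013, 5, 1, 9, 7), "m1", "pellet"),
    (datetime(2013, 5, 2, 4, 40), "m1", "head_entry"),
    (datetime(2013, 5, 1, 22, 15), "m2", "head_entry"),
]

daily = Counter(
    (ts.date(), mouse) for ts, mouse, ev in events if ev == "head_entry"
)
for (day, mouse), n in sorted(daily.items()):
    print(day, mouse, n)
```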
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Visualizing Neuroblast Cytokinesis During C. elegans Embryogenesis
Authors: Denise Wernike, Chloe van Oostende, Alisa Piekny.
Institutions: Concordia University.
This protocol describes the use of fluorescence microscopy to image dividing cells within developing Caenorhabditis elegans embryos. In particular, this protocol focuses on how to image dividing neuroblasts, which are found underneath the epidermal cells and may be important for epidermal morphogenesis. Tissue formation is crucial for metazoan development and relies on external cues from neighboring tissues. C. elegans is an excellent model organism to study tissue morphogenesis in vivo due to its transparency and simple organization, making its tissues easy to study via microscopy. Ventral enclosure is the process where the ventral surface of the embryo is covered by a single layer of epithelial cells. This event is thought to be facilitated by the underlying neuroblasts, which provide chemical guidance cues to mediate migration of the overlying epithelial cells. However, the neuroblasts are highly proliferative and also may act as a mechanical substrate for the ventral epidermal cells. Studies using this experimental protocol could uncover the importance of intercellular communication during tissue formation, and could be used to reveal the roles of genes involved in cell division within developing tissues.
Neuroscience, Issue 85, C. elegans, morphogenesis, cytokinesis, neuroblasts, anillin, microscopy, cell division
Imaging C. elegans Embryos using an Epifluorescent Microscope and Open Source Software
Authors: Koen J. C. Verbrugghe, Raymond C. Chan.
Institutions: University of Michigan.
Cellular processes, such as chromosome assembly, segregation and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples1,2. Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage3, thus providing an ideal experimental model for studying questions in cell biology4,5 and development6-9. C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis10,11) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis12-15). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters16,17. These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo18-21. In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process.
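As a generic illustration of automated time-lapse capture (not the article's microscope-control software), the sketch below uses OpenCV to grab timestamped frames from any attached camera as a stand-in:

```python
# Generic time-lapse capture loop (illustrative stand-in for microscope
# camera control): grab a frame every INTERVAL_S seconds and save it.
import time
import cv2

cap = cv2.VideoCapture(0)          # first available camera
INTERVAL_S = 30                    # seconds between frames
N_FRAMES = 10

for i in range(N_FRAMES):
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("camera read failed")
    cv2.imwrite(f"embryo_t{i:03d}.png", frame)
    time.sleep(INTERVAL_S)

cap.release()
```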
Basic Protocols, Issue 49, Cellular Biology, Caenorhabditis elegans, microscopy, development
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
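The binarize-and-skeletonize step can be sketched with scikit-image as a simplified stand-in for the optimized workflow; the random test image and the single density readout are illustrative only.

```python
# Sketch of the binarize-and-skeletonize step: threshold the membrane
# staining, reduce the network to a one-pixel-wide skeleton, and report a
# simple network metric (simplified stand-in for the TATS workflow).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

img = np.random.default_rng(3).random((256, 256))   # stand-in for a TATS image
binary = img > threshold_otsu(img)                  # binarized membrane signal
skel = skeletonize(binary)                          # skeletonized network

density = skel.sum() / skel.size                    # skeleton pixels per area
print(f"network density: {density:.4f}")
```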
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Authors: Justen Manasa, Siva Danaviah, Sureshnee Pillay, Prevashinee Padayachee, Hloniphile Mthiyane, Charity Mkhize, Richard John Lessells, Christopher Seebregts, Tobias F. Rinke de Wit, Johannes Viljoen, David Katzenstein, Tulio De Oliveira.
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults
Authors: Madeleine E. Hackney, Kathleen McKee.
Institutions: Emory University School of Medicine, Brigham and Women's Hospital and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, varied speeds and rhythms. Focus on foot placement, whole body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango’s demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology used to disseminate the adapted tango teaching methods to dance instructor trainees and to implement adapted tango by the trainees in the community for older adults and individuals with Parkinson’s Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, tandem stance, Berg Balance Scale, gait speed and 30 sec chair stand), as well as the safety and fidelity of the program, is maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Reduced-gravity Environment Hardware Demonstrations of a Prototype Miniaturized Flow Cytometer and Companion Microfluidic Mixing Technology
Authors: William S. Phipps, Zhizhong Yin, Candice Bae, Julia Z. Sharpe, Andrew M. Bishara, Emily S. Nelson, Aaron S. Weaver, Daniel Brown, Terri L. McKay, DeVon Griffin, Eugene Y. Chan.
Institutions: DNA Medicine Institute, Harvard Medical School, NASA Glenn Research Center, ZIN Technologies.
Until recently, astronaut blood samples were collected in-flight, transported to Earth on the Space Shuttle, and analyzed in terrestrial laboratories. If humans are to travel beyond low Earth orbit, a transition towards space-ready, point-of-care (POC) testing is required. Such testing needs to be comprehensive, easy to perform in a reduced-gravity environment, and unaffected by the stresses of launch and spaceflight. Countless POC devices have been developed to mimic laboratory scale counterparts, but most have narrow applications and few have demonstrable use in an in-flight, reduced-gravity environment. In fact, demonstrations of biomedical diagnostics in reduced gravity are limited altogether, making component choice and certain logistical challenges difficult to approach when seeking to test new technology. To help fill the void, we are presenting a modular method for the construction and operation of a prototype blood diagnostic device and its associated parabolic flight test rig that meet the standards for flight-testing onboard a parabolic flight, reduced-gravity aircraft. The method first focuses on rig assembly for in-flight, reduced-gravity testing of a flow cytometer and a companion microfluidic mixing chip. Components are adaptable to other designs and some custom components, such as a microvolume sample loader and the micromixer may be of particular interest. The method then shifts focus to flight preparation, by offering guidelines and suggestions to prepare for a successful flight test with regard to user training, development of a standard operating procedure (SOP), and other issues. Finally, in-flight experimental procedures specific to our demonstrations are described.
Cellular Biology, Issue 93, Point-of-care, prototype, diagnostics, spaceflight, reduced gravity, parabolic flight, flow cytometry, fluorescence, cell counting, micromixing, spiral-vortex, blood mixing
Synthesis and Calibration of Phosphorescent Nanoprobes for Oxygen Imaging in Biological Systems
Authors: Louise E. Sinks, Emmanuel Roussakis, Tatiana V. Esipova, Sergei A. Vinogradov.
Institutions: University of Pennsylvania.
Oxygen measurement by phosphorescence quenching [1, 2] consists of the following steps: 1) the probe is delivered into the medium of interest (e.g. blood or interstitial fluid); 2) the object is illuminated with light of appropriate wavelength in order to excite the probe into its triplet state; 3) the emitted phosphorescence is collected, and its time course is analyzed to yield the phosphorescence lifetime, which is converted into the oxygen concentration (or partial pressure, pO2). The probe must not interact with the biological environment and, in some cases, must also be 4) excreted from the medium upon completion of the measurement. Each of these steps imposes requirements on the molecular design of the phosphorescent probes, which constitute the only invasive component of the measurement protocol. Here we review the design of dendritic phosphorescent nanosensors for oxygen measurements in biological systems. The probes consist of Pt or Pd porphyrin-based polyarylglycine (AG) dendrimers, modified peripherally with polyethylene glycol (PEG) residues. For effective two-photon excitation, termini of the dendrimers may be modified with two-photon antenna chromophores, which capture the excitation energy and channel it to the triplet cores of the probes via intramolecular FRET (Förster Resonance Energy Transfer). We describe the key photophysical properties of the probes and present detailed calibration protocols.
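Step 3, converting lifetime to oxygen, follows the Stern-Volmer relation 1/tau = 1/tau0 + kq * pO2. A minimal sketch with placeholder constants; real values come from the calibration protocols described here:

```python
# Converting a measured phosphorescence lifetime to pO2 with the Stern-Volmer
# relation 1/tau = 1/tau0 + kq * pO2 (constants below are placeholders; real
# values come from probe calibration).
def lifetime_to_po2(tau_us, tau0_us, kq):
    """tau_us: measured lifetime (µs); tau0_us: lifetime at zero oxygen (µs);
    kq: quenching constant (1/(µs·mmHg)). Returns pO2 in mmHg."""
    return (1.0 / tau_us - 1.0 / tau0_us) / kq

print(lifetime_to_po2(tau_us=30.0, tau0_us=60.0, kq=3.0e-4))  # ≈ 56 mmHg
```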
Cellular Biology, Issue 37, oxygen, phosphorescence, porphyrin, dendrimer, imaging, nanosensor, two-photon
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economic methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research is attempting to use similar methods and integrate water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226 and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
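The overall loop can be conveyed with a toy sketch in pure Python: a bit per field encodes whether a conservation practice is installed, invented linear cost and abatement terms stand in for the economic model and the SWAT simulation, and a simple nondominated filter plays the role of SPEA2's archive.

```python
# Toy simulation-optimization loop (illustrative only): evolve practice
# placements and keep the nondominated (cost, pollution) tradeoff frontier.
import random

random.seed(0)
N_FIELDS = 12
COST = [random.uniform(1, 5) for _ in range(N_FIELDS)]      # practice cost/field
ABATE = [random.uniform(0.5, 3) for _ in range(N_FIELDS)]   # pollution removed
BASE_POLLUTION = sum(ABATE) + 10.0

def evaluate(x):  # objectives to minimize: (cost, remaining pollution)
    cost = sum(c for c, b in zip(COST, x) if b)
    pollution = BASE_POLLUTION - sum(a for a, b in zip(ABATE, x) if b)
    return cost, pollution

def dominates(f, g):
    return all(a <= b for a, b in zip(f, g)) and f != g

pop = [[random.random() < 0.5 for _ in range(N_FIELDS)] for _ in range(40)]
for _ in range(200):                      # mutation-only evolutionary loop
    parent = random.choice(pop)
    child = [(not b) if random.random() < 0.1 else b for b in parent]
    pop.append(child)
    fits = [evaluate(x) for x in pop]
    # keep only nondominated solutions: the current tradeoff frontier
    pop = [x for x, f in zip(pop, fits)
           if not any(dominates(g, f) for g in fits)]

for x in sorted(pop, key=lambda x: evaluate(x)[0]):
    print([int(b) for b in x], tuple(round(v, 2) for v in evaluate(x)))
```

Each surviving solution is one point on the cost-versus-pollution frontier; mapping its bits back onto fields gives the optimized placement maps mentioned above.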
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Reaggregate Thymus Cultures
Authors: Andrea White, Eric Jenkinson, Graham Anderson.
Institutions: University of Birmingham.
Stromal cells within lymphoid tissues are organized into three-dimensional structures that provide a scaffold that is thought to control the migration and development of haemopoietic cells. Importantly, the maintenance of this three-dimensional organization appears to be critical for normal stromal cell function, with two-dimensional monolayer cultures often being shown to be capable of supporting only individual fragments of lymphoid tissue function. In the thymus, complex networks of cortical and medullary epithelial cells act as a framework that controls the recruitment, proliferation, differentiation and survival of lymphoid progenitors as they undergo the multi-stage process of intrathymic T-cell development. Understanding the functional role of individual stromal compartments in the thymus is essential in determining how the thymus imposes self/non-self discrimination. Here we describe a technique in which we exploit the plasticity of fetal tissues to re-associate into intact three-dimensional structures in vitro, following their enzymatic disaggregation. The dissociation of fetal thymus lobes into heterogeneous cellular mixtures, followed by their separation into individual cellular components, is then combined with the in vitro re-association of these desired cell types into three-dimensional reaggregate structures at defined ratios, thereby providing an opportunity to investigate particular aspects of T-cell development under defined cellular conditions. (This article is based on work first reported in Methods in Molecular Biology, 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
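As an illustration of this kind of matching (the production algorithm is not described here, so this is only an assumed approach), ranking videos by TF-IDF cosine similarity to an abstract looks like:

```python
# Minimal sketch of abstract-to-video matching via text similarity
# (illustrative only; not JoVE's actual matching algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = "time-lapse confocal imaging of embryos with drift correction"
video_descriptions = [
    "sample drift correction following 4d confocal time-lapse imaging",
    "adapted tango dancing for individuals with parkinson's disease",
    "imaging c. elegans embryos using an epifluorescent microscope",
]

vec = TfidfVectorizer().fit(video_descriptions + [abstract])
scores = cosine_similarity(
    vec.transform([abstract]), vec.transform(video_descriptions)
)[0]
# Rank videos by similarity; a real system would keep the top 10-30.
for score, desc in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {desc}")
```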

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.