An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines.
PUBLISHED: 06-11-2015
The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face, so the delay of a single operation has a domino effect, delaying the start of the next operation and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations such as drilling, bolting, and mucking. The approach estimates the working time and its probability for each operation more efficiently and objectively by improving on the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of a critical operation (one on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to maximize ore output, using a new mucking algorithm under external constraints.
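The critical-path logic that the approach builds on can be sketched in a few lines. Below is a generic CPM forward/backward pass over an illustrative serial drilling, bolting, mucking cycle; the durations, activity graph, and function names are hypothetical, not taken from the paper, and the paper's probabilistic PERT extension and mucking algorithm are not reproduced here.

```python
# Minimal critical-path (CPM) sketch for a serial mining cycle.
# Durations (hours) and the activity graph are illustrative only.

def critical_path(durations, preds):
    """Return (makespan, set of zero-slack activities)."""
    order, seen = [], set()

    def visit(a):  # topological order via depth-first search
        if a in seen:
            return
        seen.add(a)
        for p in preds[a]:
            visit(p)
        order.append(a)

    for a in durations:
        visit(a)

    # Forward pass: earliest finish time of each activity.
    ef = {}
    for a in order:
        es = max((ef[p] for p in preds[a]), default=0.0)
        ef[a] = es + durations[a]
    makespan = max(ef.values())

    # Backward pass: latest finish time of each activity.
    lf = {}
    for a in reversed(order):
        succs = [s for s in durations if a in preds[s]]
        lf[a] = min((lf[s] - durations[s] for s in succs), default=makespan)

    # Zero slack (lf == ef) means the activity lies on a critical path.
    critical = {a for a in durations if abs(lf[a] - ef[a]) < 1e-9}
    return makespan, critical

durs = {"drilling": 2.0, "bolting": 1.5, "mucking": 3.0}
pred = {"drilling": [], "bolting": ["drilling"], "mucking": ["bolting"]}
total, crit = critical_path(durs, pred)  # total = 6.5; all three critical
```

Because the operations at a working face run strictly in series, every activity in this toy cycle has zero slack, which is exactly the domino effect the abstract describes: delaying any one operation delays the whole cycle.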
Authors: Cristi King, Tiffany Scott-Horton.
Published: 01-08-2008
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing(R) is one of the most thorough yet simple methods to date for analyzing polymorphisms. It can also identify tri-allelic polymorphisms, indels, and short-repeat polymorphisms, and can determine allele percentages for methylation or pooled-sample assessment. In addition, a standardized control sequence provides internal quality control. This method has enabled rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Authors: James J. Jun, André Longtin, Leonard Maler.
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Authors: Bianca DeBenedictis, J. Bruce Morton.
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
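The switch cost discussed above is simply the difference in mean reaction time between switch and repeat trials. A minimal sketch, using entirely made-up reaction times (the function name and data are illustrative, not from the protocol):

```python
# Illustrative switch-cost computation for a card-sort task:
# mean reaction time on switch trials minus mean on repeat trials.

def switch_cost(trials):
    """trials: list of (trial_type, reaction_time_ms) pairs."""
    switch = [rt for kind, rt in trials if kind == "switch"]
    repeat = [rt for kind, rt in trials if kind == "repeat"]
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

# Hypothetical data: responses are slower when the sorting rule changes.
data = [("repeat", 600), ("repeat", 620), ("switch", 700), ("switch", 720)]
cost = switch_cost(data)  # 100.0 ms
```

A positive cost (here 100 ms) indicates slower responding on switch trials; developmental studies of the kind described track how this difference shrinks with age.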
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Authors: Mikhail Kislin, Ekaterina Mugantseva, Dmitry Molotkov, Natalia Kulesskaya, Stanislav Khirug, Ilya Kirilkin, Evgeny Pryazhnikov, Julia Kolikova, Dmytro Toptunov, Mikhail Yuryev, Rashid Giniatullin, Vootele Voikar, Claudio Rivera, Heikki Rauvala, Leonard Khiroug.
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopy data obtained from a living animal’s brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Empty Value, Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Mechanical Expansion of Steel Tubing as a Solution to Leaky Wellbores
Authors: Mileva Radonjic, Darko Kupresan.
Institutions: Louisiana State University.
Wellbore cement, a procedural component of wellbore completion operations, primarily provides zonal isolation and mechanical support of the metal pipe (casing), and protects metal components from corrosive fluids. These functions are essential for uncompromised wellbore integrity. Cements can undergo multiple forms of failure, such as debonding at the cement/rock and cement/metal interfaces, fracturing, and defects within the cement matrix. Failures and defects within the cement will ultimately lead to fluid migration, resulting in inter-zonal fluid migration and premature well abandonment. Currently, there are over 1.8 million operating wells worldwide, and over one third of these wells have leak-related problems defined as Sustained Casing Pressure (SCP)1. The focus of this research was to develop a bench-scale experimental setup to explore the effect of mechanical manipulation of wellbore casing-cement composite samples as a potential technology for the remediation of gas leaks. The experimental methodology utilized in this study enabled formation of an impermeable seal at the pipe/cement interface in a simulated wellbore system. Successful nitrogen gas flow-through measurements demonstrated that an existing microannulus was sealed at laboratory experimental conditions and that fluid flow was prevented by mechanical manipulation of the metal/cement composite sample. Furthermore, this methodology can be applied not only to the remediation of leaky wellbores, but also to plugging and abandonment procedures and wellbore completion technology, potentially preventing negative impacts of wellbores on subsurface and surface environments.
Physics, Issue 93, Leaky wellbores, Wellbore cement, Microannular gas flow, Sustained casing pressure, Expandable casing technology.
Closed-loop Neuro-robotic Experiments to Test Computational Properties of Neuronal Networks
Authors: Jacopo Tessadori, Michela Chiappalone.
Institutions: Istituto Italiano di Tecnologia.
Information coding in the Central Nervous System (CNS) remains largely unexplored. There is mounting evidence that, even at a very low level, the representation of a given stimulus might depend on context and history. If this is actually the case, bi-directional interactions between the brain (or, if need be, a reduced model of it) and a sensory-motor system can shed light on how encoding and decoding of information are performed. Here an experimental system is introduced and described in which the activity of a neuronal element (i.e., a network of neurons extracted from embryonic mammalian hippocampi) is given context and used to control the movement of an artificial agent, while environmental information is fed back to the culture as a sequence of electrical stimuli. This architecture allows a quick selection of diverse encoding, decoding, and learning algorithms to test different hypotheses on the computational properties of neuronal networks.
Neuroscience, Issue 97, Micro Electrode Arrays (MEA), in vitro cultures, coding, decoding, tetanic stimulation, spike, burst
HPLC Measurement of the DNA Oxidation Biomarker, 8-oxo-7,8-dihydro-2’-deoxyguanosine, in Cultured Cells and Animal Tissues
Authors: Nikolai L. Chepelev, Dean A. Kennedy, Remi Gagné, Taryn White, Alexandra S. Long, Carole L. Yauk, Paul A. White.
Institutions: Health Canada.
Oxidative stress is associated with many physiological and pathological processes, as well as xenobiotic metabolism, leading to the oxidation of biomacromolecules, including DNA. Therefore, efficient detection of DNA oxidation is important for a variety of research disciplines, including medicine and toxicology. A common biomarker of oxidatively damaged DNA is 8-oxo-7,8-dihydro-2'-deoxyguanosine (8-oxo-dGuo; often erroneously referred to as 8-hydroxy-2'-deoxyguanosine (8-OH-dGuo or 8-oxo-dG)). Several protocols for 8-oxo-dGuo measurement by high pressure liquid chromatography with electrochemical detection (HPLC-ED) have been described. However, these were mainly applied to purified DNA treated with pro-oxidants. In addition, because of methodological differences between laboratories, mainly differences in analytical equipment, the adoption of published methods for detection of 8-oxo-dGuo by HPLC-ED requires careful optimization by each laboratory. A comprehensive protocol describing such an optimization process has been lacking. Here, a detailed protocol is described for the detection of 8-oxo-dGuo by HPLC-ED in DNA from cultured cells or animal tissues. It illustrates how DNA sample preparation can be easily and rapidly optimized to minimize undesirable DNA oxidation that can occur during sample preparation. This protocol shows how to detect 8-oxo-dGuo in cultured human alveolar adenocarcinoma cells (i.e., A549 cells) treated with the oxidizing agent KBrO3, and from the spleen of mice exposed to the polycyclic aromatic hydrocarbon dibenzo(def,p)chrysene (DBC, formerly known as dibenzo(a,l)pyrene, DalP). Overall, this work illustrates how an HPLC-ED methodology can be readily optimized for the detection of 8-oxo-dGuo in biological samples.
Chemistry, Issue 102, Oxidative Stress, DNA Damage, 8-oxo-7,8-dihydro-2'-deoxyguanosine, 8-hydroxy-2'-deoxyguanosine, Xenobiotic Metabolism, Human Health
Removal of Trace Elements by Cupric Oxide Nanoparticles from Uranium In Situ Recovery Bleed Water and Its Effect on Cell Viability
Authors: Jodi R. Schilz, K. J. Reddy, Sreejayan Nair, Thomas E. Johnson, Ronald B. Tjalkens, Kem P. Krueger, Suzanne Clark.
Institutions: University of New Mexico, University of Wyoming, Colorado State University, California Northstate University.
In situ recovery (ISR) is the predominant method of uranium extraction in the United States. During ISR, uranium is leached from an ore body and extracted through ion exchange. The resultant production bleed water (PBW) contains contaminants such as arsenic and other heavy metals. Samples of PBW from an active ISR uranium facility were treated with cupric oxide nanoparticles (CuO-NPs). CuO-NP treatment of PBW reduced priority contaminants, including arsenic, selenium, uranium, and vanadium. Untreated and CuO-NP treated PBW was used as the liquid component of the cell growth media, and changes in viability were determined by the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay in human embryonic kidney (HEK 293) and human hepatocellular carcinoma (Hep G2) cells. CuO-NP treatment was associated with improved HEK 293 and Hep G2 cell viability. Limitations of this method include dilution of the PBW by growth media components and during osmolality adjustment, as well as the necessary pH adjustment. The dilution effects and the change in pH of the PBW, which is traditionally slightly acidic, limit the wider applicability of this method; however, it could have broader use in assessing CuO-NP treatment of more neutral waters.
Environmental Sciences, Issue 100, Energy production, uranium in situ recovery, water decontamination, nanoparticles, toxicity, cytotoxicity, in vitro cell culture
Quantifying Learning in Young Infants: Tracking Leg Actions During a Discovery-learning Task
Authors: Barbara Sargent, Hendrik Reimann, Masayoshi Kubo, Linda Fetters.
Institutions: University of Southern California, Temple University, Niigata University of Health and Welfare.
Task-specific actions emerge from spontaneous movement during infancy. It has been proposed that task-specific actions emerge through a discovery-learning process. Here a method is described in which 3-4 month old infants learn a task by discovery and their leg movements are captured to quantify the learning process. This discovery-learning task uses an infant-activated mobile that rotates and plays music in response to specified leg actions of infants. Supine infants activate the mobile by moving their feet vertically across a virtual threshold. This paradigm is unique in that as infants independently discover that their leg actions activate the mobile, the infants’ leg movements are tracked using a motion capture system, allowing for the quantification of the learning process. Specifically, learning is quantified in terms of the duration of mobile activation, the position variance of the end effectors (feet) that activate the mobile, changes in hip-knee coordination patterns, and changes in hip and knee muscle torque. This information describes infant exploration and exploitation at the interplay of person and environmental constraints that support task-specific action. Subsequent research using this method can investigate how specific impairments of different populations of infants at risk for movement disorders influence the discovery-learning process for task-specific action.
Behavior, Issue 100, infant, discovery-learning, motor learning, motor control, kinematics, kinetics
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
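For intuition about the ~10-30 nm precision quoted above: the textbook scaling for single-molecule localization precision is roughly the PSF width divided by the square root of the number of detected photons. This simplified form ignores pixelation and background noise, and is a general result rather than this paper's own analysis:

```python
import math

def localization_precision(psf_sigma_nm, photons):
    """Idealized localization precision: PSF sigma / sqrt(N photons).

    Ignores pixelation and background noise; illustrative only.
    """
    return psf_sigma_nm / math.sqrt(photons)

# A PSF standard deviation of ~100 nm with 100 detected photons gives
# ~10 nm precision, in line with the ~10-30 nm range quoted above.
p = localization_precision(100.0, 100)  # 10.0 nm
```

This is why localization microscopy beats the ~200-250 nm diffraction limit: each molecule's centroid can be pinned down far more precisely than the width of its blurred image.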
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth
Authors: Sabrina Lin, Shawn Fonteno, Shruthi Satish, Bir Bhanu, Prue Talbot.
Institutions: University of California.
Because video data are complex and comprise many images, mining information from video material is difficult without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.
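The pixel-counting step of the third recipe can be mimicked with a short script: threshold each frame, count colony pixels, and convert the change in pixel count into a growth rate. This is a hypothetical NumPy sketch with a synthetic growing disk standing in for a colony; it is not CL-Quant's implementation, and the threshold value and frame data are illustrative.

```python
import numpy as np

def colony_area(frame, threshold):
    """Segment the colony from background and count its pixels."""
    return int((frame > threshold).sum())

def growth_rate(areas, hours_per_frame):
    """Average change in colony area (pixels/hour) across the video."""
    diffs = np.diff(np.asarray(areas, dtype=float))
    return diffs.mean() / hours_per_frame

# Synthetic "video": a centered disk whose radius doubles over 3 frames,
# standing in for a colony imaged once per day over 48 hours.
yy, xx = np.mgrid[:64, :64]
frames = [((xx - 32) ** 2 + (yy - 32) ** 2 <= r * r).astype(float)
          for r in (8, 12, 16)]

areas = [colony_area(f, 0.5) for f in frames]   # strictly increasing
rate = growth_rate(areas, hours_per_frame=24.0)  # positive pixels/hour
```

In real footage the threshold would come from the segmentation and enhancement recipes rather than a fixed constant, but the final measurement reduces to exactly this kind of per-frame pixel count.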
Cellular Biology, Issue 39, hESC, matrigel, stem cells, video bioinformatics, colony, growth
Patterned Photostimulation with Digital Micromirror Devices to Investigate Dendritic Integration Across Branch Points
Authors: Conrad W. Liang, Michael Mohammadi, M. Daniel Santos, Cha-Min Tang.
Institutions: University of Maryland School of Medicine.
Light is a versatile and precise means to control neuronal excitability. The recent introduction of light-sensitive effectors such as channelrhodopsin and caged neurotransmitters has led to interest in developing better means to control patterns of light in space and time that are useful for experimental neuroscience. One conventional strategy, employed in confocal and 2-photon microscopy, is to focus light to a diffraction-limited spot and then scan that single spot sequentially over the region of interest. This approach becomes problematic if large areas have to be stimulated within a brief time window, a problem more applicable to photostimulation than to imaging. An alternate strategy is to project the complete spatial pattern on the target with the aid of a digital micromirror device (DMD). The DMD approach is appealing because the hardware components are relatively inexpensive and commercially supported. Because such a system is not available for upright microscopes, we will discuss the critical issues in the construction and operation of such a DMD system. Even though we will primarily describe the construction of the system for UV photolysis, the modifications for building the much simpler visible-light system for optogenetic experiments will also be provided. The UV photolysis system was used to carry out experiments to study a fundamental question in neuroscience: how spatially distributed inputs are integrated across distal dendritic branch points. The results suggest that integration can be non-linear across branch points and that the supralinearity is largely mediated by NMDA receptors.
Bioengineering, Issue 49, DMD, photolysis, dendrite, photostimulation, DLP, optogenetics
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Authors: David C. Shih, Kevin C. Ho, Kyle M. Melnick, Ronald A. Rensink, Tobias R. Kollmann, Edgardo S. Fortuno III.
Institutions: University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrate the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of such datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. In contrast to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing real-time data exploration. We selected the VA software Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, are demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
An Analytical Tool-box for Comprehensive Biochemical, Structural and Transcriptome Evaluation of Oral Biofilms Mediated by Mutans Streptococci
Authors: Marlise I. Klein, Jin Xiao, Arne Heydorn, Hyun Koo.
Institutions: University of Rochester Medical Center, Sichuan University, Glostrup Hospital, Glostrup, Denmark.
Biofilms are highly dynamic, organized and structured communities of microbial cells enmeshed in an extracellular matrix of variable density and composition 1, 2. In general, biofilms develop from initial microbial attachment on a surface followed by formation of cell clusters (or microcolonies) and further development and stabilization of the microcolonies, which occur in a complex extracellular matrix. The majority of biofilm matrices harbor exopolysaccharides (EPS), and dental biofilms are no exception; especially those associated with caries disease, which are mostly mediated by mutans streptococci 3. The EPS are synthesized by microorganisms (S. mutans, a key contributor) by means of extracellular enzymes, such as glucosyltransferases using sucrose primarily as substrate 3. Studies of biofilms formed on tooth surfaces are particularly challenging owing to their constant exposure to environmental challenges associated with complex diet-host-microbial interactions occurring in the oral cavity. Better understanding of the dynamic changes of the structural organization and composition of the matrix, physiology and transcriptome/proteome profile of biofilm-cells in response to these complex interactions would further advance the current knowledge of how oral biofilms modulate pathogenicity. Therefore, we have developed an analytical tool-box to facilitate biofilm analysis at structural, biochemical and molecular levels by combining commonly available and novel techniques with custom-made software for data analysis. Standard analytical (colorimetric assays, RT-qPCR and microarrays) and novel fluorescence techniques (for simultaneous labeling of bacteria and EPS) were integrated with specific software for data analysis to address the complex nature of oral biofilm research. The tool-box is comprised of 4 distinct but interconnected steps (Figure 1): 1) Bioassays, 2) Raw Data Input, 3) Data Processing, and 4) Data Analysis. 
We used our in vitro biofilm model and specific experimental conditions to demonstrate the usefulness and flexibility of the tool-box. The biofilm model is simple and reproducible, and multiple replicates of a single experiment can be done simultaneously 4, 5. Moreover, it allows temporal evaluation, inclusion of various microbial species 5, and assessment of the effects of distinct experimental conditions (e.g. treatments 6; comparison of knockout mutants vs. parental strain 5; carbohydrate availability 7). Here, we describe two specific components of the tool-box, including (i) new software for microarray data mining/organization (MDV) and fluorescence imaging analysis (DUOSTAT), and (ii) in situ EPS-labeling. We also provide an experimental case showing how the tool-box can assist with biofilm analysis, data organization, integration, and interpretation.
Microbiology, Issue 47, Extracellular matrix, polysaccharides, biofilm, mutans streptococci, glucosyltransferases, confocal fluorescence, microarray
Development of a Unilaterally-lesioned 6-OHDA Mouse Model of Parkinson's Disease
Authors: Sherri L. Thiele, Ruth Warre, Joanne E. Nash.
Institutions: University of Toronto at Scarborough.
The unilaterally lesioned 6-hydroxydopamine (6-OHDA) rat model of Parkinson's disease (PD) has proved invaluable in advancing our understanding of the mechanisms underlying parkinsonian symptoms, since it recapitulates the changes in basal ganglia circuitry and pharmacology observed in parkinsonian patients1-4. However, the precise cellular and molecular changes occurring at cortico-striatal synapses of the output pathways within the striatum, which is the major input region of the basal ganglia, remain elusive, and this is believed to be the site where the pathological abnormalities underlying parkinsonian symptoms arise3,5. In PD, understanding of the mechanisms underlying changes in basal ganglia circuitry following degeneration of the nigro-striatal pathway has been greatly advanced by the development of bacterial artificial chromosome (BAC) mice over-expressing green fluorescent proteins driven by promoters specific for the two striatal output pathways (direct pathway: eGFP-D1; indirect pathway: eGFP-D2 and eGFP-A2a)8, allowing them to be studied in isolation. For example, recent studies have suggested that there are pathological changes in synaptic plasticity in parkinsonian mice9,10. However, these studies utilised juvenile mice and acute models of parkinsonism. It is unclear whether the changes described in adult rats with stable 6-OHDA lesions also occur in these models. Other groups have attempted to generate a stable unilaterally-lesioned 6-OHDA adult mouse model of PD by lesioning the medial forebrain bundle (MFB); unfortunately, the mortality rate in this study was extremely high, with only 14% surviving the surgery for 21 days or longer11. More recent studies have generated intra-nigral lesions with both a low mortality rate and >80% loss of dopaminergic neurons; however, expression of L-DOPA-induced dyskinesia11,12,13,14 was variable in these studies. Another well established mouse model of PD is the MPTP-lesioned mouse15. 
Whilst this model has proven useful in the assessment of potential neuroprotective agents16, it is less suitable for understanding mechanisms underlying symptoms of PD, as this model often fails to induce motor deficits, and shows a wide variability in the extent of lesion17, 18. Here we have developed a stable unilateral 6-OHDA-lesioned mouse model of PD by direct administration of 6-OHDA into the MFB, which consistently causes >95% loss of striatal dopamine (as measured by HPLC), as well as producing the behavioural imbalances observed in the well characterised unilateral 6-OHDA-lesioned rat model of PD. This newly developed mouse model of PD will prove a valuable tool in understanding the mechanisms underlying generation of parkinsonian symptoms.
Medicine, Issue 60, mouse, 6-OHDA, Parkinson’s disease, medial forebrain bundle, unilateral
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore differ from the variability in natural categories or be optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
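The morphing step described above, generating systematic shape variations between objects, can be sketched as linear interpolation of vertex-matched meshes. This is a minimal illustration only; the function and variable names are hypothetical, and the published morphing methods are considerably richer:

```python
import numpy as np

def morph(vertices_a, vertices_b, alpha):
    """Blend two vertex-matched meshes; alpha=0 gives A, alpha=1 gives B."""
    return (1.0 - alpha) * vertices_a + alpha * vertices_b

def morph_series(vertices_a, vertices_b, n_steps):
    """Generate a continuum of n_steps intermediate objects spanning A to B,
    which can serve as systematic shape variations within a category."""
    return [morph(vertices_a, vertices_b, a)
            for a in np.linspace(0.0, 1.0, n_steps)]
```

Because the interpolation parameter is explicit, the amount of shape change between any two stimuli is precisely measurable, in the spirit of the 'tunable' information the abstract calls for.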
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed based on the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
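The ICP pre-screening component combines feature selection with an SVM classifier. A minimal sketch of such a pipeline, using scikit-learn in place of RapidMiner and synthetic stand-in features (the real features are CT texture, blood amount, age, injury severity score, etc.), might look like:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

# Synthetic stand-ins: 200 patients x 12 hypothetical CT + clinical features,
# with a binary "elevated ICP" label driven by a couple of the features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Feature selection (keep the 5 most discriminative features) followed by
# an RBF-kernel SVM, mirroring the selection-then-classification workflow.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=5),
                      SVC(kernel="rbf"))
model.fit(X, y)
accuracy = model.score(X, y)
```

In practice one would report cross-validated rather than training accuracy, and calibrate the decision threshold for a pre-screening (high-sensitivity) use case.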
Investigation of Early Plasma Evolution Induced by Ultrashort Laser Pulses
Authors: Wenqian Hu, Yung C. Shin, Galen B. King.
Institutions: Purdue University.
Early plasma is generated by high-intensity laser irradiation of a target and the subsequent ionization of the target material. Its dynamics play a significant role in laser-material interaction, especially in the air environment1-11. Early plasma evolution has been captured through pump-probe shadowgraphy1-3 and interferometry1,4-7. However, the studied time frames and applied laser parameter ranges are limited. For example, direct examinations of plasma front locations and electron number densities within a delay time of 100 picoseconds (ps) with respect to the laser pulse peak are still very few, especially for ultrashort pulses with a duration around 100 femtoseconds (fs) and a low power density around 10^14 W/cm2. Early plasma generated under these conditions has only recently been captured with high temporal and spatial resolution12. The detailed setup strategy and procedures of this high-precision measurement are illustrated in this paper. The rationale of the measurement is optical pump-probe shadowgraphy: one ultrashort laser pulse is split into a pump pulse and a probe pulse, and the delay time between them can be adjusted by changing their beam path lengths. The pump pulse ablates the target and generates the early plasma, and the probe pulse propagates through the plasma region and detects the non-uniformity of the electron number density. In addition, animations are generated using the calculated results from the simulation model of Ref. 12 to illustrate the plasma formation and evolution with a very high resolution (0.04 ~ 1 ps). Both the experimental method and the simulation method can be applied to a broad range of time frames and laser parameters. These methods can be used to examine the early plasma generated not only from metals, but also from semiconductors and insulators.
Physics, Issue 65, Mechanical Engineering, Early plasma, air ionization, pump-probe shadowgraph, molecular dynamics, Monte Carlo, particle-in-cell
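The pump-probe delay in such a setup is set purely by geometry: the delay equals the beam-path length difference divided by the speed of light. A trivial sanity-check calculation (a sketch, not taken from the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def delay_ps(path_difference_mm):
    """Pump-probe delay in picoseconds for a given extra probe path in mm."""
    return (path_difference_mm * 1e-3) / C * 1e12

# Roughly 3.34 ps of delay per millimeter of extra path, so a delay line
# spanning ~30 mm of path difference covers the ~100 ps window discussed above.
```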
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
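Multiscale entropy is computed by coarse-graining the signal at successive timescales and taking the sample entropy of each coarse-grained series. A compact NumPy sketch (the parameter choices m=2 and r=0.2·SD are common defaults, not necessarily those used by the authors):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template matches of length m
    and A counts matches of length m+1 (self-matches excluded)."""
    tol = r * np.std(x)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance between template i and all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Sample entropy of the signal at each coarse-graining timescale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

The timescale dependence mentioned in the abstract is exactly what the resulting entropy-versus-scale curve captures: uncorrelated noise loses entropy with increasing scale, whereas signals with long-range structure do not.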
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
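The sequence-selection stage searches sequence space for low-energy sequences. As a toy illustration of that idea only, here is a Monte Carlo search over a randomly generated, position-specific energy table; the actual method uses rigorous optimization over physics-based potentials, and every name and number below is hypothetical:

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(1)
LENGTH = 15
# Hypothetical energy table: energy[i][aa] = energy of amino acid aa at position i
energy = [{aa: random.uniform(0, 1) for aa in AMINO_ACIDS} for _ in range(LENGTH)]

def total_energy(seq):
    return sum(energy[i][aa] for i, aa in enumerate(seq))

def monte_carlo_design(steps=5000, temperature=0.1):
    """Metropolis search: propose single-residue mutations, accept downhill
    moves always and uphill moves with Boltzmann probability."""
    seq = [random.choice(AMINO_ACIDS) for _ in range(LENGTH)]
    e = total_energy(seq)
    for _ in range(steps):
        i = random.randrange(LENGTH)
        old = seq[i]
        seq[i] = random.choice(AMINO_ACIDS)
        e_new = total_energy(seq)
        if e_new <= e or random.random() < math.exp((e - e_new) / temperature):
            e = e_new          # accept the mutation
        else:
            seq[i] = old       # reject: restore the previous residue
    return "".join(seq), e
```

A rank-ordered list of low-energy sequences, as produced by the real pipeline, would come from keeping the best candidates encountered rather than only the final one.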
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
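At the heart of the analysis algorithms mentioned above is cross-correlating corresponding interrogation windows between consecutive frames to find the most probable particle displacement. A minimal FFT-based sketch of that kernel (a standard textbook formulation, not the specific software used in this work):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate the bulk particle displacement (in pixels) between two
    interrogation windows via FFT-based cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Cross-correlation via the convolution theorem
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return center - np.array(peak)  # (dy, dx) shift of window_b relative to window_a
```

Real PIV codes refine this with sub-pixel peak fitting (e.g. a three-point Gaussian fit) and window overlap, then convert pixel shifts to velocities using the magnification and inter-frame time.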
Screening Foodstuffs for Class 1 Integrons and Gene Cassettes
Authors: Liette S. Waldron, Michael R. Gillings.
Institutions: Macquarie University.
Antibiotic resistance is one of the greatest threats to health in the 21st century. Acquisition of resistance genes via lateral gene transfer is a major factor in the spread of diverse resistance mechanisms. Amongst the DNA elements facilitating lateral transfer, the class 1 integrons have largely been responsible for spreading antibiotic resistance determinants amongst Gram negative pathogens. In total, these integrons have acquired and disseminated over 130 different antibiotic resistance genes. With continued antibiotic use, class 1 integrons have become ubiquitous in commensals and pathogens of humans and their domesticated animals. As a consequence, they can now be found in all human waste streams, where they continue to acquire new genes, and have the potential to cycle back into humans via the food chain. This protocol details a streamlined approach for detecting class 1 integrons and their associated resistance gene cassettes in foodstuffs, using culturing and PCR. Using this protocol, researchers should be able to: collect and prepare samples to make enriched cultures and screen for class 1 integrons; isolate single bacterial colonies to identify integron-positive isolates; identify bacterial species that contain class 1 integrons; and characterize these integrons and their associated gene cassettes.
Environmental Sciences, Issue 100, integron, lateral gene transfer, epidemiology, resistome, antibiotic resistance, pollution, xenogenetic
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.