Research on laser marking speed optimization by using genetic algorithm.
PUBLISHED: 05-09-2015
The laser marking machine is the most common coding equipment on product packaging lines. However, the speed of laser marking has become a production bottleneck. To remove this bottleneck, a new method based on a genetic algorithm was designed. On the basis of this algorithm, a controller was built, and simulations and experiments were performed. The results show that the algorithm effectively improves laser marking efficiency by 25%.
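The abstract above does not give the algorithm's details. As a purely illustrative sketch of how a genetic algorithm can raise marking efficiency, the hypothetical Python below evolves the visiting order of marking points to minimize head travel (a traveling-salesman-style formulation; the point set, operators, and parameters are all assumptions, not the paper's controller):

```python
import random

def tour_length(order, pts):
    """Total travel distance of the marking head over the given point order."""
    return sum(
        ((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:])
    )

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [g for g in p2 if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    """Swap two positions with the given probability."""
    order = order[:]
    if random.random() < rate:
        a, b = random.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
    return order

def optimize_marking_order(pts, pop_size=60, generations=200):
    """Evolve a visiting order that minimizes travel distance between marks."""
    pop = [random.sample(range(len(pts)), len(pts)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[: pop_size // 4]  # keep the best quarter as parents
        children = [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return min(pop, key=lambda o: tour_length(o, pts))
```

On random point sets this reliably beats an unoptimized marking order; a real controller would also have to account for galvanometer dynamics and laser on/off timing.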
Authors: Louise Lu, Volker Sick.
Published: 06-24-2013
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame rate (> 1 kHz) measurements capable of resolving transient flows with high temporal resolution. Therefore, high-frame rate measurements have enabled investigations on the evolution of the structure and dynamics of highly transient flows. These investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included.
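At the heart of the analysis algorithms mentioned above is cross-correlation of interrogation windows between consecutive frames: the location of the correlation peak gives the mean particle displacement, which divided by the interframe time yields velocity. A minimal FFT-based sketch with integer-pixel accuracy (production PIV codes add sub-pixel peak fitting, window overlap, and vector validation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the particle displacement between two interrogation windows
    from the peak of their FFT-based circular cross-correlation.
    Returns (dy, dx) with integer-pixel accuracy."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices above N/2 wrap around to negative displacements
    dy = peak[0] if peak[0] <= corr.shape[0] // 2 else peak[0] - corr.shape[0]
    dx = peak[1] if peak[1] <= corr.shape[1] // 2 else peak[1] - corr.shape[1]
    return int(dy), int(dx)
```

Applying this to a window and a circularly shifted copy recovers the imposed shift exactly, which is a common sanity check before processing real particle images.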
Laser Capture Microdissection of Mammalian Tissue
Authors: Robert A Edwards.
Institutions: University of California, Irvine (UCI).
Laser capture microdissection, also known as laser microdissection (LMD), enables the user to isolate small numbers of cells or tissues from frozen or formalin-fixed, paraffin-embedded tissue sections. LMD techniques rely on a thermolabile membrane placed either on top of, or underneath, the tissue section. In one method, focused laser energy is used to melt the membrane onto the underlying cells, which can then be lifted out of the tissue section. In the other method, the laser energy vaporizes the foil along a path "drawn" on the tissue, allowing the selected cells to fall into a collection device. Each technique allows the selection of cells with a minimum resolution of several microns. DNA, RNA, protein, and lipid samples may be isolated and analyzed from microdissected samples. In this video, we demonstrate the use of the Leica AS-LMD laser microdissection instrument in seven segments, including an introduction to the principles of LMD, initializing the instrument for use, general considerations for sample preparation, mounting the specimen and setting up capture tubes, aligning the microscope, adjusting the capture controls, and capturing tissue specimens. Laser capture microdissection enables the investigator to isolate samples of pure cell populations as small as a few cell-equivalents. This allows the analysis of cells of interest that are free of neighboring contaminants, which may confound experimental results.
Issue 8, Basic Protocols, Laser Capture Microdissection, Microdissection Techniques, Leica
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from normal, by building it up molecule by molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
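Because the STORM image is built up molecule by molecule, the key computational step is localizing each diffraction-limited spot far below the pixel size. rainSTORM uses more sophisticated fitting; the sketch below illustrates the principle with an intensity-weighted centroid on a simulated spot (the 100 nm pixel size and the PSF width are assumed values):

```python
import numpy as np

def gaussian_spot(shape, yc, xc, sigma=1.3, amp=1000.0):
    """Simulated diffraction-limited single-molecule image (2D Gaussian PSF),
    with center (yc, xc) and width sigma given in pixel units."""
    ys, xs = np.indices(shape)
    return amp * np.exp(-((ys - yc) ** 2 + (xs - xc) ** 2) / (2 * sigma ** 2))

def localize_centroid(spot, pixel_nm=100.0):
    """Localize one spot by its intensity-weighted centroid, in nanometers.
    A crude median background subtraction is applied first."""
    spot = np.clip(spot - np.median(spot), 0.0, None)
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return ((ys * spot).sum() / total * pixel_nm,
            (xs * spot).sum() / total * pixel_nm)
```

The centroid lands well inside one pixel of the true position; Gaussian fitting improves on this further, particularly at low photon counts.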
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
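The ~10-30 nm localization precision quoted above is set mainly by the number of photons detected per molecule. A widely used estimate is the Thompson-Larson-Webb formula, sketched here (the example PSF width and pixel size are assumptions):

```python
import math

def localization_precision(sigma_psf_nm, n_photons, pixel_nm=100.0,
                           background_rms=0.0):
    """Approximate lateral localization precision (nm) per the commonly used
    Thompson-Larson-Webb formula: a PSF-width term and a pixelation term,
    both shot-noise limited by photon count, plus a background term."""
    s2 = sigma_psf_nm ** 2
    a2 = pixel_nm ** 2
    var = (s2 + a2 / 12.0) / n_photons \
        + 8.0 * math.pi * s2 ** 2 * background_rms ** 2 / (a2 * n_photons ** 2)
    return math.sqrt(var)
```

For a 250 nm PSF and 1,000 detected photons this gives roughly 8 nm, consistent with the precision range stated in the abstract; quadrupling the photon count halves the value.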
A Microscopic Phenotypic Assay for the Quantification of Intracellular Mycobacteria Adapted for High-throughput/High-content Screening
Authors: Christophe. J Queval, Ok-Ryul Song, Vincent Delorme, Raffaella Iantomasi, Romain Veyron-Churlet, Nathalie Deboosère, Valérie Landry, Alain Baulard, Priscille Brodin.
Institutions: Université de Lille.
Despite the availability of therapy and a vaccine, tuberculosis (TB) remains one of the most deadly and widespread bacterial infections in the world. For several decades, the emergence of multi- and extensively drug-resistant strains has been a serious threat to the control of tuberculosis. It is therefore essential to identify new targets and pathways critical for the causative agent of tuberculosis, Mycobacterium tuberculosis (Mtb), and to search for novel chemicals that could become TB drugs. One approach is to establish methods suitable for genetic and chemical screening of large-scale libraries, enabling the search for a needle in a haystack. To this end, we developed a phenotypic assay relying on the detection of fluorescently labeled Mtb within fluorescently labeled host cells using automated confocal microscopy. This in vitro assay allows image-based quantification of the colonization of host cells by Mtb and was optimized for the 384-well microplate format, which is suitable for screens of siRNA, chemical compound, or Mtb mutant libraries. The images are then processed for multiparametric analysis, which provides a readout reflecting the pathogenesis of Mtb within host cells.
Infection, Issue 83, Mycobacterium tuberculosis, High-content/High-throughput screening, chemogenomics, Drug Discovery, siRNA library, automated confocal microscopy, image-based analysis
Highly Resolved Intravital Striped-illumination Microscopy of Germinal Centers
Authors: Zoltan Cseresnyes, Laura Oehme, Volker Andresen, Anje Sporbert, Anja E. Hauser, Raluca Niesner.
Institutions: Leibniz Institute, Max-Delbrück Center for Molecular Medicine, LaVision Biotec GmbH, Charité - University of Medicine.
Monitoring cellular communication by intravital deep-tissue multi-photon microscopy is the key for understanding the fate of immune cells within thick tissue samples and organs in health and disease. By controlling the scanning pattern in multi-photon microscopy and applying appropriate numerical algorithms, we developed a striped-illumination approach, which enabled us to achieve 3-fold better axial resolution and improved signal-to-noise ratio, i.e. contrast, at more than 100 µm tissue depth within highly scattering tissue of lymphoid organs as compared to standard multi-photon microscopy. The acquisition speed as well as photobleaching and photodamage effects were similar to those of the standard photomultiplier-based technique, whereas the imaging depth was slightly lower due to the use of field detectors. Using the striped-illumination approach, we are able to observe the dynamics of immune complex deposits on secondary follicular dendritic cells, at the level of a few protein molecules, in germinal centers.
Immunology, Issue 86, two-photon laser scanning microscopy, deep-tissue intravital imaging, germinal center, lymph node, high-resolution, enhanced contrast
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
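Software-guided DoE setups rest on two ingredients, a planned design matrix and a regression model linking factors to response. A generic sketch with a two-level full-factorial design and a main-effects least-squares fit (coded factor levels and the model form are standard DoE conventions, not the authors' exact design):

```python
import itertools
import numpy as np

def full_factorial(n_factors):
    """All 2^k combinations of coded factor levels (-1 / +1)."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))

def fit_main_effects(design, response):
    """Least-squares fit of intercept + main effects to measured responses.
    Returns [intercept, effect_1, ..., effect_k]."""
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, response, rcond=None)
    return coef
```

Because a two-level full factorial is orthogonal, the fitted effects are recovered exactly for a noiseless linear response; fractional and augmented designs trade runs for confounded interactions.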
Measurement and Analysis of Atomic Hydrogen and Diatomic Molecular AlO, C2, CN, and TiO Spectra Following Laser-induced Optical Breakdown
Authors: Christian G. Parigger, Alexander C. Woods, Michael J. Witte, Lauren D. Swafford, David M. Surmick.
Institutions: University of Tennessee Space Institute.
In this work, we present time-resolved measurements of atomic and diatomic spectra following laser-induced optical breakdown. A typical LIBS arrangement is used. Here we operate a Nd:YAG laser at a repetition rate of 10 Hz at the fundamental wavelength of 1,064 nm. The 14 ns pulses with an energy of 190 mJ/pulse are focused to a 50 µm spot size to generate a plasma from optical breakdown or laser ablation in air. The microplasma is imaged onto the entrance slit of a 0.6 m spectrometer, and spectra are recorded using a 1,800 grooves/mm grating and either an intensified linear diode array with optical multichannel analyzer (OMA) or an ICCD. Of interest are Stark-broadened atomic lines of the hydrogen Balmer series, used to infer electron density. We also elaborate on temperature measurements from diatomic emission spectra of aluminum monoxide (AlO), carbon (C2), cyanogen (CN), and titanium monoxide (TiO). The experimental procedures include wavelength and sensitivity calibrations. Analysis of the recorded molecular spectra is accomplished by fitting the data with tabulated line strengths. Furthermore, Monte-Carlo type simulations are performed to estimate the error margins. Time-resolved measurements are essential for the transient plasma commonly encountered in LIBS.
Physics, Issue 84, Laser Induced Breakdown Spectroscopy, Laser Ablation, Molecular Spectroscopy, Atomic Spectroscopy, Plasma Diagnostics
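One standard route from measured line intensities to temperature is a Boltzmann plot: for optically thin atomic lines, ln(Iλ/gA) is linear in the upper-level energy with slope -1/(k_B T). A minimal sketch on synthetic data (the authors' fits of molecular spectra against tabulated line strengths are considerably more involved):

```python
import numpy as np

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(intensity, wavelength_nm, g, A, E_upper_eV):
    """Excitation temperature (K) from a Boltzmann plot: the slope of
    ln(I * lambda / (g * A)) versus upper-level energy equals -1/(k_B * T)."""
    y = np.log(intensity * wavelength_nm / (g * A))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)
```

With real spectra the intensities must first be corrected using the sensitivity calibration mentioned above, and at least three well-separated upper levels are needed for a meaningful slope.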
SIVQ-LCM Protocol for the ArcturusXT Instrument
Authors: Jason D. Hipp, Jerome Cheng, Jeffrey C. Hanson, Avi Z. Rosenberg, Michael R. Emmert-Buck, Michael A. Tangrea, Ulysses J. Balis.
Institutions: National Institutes of Health, University of Michigan.
SIVQ-LCM is a new methodology that automates and streamlines the more traditional, user-dependent laser dissection process. It aims to create an advanced, rapidly customizable laser dissection platform technology. In this report, we describe the integration of the image analysis software Spatially Invariant Vector Quantization (SIVQ) onto the ArcturusXT instrument. The ArcturusXT system contains both an infrared (IR) and ultraviolet (UV) laser, allowing for specific cell or large area dissections. The principal goal is to improve the speed, accuracy, and reproducibility of the laser dissection to increase sample throughput. This novel approach facilitates microdissection of both animal and human tissues in research and clinical workflows.
Bioengineering, Issue 89, SIVQ, LCM, personalized medicine, digital pathology, image analysis, ArcturusXT
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding which segmentation approach to take. The six 3D ultrastructural data sets presented here were obtained by three different imaging approaches: electron tomography of stained, resin-embedded samples, and focused ion beam and serial block-face scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
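As a toy instance of the semi-automated category (approach 3), global thresholding followed by connected-component labeling already isolates well-contrasted features. The threshold heuristic and minimum-size filter below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def threshold_segment(volume, threshold=None, min_size=1):
    """Semi-automated segmentation sketch: global threshold (fallback heuristic:
    mean + 1 SD) followed by connected-component labeling and removal of
    components smaller than min_size voxels.  Returns (label image, count)."""
    if threshold is None:
        threshold = volume.mean() + volume.std()
    mask = volume > threshold
    labels, n = ndimage.label(mask)
    keep = [lab for lab in range(1, n + 1) if (labels == lab).sum() >= min_size]
    segmented = np.where(np.isin(labels, keep), labels, 0)
    return segmented, len(keep)
```

This works for crisp, well-separated features; the noisier or more crowded data sets described above are exactly the ones that push the analysis toward manual tracing or custom algorithms.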
Rapid Genotyping of Animals Followed by Establishing Primary Cultures of Brain Neurons
Authors: Jin-Young Koh, Sadahiro Iwabuchi, Zhengmin Huang, N. Charles Harata.
Institutions: University of Iowa Carver College of Medicine, EZ BioResearch LLC.
High-resolution analysis of the morphology and function of mammalian neurons often requires the genotyping of individual animals followed by the analysis of primary cultures of neurons. We describe a set of procedures for: labeling newborn mice to be genotyped, rapid genotyping, and establishing low-density cultures of brain neurons from these mice. Individual mice are labeled by tattooing, which allows for long-term identification lasting into adulthood. Genotyping by the described protocol is fast and efficient, and allows for automated extraction of nucleic acid with good reliability. This is useful under circumstances where sufficient time for conventional genotyping is not available, e.g., in mice that suffer from neonatal lethality. Primary neuronal cultures are generated at low density, which enables imaging experiments at high spatial resolution. This culture method requires the preparation of glial feeder layers prior to neuronal plating. The protocol is applied in its entirety to a mouse model of the movement disorder DYT1 dystonia (ΔE-torsinA knock-in mice), and neuronal cultures are prepared from the hippocampus, cerebral cortex and striatum of these mice. This protocol can be applied to mice with other genetic mutations, as well as to animals of other species. Furthermore, individual components of the protocol can be used for isolated sub-projects. Thus this protocol will have wide applications, not only in neuroscience but also in other fields of biological and medical sciences.
Neuroscience, Issue 95, AP2, genotyping, glial feeder layer, mouse tail, neuronal culture, nucleic-acid extraction, PCR, tattoo, torsinA
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Authors: Sandra Zehentmeier, Zoltan Cseresnyes, Juan Escribano Navarro, Raluca A. Niesner, Anja E. Hauser.
Institutions: German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool which is suitable for testing this hypothesis in the case of hematopoietic as well as stromal cells, is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
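The simulation tool described above asks how often random positioning alone would reproduce the observed number of contacts. A hedged sketch of such a Monte-Carlo null model, simplifying cells to point coordinates and defining contact by a distance threshold (both simplifications):

```python
import numpy as np

def contact_count(pos_a, pos_b, radius):
    """Number of type-A cells with at least one type-B cell within `radius`."""
    d = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=2)
    return int((d.min(axis=1) <= radius).sum())

def colocalization_p_value(pos_a, pos_b, radius, extent, n_sim=1000, seed=0):
    """Monte-Carlo test: fraction of random uniform placements of the B cells
    inside a square tissue region of side `extent` that yield at least as many
    contacts as observed.  Returns (observed contacts, p-value)."""
    rng = np.random.default_rng(seed)
    observed = contact_count(pos_a, pos_b, radius)
    hits = 0
    for _ in range(n_sim):
        random_b = rng.uniform(0.0, extent, size=pos_b.shape)
        if contact_count(pos_a, random_b, radius) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_sim + 1)
```

A small p-value indicates the co-localization is unlikely to arise from random positioning; a realistic version would also respect tissue boundaries and exclude cell-occupied volume.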
Imaging of Biological Tissues by Desorption Electrospray Ionization Mass Spectrometry
Authors: Rachel V. Bennett, Chaminda M. Gamage, Facundo M. Fernández.
Institutions: Georgia Institute of Technology.
Mass spectrometry imaging (MSI) provides untargeted molecular information with the highest specificity and spatial resolution for investigating biological tissues at length scales from hundreds down to tens of microns. When performed under ambient conditions, sample pre-treatment becomes unnecessary, thus simplifying the protocol while maintaining the high quality of the information obtained. Desorption electrospray ionization (DESI) is a spray-based ambient MSI technique that allows for the direct sampling of surfaces in the open air, even in vivo. When used with a software-controlled sample stage, the sample is rastered underneath the DESI ionization probe, and through the time domain, m/z information is correlated with the chemical species' spatial distribution. The fidelity of the DESI-MSI output depends on the source orientation and positioning with respect to the sample surface and mass spectrometer inlet. Herein, we review how to prepare tissue sections for DESI imaging and the additional experimental conditions that directly affect image quality. Specifically, we describe the protocol for the imaging of rat brain tissue sections by DESI-MSI.
Bioengineering, Issue 77, Molecular Biology, Biomedical Engineering, Chemistry, Biochemistry, Biophysics, Physics, Cellular Biology, Molecular Imaging, Mass Spectrometry, MS, MSI, Desorption electrospray ionization, DESI, Ambient mass spectrometry, tissue, sectioning, biomarker, imaging
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational-wave observatories, whose sensitivity to gravitational-wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibration of their mirror masses. One promising approach to overcoming this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that, with the conventional sensing and control techniques known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of the generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
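The "more homogeneous light intensity distribution" of higher-order LG modes can be made concrete: at the beam waist, the radial intensity of an LG_{p,l} mode is proportional to (2r²/w²)^|l| [L_p^|l|(2r²/w²)]² exp(-2r²/w²). A small numerical sketch (normalization omitted):

```python
import numpy as np

def genlaguerre(p, l, x):
    """Generalized Laguerre polynomial L_p^l(x) via the standard three-term
    recurrence (numpy-only alternative to scipy.special.genlaguerre)."""
    if p == 0:
        return np.ones_like(x)
    prev, cur = np.ones_like(x), 1.0 + l - x
    for k in range(1, p):
        prev, cur = cur, ((2 * k + 1 + l - x) * cur - (k + l) * prev) / (k + 1)
    return cur

def lg_intensity(r, p, l, w=1.0):
    """Unnormalized radial intensity of a Laguerre-Gauss LG_{p,l} beam of
    waist w, evaluated at the beam waist plane."""
    u = 2.0 * r ** 2 / w ** 2
    return u ** abs(l) * genlaguerre(p, abs(l), u) ** 2 * np.exp(-u)
```

The fundamental mode peaks on axis, whereas modes with l ≠ 0 have a dark core and the p index adds radial nodes; the broad, flatter ring structure of modes such as LG_{3,3} is what averages the mirror's thermal fluctuations more effectively.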
Laser-Induced Chronic Ocular Hypertension Model on SD Rats
Authors: Kin Chiu, Raymond Chang, Kwok-Fai So.
Institutions: The University of Hong Kong - HKU.
Glaucoma is one of the major causes of blindness in the world. Elevated intraocular pressure is a major risk factor. Laser photocoagulation-induced ocular hypertension is one of the well-established animal models. This video demonstrates how to induce ocular hypertension by argon laser photocoagulation in rats.
Neuroscience, Issue 10, glaucoma, ocular hypertension, rat
Measuring Diffusion Coefficients via Two-photon Fluorescence Recovery After Photobleaching
Authors: Kelley D. Sullivan, Edward B. Brown.
Institutions: University of Rochester.
Multiphoton fluorescence recovery after photobleaching (MP-FRAP) is a microscopy technique used to measure the diffusion coefficient (or analogous transport parameters) of macromolecules, and can be applied to both in vitro and in vivo biological systems. MP-FRAP is performed by photobleaching a region of interest within a fluorescent sample using an intense laser flash, then attenuating the beam and monitoring the fluorescence as still-fluorescent molecules from outside the region of interest diffuse in to replace the photobleached molecules. We will begin our demonstration by aligning the laser beam through the Pockels cell (laser modulator) and along the optical path through the laser scan box and objective lens to the sample. For simplicity, we will use a sample of aqueous fluorescent dye. We will then determine the proper experimental parameters for our sample, including monitor and bleach powers, bleach duration, bin widths (for photon counting), and fluorescence recovery time. Next, we will describe the procedure for taking recovery curves, a process that can be largely automated via LabVIEW (National Instruments, Austin, TX) for enhanced throughput. Finally, the diffusion coefficient is determined by fitting the recovery data to the appropriate mathematical model using a least-squares fitting algorithm, readily programmed using software such as MATLAB (The MathWorks, Natick, MA).
Cellular Biology, Issue 36, Diffusion, fluorescence recovery after photobleaching, MP-FRAP, FPR, multi-photon
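The final fitting step can be illustrated with the widely used Soumpasis model for recovery after bleaching a uniform circular spot, F(t) = e^(-2τ_D/t)[I₀(2τ_D/t) + I₁(2τ_D/t)], with D = w²/(4τ_D). The sketch below uses Python in place of the MATLAB routine named above (the circular-spot model and the spot radius are assumptions about the experiment):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import i0e, i1e

def soumpasis(t, tau_d):
    """Soumpasis recovery model for a uniform circular bleach spot, normalized
    to full recovery.  i0e/i1e are exponentially scaled Bessel functions, so
    exp(-x) * (I0(x) + I1(x)) is computed without overflow at small t."""
    x = 2.0 * tau_d / t
    return i0e(x) + i1e(x)

def fit_diffusion_coefficient(t, recovery, spot_radius_um):
    """Least-squares fit of the characteristic diffusion time tau_d, then
    convert to a diffusion coefficient D = w^2 / (4 * tau_d) (um^2/s)."""
    (tau_d,), _ = curve_fit(soumpasis, t, recovery, p0=[np.median(t)])
    return spot_radius_um ** 2 / (4.0 * tau_d)
```

Multiphoton FRAP uses a related model with the squared Gaussian focal volume of two-photon excitation; the fitting workflow is the same.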
Quantitative Real-Time PCR using the Thermo Scientific Solaris qPCR Assay
Authors: Christy Ogrean, Ben Jackson, James Covino.
Institutions: Thermo Scientific Solaris qPCR Products.
The Solaris qPCR Gene Expression Assay is a novel type of primer/probe set designed to simplify the qPCR process while maintaining the sensitivity and accuracy of the assay. These primer/probe sets are pre-designed against >98% of the human and mouse genomes and feature significant improvements over previously available technologies. These improvements were made possible by a novel design algorithm developed by Thermo Scientific bioinformatics experts. Several convenient features have been incorporated into the Solaris qPCR Assay to streamline the process of performing quantitative real-time PCR. First, the protocol is similar to commonly employed alternatives, so the methods used during qPCR are likely to be familiar. Second, the master mix is blue, which makes the setup of qPCR reactions easier to track. Third, the thermal cycling conditions are the same for all assays (genes), making it possible to run many samples at a time and reducing the potential for error. Finally, the probe and primer sequence information are provided, simplifying the publication process. Here, we demonstrate how to obtain the appropriate Solaris reagents using the GENEius product search feature found on the ordering web site, and how to use the Solaris reagents for performing qPCR using the standard curve method.
Cellular Biology, Issue 40, qPCR, probe, real-time PCR, molecular biology, Solaris, primer, gene expression assays
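The standard curve method mentioned at the end reduces to a linear fit of Ct against log10 of the starting quantity: the slope gives the amplification efficiency (E = 1.0 means perfect doubling per cycle), and inverting the fit quantifies unknowns. A minimal sketch with illustrative dilution values:

```python
import numpy as np

def standard_curve(quantities, ct_values):
    """Fit Ct versus log10(starting quantity) for a dilution series.
    Returns (slope, intercept, efficiency) with E = 10^(-1/slope) - 1;
    a slope of about -3.32 corresponds to 100% efficiency."""
    slope, intercept = np.polyfit(np.log10(quantities), ct_values, 1)
    return slope, intercept, 10.0 ** (-1.0 / slope) - 1.0

def quantify(ct, slope, intercept):
    """Starting quantity of an unknown sample from its Ct via the curve."""
    return 10.0 ** ((ct - intercept) / slope)
```

In practice the dilution series should span the expected range of the unknowns, and efficiencies far from 1.0 signal assay or template problems.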
Development of automated imaging and analysis for zebrafish chemical screens.
Authors: Andreas Vogt, Hiba Codore, Billy W. Day, Neil A. Hukriede, Michael Tsang.
Institutions: University of Pittsburgh Drug Discovery Institute, University of Pittsburgh.
We demonstrate the application of image-based high-content screening (HCS) methodology to identify small molecules that can modulate the FGF/RAS/MAPK pathway in zebrafish embryos. The zebrafish embryo is an ideal system for in vivo high-content chemical screens. The 1-day-old embryo is approximately 1 mm in diameter and can be easily arrayed into 96-well plates, a standard format for high-throughput screening. During the first day of development, embryos are transparent with most of the major organs present, thus enabling visualization of tissue formation during embryogenesis. The complete automation of zebrafish chemical screens is still a challenge, however, particularly in the development of automated image acquisition and analysis. We previously generated a transgenic reporter line that expresses green fluorescent protein (GFP) under the control of FGF activity and demonstrated its utility in chemical screens [1]. To establish methodology for high-throughput whole-organism screens, we developed a system for automated imaging and analysis of zebrafish embryos at 24-48 hours post fertilization (hpf) in 96-well plates [2]. In this video we highlight the procedures for arraying transgenic embryos into multiwell plates at 24 hpf and the addition of a small molecule (BCI) that hyperactivates FGF signaling [3]. The plates are incubated for 6 hours followed by the addition of tricaine to anesthetize larvae prior to automated imaging on a Molecular Devices ImageXpress Ultra laser scanning confocal HCS reader. Images are processed by Definiens Developer software using a Cognition Network Technology algorithm that we developed to detect and quantify expression of GFP in the heads of transgenic embryos. In this example we highlight the ability of the algorithm to measure dose-dependent effects of BCI on GFP reporter gene expression in treated embryos.
Cellular Biology, Issue 40, Zebrafish, Chemical Screens, Cognition Network Technology, Fibroblast Growth Factor, (E)-2-benzylidene-3-(cyclohexylamino)-2,3-dihydro-1H-inden-1-one (BCI), Tg(dusp6:d2EGFP)
How to Build a Laser Speckle Contrast Imaging (LSCI) System to Monitor Blood Flow
Authors: Adrien Ponticorvo, Andrew K. Dunn.
Institutions: University of Texas at Austin.
Laser Speckle Contrast Imaging (LSCI) is a simple yet powerful technique used for full-field imaging of blood flow. The technique analyzes fluctuations in a dynamic speckle pattern to detect the movement of particles, much as laser Doppler analyzes frequency shifts to determine particle speed. Because it can be used to monitor the movement of red blood cells, LSCI has become a popular tool for measuring blood flow in tissues such as the retina, skin, and brain. It has become especially useful in neuroscience, where blood flow changes during physiological events like functional activation, stroke, and spreading depolarization can be quantified. LSCI is also attractive because it provides excellent spatial and temporal resolution while using inexpensive instrumentation that can easily be combined with other imaging modalities. Here we show how to build an LSCI setup and demonstrate its ability to monitor blood flow changes in the brain during an animal experiment.
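The quantity LSCI computes is the local speckle contrast K = σ/⟨I⟩ over a small sliding window of the raw image; faster-moving scatterers blur the speckle pattern during the exposure and lower K. A minimal sketch in pure NumPy (the window size and loop-based implementation are illustrative choices, not the authors' setup):

```python
import numpy as np

def speckle_contrast(image, win=7):
    """Local speckle contrast K = sigma / mean over a sliding window.
    Lower K indicates a more blurred speckle pattern, i.e. faster flow."""
    h, w = image.shape
    k = np.zeros((h - win + 1, w - win + 1))
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            patch = image[i:i + win, j:j + win].astype(float)
            m = patch.mean()
            k[i, j] = patch.std() / m if m > 0 else 0.0
    return k
```

Real-time systems vectorize this with box filters, but the per-window statistic is the same.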
Neuroscience, Issue 45, blood flow, optical imaging, laser speckle, brain, rat
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro
Authors: Aya D. Pusic, Yelena Y. Grinberg, Heidi M. Mitchell, Richard P. Kraig.
Institutions: The University of Chicago Medical Center, The University of Chicago Medical Center.
Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects the electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examining systems biology aspects of migraine, since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations, where environmental conditions can be controlled, are necessary. Here, it is important to recognize the limitations of acute slices and the distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling, since preparing the slices alone triggers pro-inflammatory changes that last days, epileptiform behavior due to the high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures, since the cultures closely parallel their in vivo counterpart with mature trisynaptic function, show quiescent astrocytes, microglia, and cytokine levels, and allow SD to be easily induced in an unanesthetized preparation.
Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the only means to date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, cDNA preamplification for detection of ultra-low-level targets such as interferon-gamma, using whole, regional, or specific cell-enhanced (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed with multiplexed phosphoprotein-related targets, with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic and modulate SD immune signaling.
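For the qPCR screening step, relative gene-expression changes are conventionally reported as fold changes via the 2^-ΔΔCt method. The function below is that generic calculation only, not a description of the authors' specific analysis pipeline:

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method.
    dCt = Ct(target) - Ct(reference gene), computed separately for the
    treated and control samples; fold change = 2 ** -(dCt - dCt_control)."""
    d_ct = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct - d_ct_control)
```

For example, a treated sample whose target Ct drops two cycles relative to the control (after reference-gene normalization) corresponds to a 4-fold increase in expression.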
Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma
Optical Frequency Domain Imaging of Ex vivo Pulmonary Resection Specimens: Obtaining One to One Image to Histopathology Correlation
Authors: Lida P. Hariri, Matthew B. Applegate, Mari Mino-Kenudson, Eugene J. Mark, Brett E. Bouma, Guillermo J. Tearney, Melissa J. Suter.
Institutions: Harvard Medical School, Massachusetts General Hospital, Harvard Medical School, Massachusetts General Hospital, Harvard Medical School.
Lung cancer is the leading cause of cancer-related deaths1. Squamous cell and small cell cancers typically arise in association with the conducting airways, whereas adenocarcinomas are typically more peripheral in location. Lung malignancy detection early in the disease process may be difficult due to several limitations: radiological resolution, bronchoscopic limitations in evaluating tissue underlying the airway mucosa and identifying early pathologic changes, and small sample size and/or incomplete sampling in histology biopsies. High resolution imaging modalities, such as optical frequency domain imaging (OFDI), provide non-destructive, large area 3-dimensional views of tissue microstructure to depths approaching 2 mm in real time (Figure 1)2-6. OFDI has been utilized in a variety of applications, including evaluation of coronary artery atherosclerosis6,7 and esophageal intestinal metaplasia and dysplasia6,8-10. Bronchoscopic OCT/OFDI has been demonstrated as a safe in vivo imaging tool for evaluating the pulmonary airways11-23 (Animation). OCT has been assessed in the pulmonary airways16,23 and parenchyma17,22 of animal models and in vivo human airways14,15. OCT imaging of the normal airway has demonstrated visualization of airway layering and alveolar attachments, and evaluation of dysplastic lesions has been found useful in distinguishing grades of dysplasia in the bronchial mucosa11,12,20,21. OFDI imaging of bronchial mucosa has been demonstrated in a short bronchial segment (0.8 cm)18. Additionally, volumetric OFDI spanning multiple airway generations in swine and human pulmonary airways in vivo has been described19. Endobronchial OCT/OFDI is typically performed using thin, flexible catheters, which are compatible with standard bronchoscopic access ports. Additionally, OCT and OFDI needle-based probes have recently been developed, which may be used to image regions of the lung beyond the airway wall or pleural surface17.
While OCT/OFDI has been utilized and demonstrated as feasible for in vivo pulmonary imaging, no studies with precisely matched one-to-one OFDI:histology have been performed. Therefore, specific imaging criteria for various pulmonary pathologies have yet to be developed. Histopathological counterparts obtained in vivo consist of only small biopsy fragments, which are difficult to correlate with large OFDI datasets. Additionally, they do not provide the comprehensive histology needed for registration with large volume OFDI. As a result, specific imaging features of pulmonary pathology cannot be developed in the in vivo setting. Precisely matched, one-to-one OFDI and histology correlation is vital to accurately evaluate features seen in OFDI against histology as a gold standard in order to derive specific image interpretation criteria for pulmonary neoplasms and other pulmonary pathologies. Once specific imaging criteria have been developed and validated ex vivo with matched one-to-one histology, the criteria may then be applied to in vivo imaging studies. Here, we present a method for precise, one-to-one correlation between high-resolution optical imaging and histology in ex vivo lung resection specimens. Throughout this manuscript, we describe the techniques used to match OFDI images to histology. However, this method is not specific to OFDI and can be used to obtain histology-registered images for any optical imaging technique. We performed airway-centered OFDI with a specialized custom-built bronchoscopic 2.4 French (0.8 mm diameter) catheter. Tissue samples were marked with tissue dye, visible in both OFDI and histology. Careful orientation procedures were used to precisely correlate imaging and histological sampling locations. The techniques outlined in this manuscript were used to conduct the first demonstration of volumetric OFDI with precise correlation to tissue-based diagnosis for evaluating pulmonary pathology24.
This straightforward, effective technique may be extended to other tissue types to provide precise imaging to histology correlation needed to determine fine imaging features of both normal and diseased tissues.
Bioengineering, Issue 71, Medicine, Biomedical Engineering, Anatomy, Physiology, Cancer Biology, Pathology, Surgery, Bronchoscopic imaging, In vivo optical microscopy, Optical imaging, Optical coherence tomography, Optical frequency domain imaging, Histology correlation, animal model, histopathology, airway, lung, biopsy, imaging
Optical Recording of Suprathreshold Neural Activity with Single-cell and Single-spike Resolution
Authors: Gayathri Nattar Ranganathan, Helmut J. Koester.
Institutions: The University of Texas at Austin.
Signaling of information in the vertebrate central nervous system is often carried by populations of neurons rather than individual neurons. Propagation of suprathreshold spiking activity likewise involves populations of neurons. Empirical studies addressing cortical function directly thus require recordings from populations of neurons with high resolution. Here we describe an optical method and a deconvolution algorithm to record neural activity from up to 100 neurons with single-cell and single-spike resolution. This method relies on detection of the transient increases in intracellular somatic calcium concentration associated with suprathreshold electrical spikes (action potentials) in cortical neurons. High temporal resolution of the optical recordings is achieved by a fast random-access scanning technique using acousto-optical deflectors (AODs)1. Two-photon excitation of the calcium-sensitive dye results in high spatial resolution in opaque brain tissue2. Reconstruction of spikes from the fluorescence calcium recordings is achieved by a maximum-likelihood method. Simultaneous electrophysiological and optical recordings indicate that our method reliably detects spikes (>97% spike detection efficiency), has a low rate of false positive spike detection (<0.003 spikes/sec), and a high temporal precision (about 3 msec)3. This optical method of spike detection can be used to record neural activity in vitro and in anesthetized animals in vivo3,4.
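The paper's spike reconstruction is a maximum-likelihood method; as a much simpler stand-in that conveys the idea, the sketch below flags calcium transients by thresholding the frame-to-frame rise of a ΔF/F trace (the threshold value is an arbitrary illustrative choice, not a parameter from the paper):

```python
import numpy as np

def detect_spikes(dff, rise_threshold=0.05):
    """Naive spike detection from a fluorescence dF/F trace: flag frames
    whose frame-to-frame rise exceeds a threshold. A spike produces a fast
    calcium influx (sharp rise) followed by a slow decay, so the onset
    frames of transients are the ones flagged."""
    rises = np.diff(dff)
    return np.flatnonzero(rises > rise_threshold) + 1
```

A maximum-likelihood approach instead fits a transient template to the trace and scores candidate spike times, which is what gives the reported detection efficiency and temporal precision.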
Neuroscience, Issue 67, functional calcium imaging, spatiotemporal patterns of activity, dithered random-access scanning
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Authors: Tadd T. Truscott, Jesse Belden, Joseph R. Nielson, David J. Daily, Scott L. Thomson.
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
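The core of synthetic aperture (SA) refocusing is a shift-and-average: each camera's image is translated in proportion to its baseline offset and a candidate depth, so objects at that depth align across views and appear sharp while occluders smear out. A toy sketch with integer-pixel shifts follows; real implementations use calibrated homographies per camera rather than this simplified parallax model:

```python
import numpy as np

def sa_refocus(images, camera_offsets, depth):
    """Synthetic-aperture refocus at one depth: shift each camera image
    by its parallax at that depth, then average across cameras."""
    h, w = images[0].shape
    stack = np.zeros((h, w))
    for img, (dx, dy) in zip(images, camera_offsets):
        # integer pixel shift proportional to baseline / depth (toy model)
        sx, sy = int(round(dx / depth)), int(round(dy / depth))
        stack += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return stack / len(images)

def focal_stack(images, offsets, depths):
    """Build the 3D focal stack by refocusing at each candidate depth."""
    return [sa_refocus(images, offsets, d) for d in depths]
```

Sweeping `depth` through the volume yields the focal stack from which particles or bubbles can then be localized in 3D.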
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal cords, bubbles, flow, fluids
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
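The sequence-selection stage is an optimization over the space of amino acid sequences. Protein WISDOM's own solvers are not reproduced here; purely to illustrate the shape of the search problem, the toy Metropolis sampler below mutates one residue at a time against a user-supplied energy function (both the sampler and the energy function are hypothetical stand-ins for the actual potential-energy minimization):

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def optimize_sequence(energy, seq, steps=2000, temperature=1.0, seed=0):
    """Toy Metropolis search over sequence space: propose single-residue
    mutations, accept downhill moves always and uphill moves with
    Boltzmann probability, and track the best sequence seen."""
    rng = random.Random(seed)
    cur, cur_e = list(seq), energy(seq)
    best, best_e = cur[:], cur_e
    for _ in range(steps):
        trial = cur[:]
        trial[rng.randrange(len(trial))] = rng.choice(AMINO_ACIDS)
        e = energy("".join(trial))
        if e <= cur_e or rng.random() < math.exp((cur_e - e) / temperature):
            cur, cur_e = trial, e
            if e < best_e:
                best, best_e = trial[:], e
    return "".join(best), best_e
```

In the real pipeline the energy function encodes physics-based pairwise interaction terms and the optimization is deterministic rather than stochastic, but the objective, minimizing potential energy over sequence space, is the same.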
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Morris Water Maze Test: Optimization for Mouse Strain and Testing Environment
Authors: Daniel S. Weitzner, Elizabeth B. Engler-Chiurazzi, Linda A. Kotilinek, Karen Hsiao Ashe, Miranda Nicole Reed.
Institutions: West Virginia University, West Virginia University, N. Bud Grossman Center for Memory Research and Care, University of Minnesota, N. Bud Grossman Center for Memory Research and Care, University of Minnesota, GRECC, VA Medical Center, West Virginia University.
The Morris water maze (MWM) is a commonly used task to assess hippocampal-dependent spatial learning and memory in transgenic mouse models of disease, including neurocognitive disorders such as Alzheimer’s disease. However, the background strain of the mouse model used can have a substantial effect on the observed behavioral phenotype, with some strains exhibiting superior learning ability relative to others. To ensure differences between transgene-negative and transgene-positive mice can be detected, identification of a training procedure sensitive to the background strain is essential. Failure to tailor the MWM protocol to the background strain of the mouse model may lead to under- or over-training, thereby masking group differences in probe trials. Here, a MWM protocol tailored for use with the F1 FVB/N x 129S6 background is described. This is a frequently used background strain for studying the age-dependent effects of mutant P301L tau (rTg(TauP301L)4510 mice) on the memory deficits associated with Alzheimer’s disease. Also described is a strategy to re-optimize the protocol, as dictated by the particular testing environment utilized.
Behavior, Issue 100, Spatial learning, spatial reference memory, Morris water maze, Alzheimer’s disease, behavior, tau, hippocampal-dependent learning, rTg4510, Tg2576, strain background, transgenic mouse models
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
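JoVE does not publish the details of its matching algorithm; as a generic illustration of how an abstract can be ranked against a library of video descriptions, a minimal bag-of-words cosine-similarity ranker might look like this (all names and the word-overlap scoring are assumptions for illustration only):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_matches(abstract, video_descriptions, k=10):
    """Rank video descriptions against one abstract by word overlap,
    keeping the k best matches with nonzero similarity."""
    query = Counter(abstract.lower().split())
    scored = [(cosine_similarity(query, Counter(d.lower().split())), d)
              for d in video_descriptions]
    return [d for s, d in sorted(scored, key=lambda x: -x[0])[:k] if s > 0]
```

Production systems typically add TF-IDF weighting, stemming, and stop-word removal on top of this basic scheme.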

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.