Craniofacial reconstruction using rational cubic ball curves.
Published: 04-17-2015
This paper proposes the reconstruction of craniofacial fractures using rational cubic Ball curves. Ball curves were chosen over Bézier curves for their computational efficiency. The main steps are conversion of Digital Imaging and Communications in Medicine (DICOM) images to binary images, boundary extraction and corner point detection, Ball curve fitting with a genetic algorithm, and conversion of the final solution back to DICOM format. The last section illustrates a real case of craniofacial reconstruction using the proposed method, which clearly demonstrates its applicability. A graphical user interface (GUI) has also been developed for practical application.
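The cubic Ball basis underlying the method can be sketched in a few lines. This is an illustrative evaluator only: the control points and weights below are made up, and the paper obtains the fit with a genetic algorithm, which is not shown here.

```python
# Sketch of a rational cubic Ball curve evaluator (hypothetical helper,
# not the authors' code). ctrl holds control points, weights the rational weights.

def ball_basis(t):
    """Cubic Ball basis functions; they sum to 1 for t in [0, 1]."""
    return [(1 - t) ** 2,
            2 * t * (1 - t) ** 2,
            2 * t ** 2 * (1 - t),
            t ** 2]

def rational_ball_point(ctrl, weights, t):
    """Evaluate the rational cubic Ball curve at parameter t."""
    basis = ball_basis(t)
    denom = sum(w * b for w, b in zip(weights, basis))
    return tuple(
        sum(w * b * p[k] for w, b, p in zip(weights, basis, ctrl)) / denom
        for k in range(len(ctrl[0]))
    )

pts = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
w = [1.0, 1.5, 1.5, 1.0]
# Endpoints are interpolated, just as with Bezier curves.
assert rational_ball_point(pts, w, 0.0) == (0.0, 0.0)
assert rational_ball_point(pts, w, 1.0) == (4.0, 0.0)
```

Because the basis functions form a partition of unity, the rational form inherits endpoint interpolation, which is what makes the curve suitable for fitting extracted fracture boundaries.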
Authors: Ed Lim, Kshitij Modi, Anna Christensen, Jeff Meganck, Stephen Oldfield, Ning Zhang.
Published: 04-14-2011
Following intracardiac delivery of MDA-MB-231-luc-D3H2LN cells to Nu/Nu mice, systemic metastases developed in the injected animals. Bioluminescence imaging using IVIS Spectrum was employed to monitor the distribution and development of the tumor cells following the delivery procedure including DLIT reconstruction to measure the tumor signal and its location. Development of metastatic lesions to the bone tissues triggers osteolytic activity and lesions to tibia and femur were evaluated longitudinally using micro CT. Imaging was performed using a Quantum FX micro CT system with fast imaging and low X-ray dose. The low radiation dose allows multiple imaging sessions to be performed with a cumulative X-ray dosage far below LD50. A mouse imaging shuttle device was used to sequentially image the mice with both IVIS Spectrum and Quantum FX achieving accurate animal positioning in both the bioluminescence and CT images. The optical and CT data sets were co-registered in 3-dimentions using the Living Image 4.1 software. This multi-mode approach allows close monitoring of tumor growth and development simultaneously with osteolytic activity.
Simulation of the Planetary Interior Differentiation Processes in the Laboratory
Authors: Yingwei Fei.
Institutions: Carnegie Institution of Washington.
A planetary interior is under high-pressure and high-temperature conditions and has a layered structure. Two important processes led to that layered structure: (1) percolation of liquid metal in a solid silicate matrix during planet differentiation, and (2) inner core crystallization during subsequent planet cooling. We conduct high-pressure and high-temperature experiments to simulate both processes in the laboratory. Formation of a percolative planetary core depends on the efficiency of melt percolation, which is controlled by the dihedral (wetting) angle. The percolation simulation involves heating the sample at high pressure to a target temperature at which the iron-sulfur alloy is molten while the silicate remains solid, and then determining the true dihedral angle to evaluate the style of liquid migration in a crystalline matrix by 3D visualization. The 3D volume rendering is achieved by slicing the recovered sample with a focused ion beam (FIB) and taking an SEM image of each slice with a FIB/SEM crossbeam instrument. The second set of experiments is designed to understand inner core crystallization and element distribution between the liquid outer core and solid inner core by determining the melting temperature and element partitioning at high pressure. The melting experiments are conducted in a multi-anvil apparatus up to 27 GPa and extended to higher pressures in a laser-heated diamond-anvil cell. We have developed techniques to recover small heated samples by precision FIB milling and to obtain high-resolution images of the laser-heated spot that show the melting texture at high pressure. By analyzing the chemical compositions of the coexisting liquid and solid phases, we precisely determine the liquidus curve, providing data necessary to understand the inner core crystallization process.
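The role of the measured dihedral angle can be summarized by the standard percolation criterion; this is the textbook threshold, not the authors' 3D measurement code.

```python
# Minimal sketch of the dihedral-angle criterion for melt percolation:
# liquid metal forms an interconnected network in a solid silicate matrix
# when the median dihedral (wetting) angle is below 60 degrees.

def percolation_regime(dihedral_angle_deg):
    """Classify the style of liquid migration from the wetting angle."""
    if dihedral_angle_deg < 60.0:
        return "interconnected melt network (efficient percolation)"
    return "isolated melt pockets (percolation inhibited)"

assert "interconnected" in percolation_regime(45.0)
assert "isolated" in percolation_regime(85.0)
```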
Physics, Issue 81, Geophysics, Planetary Science, Geochemistry, Planetary interior, high-pressure, planet differentiation, 3D tomography
Ultrasonic Assessment of Myocardial Microstructure
Authors: Pranoti Hiremath, Michael Bauer, Hui-Wen Cheng, Kazumasa Unno, Ronglih Liao, Susan Cheng.
Institutions: Harvard Medical School, Brigham and Women's Hospital, Harvard Medical School.
Echocardiography is a widely accessible imaging modality that is commonly used to noninvasively characterize and quantify changes in cardiac structure and function. Ultrasonic assessments of cardiac tissue can include analyses of backscatter signal intensity within a given region of interest. Previously established techniques have relied predominantly on the integrated or mean value of backscatter signal intensities, which may be susceptible to variability from data aliased by low frame rates and from the time delays of algorithms based on cyclic variation. Herein, we describe an ultrasound-based imaging algorithm that extends previous methods: it can be applied to a single image frame and accounts for the full distribution of signal intensity values derived from a given myocardial sample. When applied to representative mouse and human imaging data, the algorithm distinguishes between subjects with and without exposure to chronic afterload resistance. The algorithm offers an enhanced surrogate measure of myocardial microstructure and can be performed using open-access image analysis software.
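As an illustration of a distribution-based (rather than mean-based) assessment, the following hypothetical sketch summarizes the histogram of ROI intensities; the published algorithm is more involved, and the bright-tail cutoff here is an assumption.

```python
# Hedged sketch of a distribution-based backscatter metric (illustrative only).
# Instead of the mean alone, summarize the spread and bright tail of the
# region-of-interest pixel intensities.
import statistics

def backscatter_summary(roi_pixels):
    mean = statistics.fmean(roi_pixels)
    sd = statistics.pstdev(roi_pixels)
    # Fraction of pixels in the bright tail: one surrogate for increased
    # echogenicity of fibrotic tissue (cutoff of mean + 1 SD is assumed).
    bright = sum(p > mean + sd for p in roi_pixels) / len(roi_pixels)
    return {"mean": mean, "sd": sd, "bright_fraction": bright}

roi = [10, 12, 11, 13, 12, 40, 42, 11, 12, 13]
s = backscatter_summary(roi)
assert 0.0 <= s["bright_fraction"] <= 1.0
```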
Medicine, Issue 83, echocardiography, image analysis, myocardial fibrosis, hypertension, cardiac cycle, open-access image analysis software
Reconstruction of 3-Dimensional Histology Volume and its Application to Study Mouse Mammary Glands
Authors: Rushin Shojaii, Stephanie Bacopulos, Wenyi Yang, Tigran Karavardanyan, Demetri Spyropoulos, Afshin Raouf, Anne Martel, Arun Seth.
Institutions: University of Toronto, Sunnybrook Research Institute, University of Toronto, Sunnybrook Research Institute, Medical University of South Carolina, University of Manitoba.
Histology volume reconstruction facilitates the study of 3D shape and volume change of an organ at the level of macrostructures made up of cells. It can also be used to investigate and validate novel techniques and algorithms in volumetric medical imaging and therapies. Creating 3D high-resolution atlases of different organs1,2,3 is another application of histology volume reconstruction. This provides a resource for investigating tissue structures and the spatial relationship between various cellular features. We present an image registration approach for histology volume reconstruction, which uses a set of optical blockface images. The reconstructed histology volume represents a reliable shape of the processed specimen with no propagated post-processing registration error. The Hematoxylin and Eosin (H&E) stained sections of two mouse mammary glands were registered to their corresponding blockface images using boundary points extracted from the edges of the specimen in histology and blockface images. The accuracy of the registration was visually evaluated. The alignment of the macrostructures of the mammary glands was also visually assessed at high resolution. This study delineates the different steps of this image registration pipeline, ranging from excision of the mammary gland through to 3D histology volume reconstruction. While 2D histology images reveal the structural differences between pairs of sections, 3D histology volume provides the ability to visualize the differences in shape and volume of the mammary glands.
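The boundary-point alignment step can be illustrated with a least-squares rigid (Procrustes-style) fit of matched edge points; this is a simplified stand-in for the published registration pipeline, which handles many more details of histology-to-blockface matching.

```python
# Illustrative 2D rigid registration of matched boundary points.
# Points are (x, y) pairs extracted from specimen edges in the histology
# and blockface images; complex arithmetic keeps the algebra compact.
import math, cmath

def rigid_fit(src, dst):
    """Return rotation angle (rad) and translation mapping src onto dst."""
    zs = [complex(x, y) for x, y in src]
    zd = [complex(x, y) for x, y in dst]
    cs = sum(zs) / len(zs)
    cd = sum(zd) / len(zd)
    # Optimal rotation from the cross-covariance of the centered points.
    cov = sum((d - cd) * (s - cs).conjugate() for s, d in zip(zs, zd))
    theta = cmath.phase(cov)
    rot = cmath.exp(1j * theta)
    t = cd - rot * cs
    return theta, (t.real, t.imag)

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
# The same square rotated 90 degrees about the origin.
dst = [(0, 0), (0, 1), (-1, 1), (-1, 0)]
theta, shift = rigid_fit(src, dst)
assert abs(theta - math.pi / 2) < 1e-9
assert abs(shift[0]) < 1e-9 and abs(shift[1]) < 1e-9
```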
Bioengineering, Issue 89, Histology Volume Reconstruction, Transgenic Mouse Model, Image Registration, Digital Histology, Image Processing, Mouse Mammary Gland
A Novel Stretching Platform for Applications in Cell and Tissue Mechanobiology
Authors: Dominique Tremblay, Charles M. Cuerrier, Lukasz Andrzejewski, Edward R. O'Brien, Andrew E. Pelling.
Institutions: University of Ottawa, University of Ottawa, University of Calgary, University of Ottawa, University of Ottawa.
Tools that allow the application of mechanical forces to cells and tissues or that can quantify the mechanical properties of biological tissues have contributed dramatically to the understanding of basic mechanobiology. These techniques have been extensively used to demonstrate how the onset and progression of various diseases are heavily influenced by mechanical cues. This article presents a multi-functional biaxial stretching (BAXS) platform that can either mechanically stimulate single cells or quantify the mechanical stiffness of tissues. The BAXS platform consists of four voice coil motors that can be controlled independently. Single cells can be cultured on a flexible substrate that can be attached to the motors allowing one to expose the cells to complex, dynamic, and spatially varying strain fields. Conversely, by incorporating a force load cell, one can also quantify the mechanical properties of primary tissues as they are exposed to deformation cycles. In both cases, a proper set of clamps must be designed and mounted to the BAXS platform motors in order to firmly hold the flexible substrate or the tissue of interest. The BAXS platform can be mounted on an inverted microscope to perform simultaneous transmitted light and/or fluorescence imaging to examine the structural or biochemical response of the sample during stretching experiments. This article provides experimental details of the design and usage of the BAXS platform and presents results for single cell and whole tissue studies. The BAXS platform was used to measure the deformation of nuclei in single mouse myoblast cells in response to substrate strain and to measure the stiffness of isolated mouse aortas. The BAXS platform is a versatile tool that can be combined with various optical microscopies in order to provide novel mechanobiological insights at the sub-cellular, cellular and whole tissue levels.
Bioengineering, Issue 88, cell stretching, tissue mechanics, nuclear mechanics, uniaxial, biaxial, anisotropic, mechanobiology
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
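The torque measurement described above rests on standard equipartition analysis of the bead's angular fluctuations: the angular trap stiffness follows from the variance of the angle, and the molecular torque from the shift of the mean angle after twisting. The numbers below are illustrative, not data from the paper.

```python
# Sketch of torque estimation in magnetic torque tweezers:
# k_theta = kB*T / var(theta) by equipartition, and
# tau = k_theta * (mean angle shift after twisting).
import statistics

KB_T = 4.1e-21  # thermal energy at room temperature, joules (~4.1 pN nm)

def torque_from_angles(theta_ref, theta_twisted):
    """Estimate molecular torque (N*m) from two angle traces (rad)."""
    k_theta = KB_T / statistics.pvariance(theta_ref)
    dtheta = statistics.fmean(theta_twisted) - statistics.fmean(theta_ref)
    return k_theta * dtheta

ref = [0.00, 0.02, -0.02, 0.01, -0.01]   # rad, untwisted trace
twisted = [t + 0.05 for t in ref]        # mean shifted by 0.05 rad
tau = torque_from_angles(ref, twisted)
assert tau > 0                           # positive twist stores positive torque
```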
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Authors: Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu.
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing use of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual spheroid, and then outputs the results in two different spreadsheet forms for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis engine, adapted for large numbers of images, which provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computed results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary “Manual Initialize” and “Hand Draw” tools give SpheroidSizer flexibility in dealing with various types of spheroids and images of diverse quality. This high-throughput image analysis software markedly reduces labor and speeds up the analysis process.
Implementing this software can help make 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
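The volume calculation can be illustrated by treating each spheroid as an ellipsoid of revolution about its fitted major axis; whether SpheroidSizer uses exactly this convention is not stated here.

```python
# Volume from the fitted major/minor axial lengths, assuming an ellipsoid
# of revolution about the major axis: V = (pi/6) * L * W**2,
# where L and W are the full major and minor axis lengths.
import math

def spheroid_volume(major_len, minor_len):
    """Volume of an ellipsoid of revolution from full axis lengths."""
    return math.pi / 6.0 * major_len * minor_len ** 2

# Sanity check: a perfect sphere of diameter 100 um.
v = spheroid_volume(100.0, 100.0)
assert abs(v - 4.0 / 3.0 * math.pi * 50.0 ** 3) < 1e-6
```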
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding on an approach to segmentation. The six 3D ultrastructural data sets presented here were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam and serial block-face scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
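The triage idea can be sketched as a simple decision function routing a data set to one of the four approaches; the criteria names and thresholds below are illustrative, not the authors' exact scheme.

```python
# Hedged sketch of a segmentation triage for 3D EM volumes.
# Inputs: signal-to-noise ratio, whether features have easily recognized
# characteristic shapes, and the fraction of the volume they occupy.
# All cutoffs are invented for illustration.

def triage(snr, has_characteristic_shape, feature_fraction):
    """Pick one of the four categorical segmentation approaches."""
    if snr < 1.5 and not has_characteristic_shape:
        return "manual model building"
    if snr < 3.0:
        return "manual tracing + surface rendering"
    if has_characteristic_shape and feature_fraction < 0.1:
        return "automated custom algorithm + quantification"
    return "semi-automated segmentation + surface rendering"

assert triage(1.0, False, 0.5) == "manual model building"
assert triage(5.0, True, 0.05) == "automated custom algorithm + quantification"
```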
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, because EEG recordings are easy to apply and low in cost, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans, or age-specific head models, to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
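The source analysis named in the keywords is minimum-norm estimation, s_hat = L^T (L L^T + lambda*I)^(-1) x, where L is the lead field derived from the (age-appropriate) head model and x the sensor data. A toy, self-contained sketch with made-up matrices follows; real pipelines use dedicated EEG packages.

```python
# Minimum-norm estimate on a tiny toy problem (2 sensors, 3 sources).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, y):
    """Solve A v = y by Gauss-Jordan elimination (A small, well-conditioned)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def minimum_norm(L, x, lam):
    """s_hat = L^T (L L^T + lam*I)^-1 x."""
    LLt = matmul(L, transpose(L))
    for i in range(len(LLt)):
        LLt[i][i] += lam
    w = solve(LLt, x)
    return [sum(L[k][j] * w[k] for k in range(len(w)))
            for j in range(len(L[0]))]

L = [[1.0, 0.0, 0.5],   # toy lead field: sensors x sources
     [0.0, 1.0, 0.5]]
x = [1.0, 0.0]          # sensor measurement
s = minimum_norm(L, x, 1e-6)
# The minimum-norm estimate explains the data: L s ~ x.
recon = [sum(L[i][j] * s[j] for j in range(3)) for i in range(2)]
assert abs(recon[0] - 1.0) < 1e-3 and abs(recon[1]) < 1e-3
```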
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. 
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
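The binarize-and-skeletonize quantification can be caricatured on a toy grid with standard-library code only; the protocol itself uses dedicated open-access imaging tools, and the branch-point rule below is a deliberately naive stand-in.

```python
# Schematic TATS network quantification: threshold the staining, then
# flag candidate branch points as foreground pixels with 3+ foreground
# neighbors (8-connectivity).

def binarize(img, thr):
    return [[1 if v >= thr else 0 for v in row] for row in img]

def branch_points(binary):
    h, w = len(binary), len(binary[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if not binary[y][x]:
                continue
            n = sum(binary[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w)
            if n >= 3:
                pts.append((y, x))
    return pts

img = [[0, 9, 0],
       [9, 9, 9],
       [0, 9, 0]]   # a one-pixel-wide cross; the center is a branch point
b = binarize(img, 5)
assert (1, 1) in branch_points(b)
```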
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Authors: Sandra Zehentmeier, Zoltan Cseresnyes, Juan Escribano Navarro, Raluca A. Niesner, Anja E. Hauser.
Institutions: German Rheumatism Research Center, a Leibniz Institute, German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for analyzing the localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization of two cells results from random positioning, especially when the cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow is presented, spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells. This also includes a neighborhood analysis, to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether the co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool, suitable for testing this hypothesis for hematopoietic as well as stromal cells, is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
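The random-positioning test can be sketched as a generic Monte Carlo permutation: compare the observed number of contacts against contact counts obtained after placing one cell type at random positions. This is illustrative logic, not the implementation used in the paper.

```python
# Monte Carlo test for whether cell-cell contacts exceed chance.
import random

def count_contacts(cells_a, cells_b, radius):
    """Count pairs (a, b) closer than the contact radius."""
    return sum(1 for ax, ay in cells_a for bx, by in cells_b
               if (ax - bx) ** 2 + (ay - by) ** 2 <= radius ** 2)

def contact_p_value(cells_a, cells_b, radius, width, height,
                    n_sim=1000, seed=0):
    rng = random.Random(seed)
    observed = count_contacts(cells_a, cells_b, radius)
    hits = 0
    for _ in range(n_sim):
        shuffled = [(rng.uniform(0, width), rng.uniform(0, height))
                    for _ in cells_a]
        if count_contacts(shuffled, cells_b, radius) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)   # how often chance does at least as well

# Clustered toy data: every A cell sits next to a B cell.
a = [(10, 10), (50, 50), (90, 90)]
b = [(11, 10), (51, 50), (91, 90)]
p = contact_p_value(a, b, radius=5, width=100, height=100)
assert p < 0.05   # association unlikely to arise from random positioning
```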
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
Preparation and Reactivity of Gasless Nanostructured Energetic Materials
Authors: Khachatur V. Manukyan, Christopher E. Shuck, Alexander S. Rogachev, Alexander S. Mukasyan.
Institutions: University of Notre Dame, University of Notre Dame, National University of Science and Technology, "MISIS".
High-Energy Ball Milling (HEBM) is a ball milling process in which a powder mixture placed in the ball mill is subjected to high-energy collisions from the balls. Among other applications, it is a versatile technique that allows for effective preparation of gasless reactive nanostructured materials with high energy density per volume (Ni+Al, Ta+C, Ti+C). The structural transformations of the reactive media that take place during HEBM define the reaction mechanism in the produced energetic composites. Varying the processing conditions permits fine tuning of the milling-induced microstructures of the fabricated composite particles. In turn, the reactivity of high-energy-density materials, i.e., their self-ignition temperature, ignition delay time, and reaction kinetics, depends on their microstructure. Analysis of the milling-induced microstructures suggests that the formation of fresh, oxygen-free, intimate high-surface-area contacts between the reagents is responsible for the enhancement of their reactivity. This manifests itself in a reduction of ignition temperature and delay time, an increased rate of chemical reaction, and an overall decrease of the effective activation energy of the reaction. The protocol provides a detailed description of the preparation of reactive nanocomposites with tailored microstructure using the short-term HEBM method. It also describes a high-speed thermal imaging technique to determine the ignition/combustion characteristics of the energetic materials. The protocol can be adapted to the preparation and characterization of a variety of nanostructured energetic composites.
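The effective activation energy mentioned above is conventionally extracted from ignition delay times by an Arrhenius fit, ln(delay) vs 1/T; a reduced fitted Ea is the signature of milling-enhanced reactivity. The data below are synthetic, generated for illustration.

```python
# Arrhenius analysis of ignition delay times (illustrative data only).
import math

R = 8.314  # gas constant, J/(mol K)

def activation_energy(temps_K, delays_s):
    """Least-squares slope of ln(delay) vs 1/T gives Ea/R."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(d) for d in delays_s]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * R  # J/mol

# Synthetic delays generated with Ea = 120 kJ/mol.
temps = [800.0, 900.0, 1000.0]
delays = [math.exp(120e3 / (R * T)) * 1e-9 for T in temps]
ea = activation_energy(temps, delays)
assert abs(ea - 120e3) < 1.0   # the fit recovers the generating Ea
```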
Engineering, Issue 98, Reactive composites, Energetic materials, High-Energy Ball Milling, Gasless Combustion, Ignition, Reactivity Enhancement
Facial Transplants in Xenopus laevis Embryos
Authors: Laura A. Jacox, Amanda J. Dickinson, Hazel Sive.
Institutions: Harvard University, Massachusetts Institute of Technology, Massachusetts Institute of Technology, Virginia Commonwealth University.
Craniofacial birth defects occur in 1 out of every 700 live births, but etiology is rarely known due to limited understanding of craniofacial development. To identify where signaling pathways and tissues act during patterning of the developing face, a 'face transplant' technique has been developed in embryos of the frog Xenopus laevis. A region of presumptive facial tissue (the "Extreme Anterior Domain" (EAD)) is removed from a donor embryo at tailbud stage, and transplanted to a host embryo of the same stage, from which the equivalent region has been removed. This can be used to generate a chimeric face where the host or donor tissue has a loss or gain of function in a gene, and/or includes a lineage label. After healing, the outcome of development is monitored, and indicates roles of the signaling pathway within the donor or surrounding host tissues. Xenopus is a valuable model for face development, as the facial region is large and readily accessible for micromanipulation. Many embryos can be assayed, over a short time period since development occurs rapidly. Findings in the frog are relevant to human development, since craniofacial processes appear conserved between Xenopus and mammals.
Developmental Biology, Issue 85, craniofacial development, neural crest, Mouth, Nostril, transplantation, Xenopus
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we describe here the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize the results. We demonstrate the use of PAFP and PSFP expression to image two protein species in fixed cells.
Extension of the technique to living cells is also described.
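The ~10-30 nm precision quoted above can be estimated per molecule with the widely used Thompson-Larson-Webb approximation; localization software applies refinements of it, and the parameter values here are merely typical, not taken from the paper.

```python
# Per-molecule localization precision (Thompson-Larson-Webb approximation):
# sigma^2 = s^2/N + a^2/(12N) + 8*pi*s^4*b^2/(a^2*N^2)
import math

def localization_precision(s, a, N, b):
    """s: PSF std dev (nm); a: pixel size (nm); N: photons; b: background noise."""
    return math.sqrt(s ** 2 / N
                     + a ** 2 / (12 * N)
                     + 8 * math.pi * s ** 4 * b ** 2 / (a ** 2 * N ** 2))

# Typical single-fluorophore numbers (illustrative).
sigma = localization_precision(s=130.0, a=100.0, N=500, b=10.0)
assert 5.0 < sigma < 40.0   # tens of nm, far below the ~250 nm diffraction limit
```

More photons tighten the estimate, which is why bright, well-separated single-molecule events are essential for FPALM image quality.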
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than usual, by building it up molecule by molecule, there are significant challenges for users trying to optimize their image acquisition. To aid this process and to gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
X-ray Dose Reduction through Adaptive Exposure in Fluoroscopic Imaging
Authors: Steve Burion, Tobias Funk.
Institutions: Triple Ring Technologies.
X-ray fluoroscopy is widely used for image guidance during cardiac intervention. However, radiation dose in these procedures can be high, and this is a significant concern, particularly in pediatric applications. Pediatric procedures are in general much more complex than those performed on adults and thus are on average four to eight times longer1. Furthermore, children can undergo up to 10 fluoroscopic procedures by the age of 10, and have been shown to have a three-fold higher lifetime risk of developing fatal cancer than the general population2,3. We have shown that radiation dose can be significantly reduced in adult cardiac procedures by using our scanning beam digital x-ray (SBDX) system4, a fluoroscopic imaging system that employs an inverse imaging geometry5,6 (Figure 1, Movie 1 and Figure 2). Instead of a single focal spot and an extended detector as used in conventional systems, our approach utilizes an extended X-ray source with multiple focal spots focused on a small detector. Our X-ray source consists of a scanning electron beam sequentially illuminating up to 9,000 focal spot positions. Each focal spot projects a small portion of the imaging volume onto the detector. In contrast to a conventional system where the final image is directly projected onto the detector, the SBDX uses a dedicated algorithm to reconstruct the final image from the 9,000 detector images. For pediatric applications, dose savings with the SBDX system are expected to be smaller than in adult procedures. However, the SBDX system allows for additional dose savings by implementing an electronic adaptive exposure technique. Key to this method is the multi-beam scanning technique of the SBDX system: rather than exposing every part of the image with the same radiation dose, we can dynamically vary the exposure depending on the opacity of the region exposed. Therefore, we can significantly reduce exposure in radiolucent areas and maintain exposure in more opaque regions.
In our current implementation, the adaptive exposure requires user interaction (Figure 3). However, in the future, the adaptive exposure will be real time and fully automatic. We have performed experiments with an anthropomorphic phantom and compared measured radiation dose with and without adaptive exposure using a dose area product (DAP) meter. In the experiment presented here, we find a dose reduction of 30%.
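The principle of the electronic adaptive exposure can be sketched numerically: scale each focal spot's exposure by the opacity of the region it illuminates, with a floor so that radiolucent areas still receive enough signal for imaging. The opacity values and the floor below are hypothetical, chosen only to illustrate how region-dependent exposure produces a net dose saving relative to uniform exposure (not the 30% measured with the phantom):

```python
def adaptive_dose(opacities, full_dose=1.0, floor=0.3):
    """Per-beam dose: full dose for opaque regions, scaled down for
    radiolucent ones, but never below a minimum floor needed to keep
    an acceptable signal level."""
    return [full_dose * max(floor, o) for o in opacities]

# Four beams: two over opaque anatomy, two over radiolucent regions.
uniform = [1.0] * 4
adapted = adaptive_dose([1.0, 0.9, 0.2, 0.1])
savings = 1 - sum(adapted) / sum(uniform)
print(f"dose reduction: {savings:.0%}")
```

In the real system this per-beam modulation runs across up to 9,000 focal spot positions per frame rather than four.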
Bioengineering, Issue 55, Scanning digital X-ray, fluoroscopy, pediatrics, interventional cardiology, adaptive exposure, dose savings
How to Measure Cortical Folding from MR Images: a Step-by-Step Tutorial to Compute Local Gyrification Index
Authors: Marie Schaer, Meritxell Bach Cuadra, Nick Schmansky, Bruce Fischl, Jean-Philippe Thiran, Stephan Eliez.
Institutions: University of Geneva School of Medicine, École Polytechnique Fédérale de Lausanne, University Hospital Center and University of Lausanne, Massachusetts General Hospital.
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues2, several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry3. These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface4. The curvature is however not straightforward to comprehend, as it remains unclear whether there is any direct relationship between the curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index5, a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name local Gyrification Index (lGI1), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion6, our method was specifically designed to identify early defects of cortical development.
In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as a part of the FreeSurfer Software (Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated reconstruction tools of the brain's cortical surface from structural MRI data. The cortical surface extracted in the native space of the images with sub-millimeter accuracy is then further used for the creation of an outer surface, which will serve as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study1. This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues7, where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidian distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
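At its core, the lGI at each point is a simple area ratio between the full (pial) cortical surface inside the region of interest and the matching patch of the smooth outer surface. A sketch with hypothetical areas (in the real pipeline both areas are extracted from the FreeSurfer surface meshes):

```python
def local_gyrification_index(pial_area_mm2: float, outer_area_mm2: float) -> float:
    """lGI = (cortical surface area inside the ROI, including cortex
    buried in sulcal folds) / (area of the matching ROI on the smooth
    outer hull). A value near 1 indicates little folding; typical adult
    human cortex yields values of roughly 2-4."""
    if outer_area_mm2 <= 0:
        raise ValueError("outer-surface ROI area must be positive")
    return pial_area_mm2 / outer_area_mm2

# A heavily folded ROI: three times more cortex than visible hull area.
print(local_gyrification_index(2400.0, 800.0))
```

Repeating this ratio over overlapping circular ROIs across the whole hemisphere yields the cortical gyrification maps used for the statistical comparisons.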
Medicine, Issue 59, neuroimaging, brain, cortical complexity, cortical development
Harvesting and Cryo-cooling Crystals of Membrane Proteins Grown in Lipidic Mesophases for Structure Determination by Macromolecular Crystallography
Authors: Dianfan Li, Coilín Boland, David Aragao, Kilian Walsh, Martin Caffrey.
Institutions: Trinity College Dublin .
An important route to understanding how proteins function at a mechanistic level is to have the structure of the target protein available, ideally at atomic resolution. Presently, there is only one way to capture such information as applied to integral membrane proteins (Figure 1), and the complexes they form, and that method is macromolecular X-ray crystallography (MX). To do MX, diffraction-quality crystals are needed which, in the case of membrane proteins, do not form readily. A method for crystallizing membrane proteins that involves the use of lipidic mesophases, specifically the cubic and sponge phases1-5, has gained considerable attention of late due to the successes it has had in the G protein-coupled receptor field6-21. However, the method, henceforth referred to as the in meso or lipidic cubic phase method, comes with its own technical challenges. These arise, in part, due to the generally viscous and sticky nature of the lipidic mesophase in which the crystals, which are often micro-crystals, grow. Manipulating crystals becomes difficult as a result, particularly during harvesting22,23. Problems arise too at the step that precedes harvesting, which requires that the glass sandwich plates in which the crystals grow (Figure 2)24,25 are opened to expose the mesophase bolus, and the crystals therein, for harvesting, cryo-cooling and eventual X-ray diffraction data collection. The cubic and sponge mesophase variants (Figure 3) from which crystals must be harvested have profoundly different rheologies4,26. The cubic phase is viscous and sticky, akin to thick toothpaste. By contrast, the sponge phase is more fluid with a distinct tendency to flow. Accordingly, different approaches are called for when opening crystallization wells containing crystals growing in the cubic and the sponge phase, as indeed different methods are required for harvesting crystals from the two mesophase types.
Protocols for doing just that have been refined and implemented in the Membrane Structural and Functional Biology (MS&FB) Group, and are described in detail in this JoVE article (Figure 4). Examples are given of situations where crystals are successfully harvested and cryo-cooled. We also provide examples of cases where problems arise that lead to the irretrievable loss of crystals, and describe how these problems can be avoided. In this article, the viewer is provided with step-by-step instructions for opening glass sandwich crystallization wells, and for harvesting and cryo-cooling crystals of membrane proteins growing in cubic and in sponge phases.
Materials Science, Issue 67, crystallization, glass sandwich plates, GPCR, harvesting, in meso, LCP, lipidic mesophases, macromolecular X-ray crystallography, membrane protein
Co-analysis of Brain Structure and Function using fMRI and Diffusion-weighted Imaging
Authors: Jeffrey S. Phillips, Adam S. Greenberg, John A. Pyles, Sudhir K. Pathak, Marlene Behrmann, Walter Schneider, Michael J. Tarr.
Institutions: Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University , University of Pittsburgh.
The study of complex computational systems is facilitated by network maps, such as circuit diagrams. Such mapping is particularly informative when studying the brain, as the functional role that a brain area fulfills may be largely defined by its connections to other brain areas. In this report, we describe a novel, non-invasive approach for relating brain structure and function using magnetic resonance imaging (MRI). This approach, a combination of structural imaging of long-range fiber connections and functional imaging data, is illustrated in two distinct cognitive domains, visual attention and face perception. Structural imaging is performed with diffusion-weighted imaging (DWI) and fiber tractography, which track the diffusion of water molecules along white-matter fiber tracts in the brain (Figure 1). By visualizing these fiber tracts, we are able to investigate the long-range connective architecture of the brain. The results compare favorably with one of the most widely-used techniques in DWI, diffusion tensor imaging (DTI). DTI is unable to resolve complex configurations of fiber tracts, limiting its utility for constructing detailed, anatomically-informed models of brain function. In contrast, our analyses reproduce known neuroanatomy with precision and accuracy. This advantage is partly due to data acquisition procedures: while many DTI protocols measure diffusion in a small number of directions (e.g., 6 or 12), we employ a diffusion spectrum imaging (DSI)1, 2 protocol which assesses diffusion in 257 directions and at a range of magnetic gradient strengths. Moreover, DSI data allow us to use more sophisticated methods for reconstructing acquired data. In two experiments (visual attention and face perception), tractography reveals that co-active areas of the human brain are anatomically connected, supporting extant hypotheses that they form functional networks. 
DWI allows us to create a "circuit diagram" and reproduce it on an individual-subject basis, for the purpose of monitoring task-relevant brain activity in networks of interest.
Neuroscience, Issue 69, Molecular Biology, Anatomy, Physiology, tractography, connectivity, neuroanatomy, white matter, magnetic resonance imaging, MRI
Echo Particle Image Velocimetry
Authors: Nicholas DeMarchi, Christopher White.
Institutions: University of New Hampshire.
The transport of mass, momentum, and energy in fluid flows is ultimately determined by spatiotemporal distributions of the fluid velocity field.1 Consequently, a prerequisite for understanding, predicting, and controlling fluid flows is the capability to measure the velocity field with adequate spatial and temporal resolution.2 For velocity measurements in optically opaque fluids or through optically opaque geometries, echo particle image velocimetry (EPIV) is an attractive diagnostic technique to generate "instantaneous" two-dimensional fields of velocity.3,4,5,6 In this paper, the operating protocol for an EPIV system built by integrating a commercial medical ultrasound machine7 with a PC running commercial particle image velocimetry (PIV) software8 is described, and validation measurements in Hagen-Poiseuille (i.e., laminar pipe) flow are reported. For the EPIV measurements, a phased array probe connected to the medical ultrasound machine is used to generate a two-dimensional ultrasound image by pulsing the piezoelectric probe elements at different times. Each probe element transmits an ultrasound pulse into the fluid, and tracer particles in the fluid (either naturally occurring or seeded) reflect ultrasound echoes back to the probe where they are recorded. The amplitude of the reflected ultrasound waves and their time delay relative to transmission are used to create what is known as B-mode (brightness mode) two-dimensional ultrasound images. Specifically, the time delay is used to determine the position of the scatterer in the fluid and the amplitude is used to assign intensity to the scatterer. The time required to obtain a single B-mode image, δt, is determined by the time it takes to pulse all the elements of the phased array probe. For acquiring multiple B-mode images, the frame rate of the system in frames per second (fps) = 1/δt. (See 9 for a review of ultrasound imaging.)
For a typical EPIV experiment, the frame rate is between 20-60 fps, depending on flow conditions, and 100-1000 B-mode images of the spatial distribution of the tracer particles in the flow are acquired. Once acquired, the B-mode ultrasound images are transmitted via an ethernet connection to the PC running the commercial PIV software. Using the PIV software, tracer particle displacement fields, D(x,y)[pixels], (where x and y denote horizontal and vertical spatial position in the ultrasound image, respectively) are acquired by applying cross correlation algorithms to successive ultrasound B-mode images.10 The velocity fields, u(x,y)[m/s], are determined from the displacement fields, knowing the time step between image pairs, ΔT[s], and the image magnification, M[meter/pixel], i.e., u(x,y) = MD(x,y)/ΔT. The time step between images ΔT = 1/fps + D(x,y)/B, where B[pixels/s] is the rate at which the ultrasound beam sweeps across the image width. In the present study, M = 77[μm/pixel], fps = 49.5[1/s], and B = 25,047[pixels/s]. Once acquired, the velocity fields can be analyzed to compute flow quantities of interest.
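The velocity computation above can be sketched directly from the stated relations and constants; the 10-pixel displacement used in the example is hypothetical:

```python
def epiv_velocity(displacement_px: float, M_m_per_px: float = 77e-6,
                  fps: float = 49.5, B_px_per_s: float = 25047.0) -> float:
    """Velocity from a PIV displacement using the relations in the text:
    dT = 1/fps + D/B   (frame interval plus the probe sweep correction)
    u  = M * D / dT
    Default constants are the values quoted for the present study."""
    dT = 1.0 / fps + displacement_px / B_px_per_s
    return M_m_per_px * displacement_px / dT

# e.g. a 10-pixel displacement between successive B-mode images:
u = epiv_velocity(10.0)
print(f"{u * 1000:.2f} mm/s")
```

Note that the sweep correction D/B makes ΔT weakly displacement-dependent, so the mapping from pixels to m/s is not a single fixed scale factor.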
Mechanical Engineering, Issue 70, Physics, Engineering, Physical Sciences, Ultrasound, cross correlation, velocimetry, opaque fluids, particle, flow, fluid, EPIV
Molecular Beam Mass Spectrometry With Tunable Vacuum Ultraviolet (VUV) Synchrotron Radiation
Authors: Amir Golan, Musahid Ahmed.
Institutions: Lawrence Berkeley National Laboratory.
Tunable soft ionization coupled to mass spectrometry is a powerful method to investigate isolated molecules, complexes and clusters and their spectroscopy and dynamics1-4. Fundamental studies of photoionization processes of biomolecules provide information about the electronic structure of these systems. Furthermore, determining ionization energies and other properties of biomolecules in the gas phase is not trivial, and these experiments provide a platform to generate these data. We have developed a thermal vaporization technique coupled with supersonic molecular beams that provides a gentle way to transport these species into the gas phase. Judicious combination of source gas and temperature allows for formation of dimers and higher clusters of the DNA bases. The focus of this particular work is on the effects of non-covalent interactions, i.e., hydrogen bonding, stacking, and electrostatic interactions, on the ionization energies and proton transfer of individual biomolecules, their complexes and upon micro-hydration by water1, 5-9. We have performed experimental and theoretical characterization of the photoionization dynamics of gas-phase uracil and 1,3-dimethyluracil dimers using molecular beams coupled with synchrotron radiation at the Chemical Dynamics Beamline10 located at the Advanced Light Source, and the experimental details are visualized here. This allowed us to observe the proton transfer in 1,3-dimethyluracil dimers, a system with pi stacking geometry and with no hydrogen bonds1. Molecular beams provide a very convenient and efficient way to isolate the sample of interest from environmental perturbations, which in turn allows accurate comparison with electronic structure calculations11, 12. By tuning the photon energy from the synchrotron, a photoionization efficiency (PIE) curve can be plotted which informs us about the cationic electronic states.
These values can then be compared to theoretical models and calculations, which in turn explain in detail the electronic structure and dynamics of the investigated species1,3.
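As a toy illustration of how a PIE curve yields an appearance (ionization) energy, one can take the first photon energy at which the ion signal clears a noise threshold. The data and threshold below are synthetic; in practice the onset is extracted more robustly by fitting the shape of the PIE curve:

```python
def ionization_onset(energies_eV, signal, threshold=0.05):
    """Crude appearance-energy estimate from a photoionization efficiency
    (PIE) curve: the first photon energy where the ion signal rises above
    a noise threshold. Assumes energies are scanned in increasing order."""
    for E, s in zip(energies_eV, signal):
        if s > threshold:
            return E
    return None  # no onset within the scanned range

# Synthetic PIE scan: flat noise, then a rising ion yield.
E = [8.8, 9.0, 9.2, 9.4, 9.6]
S = [0.00, 0.01, 0.02, 0.30, 0.80]
print(ionization_onset(E, S))
```

Shifts of this onset between monomers, dimers, and micro-hydrated clusters are what reveal the effect of the non-covalent interactions discussed above.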
Physics, Issue 68, mass spectroscopy (application), physical chemistry, radiation chemistry, molecular beams, molecular physics, molecular structure, photon interactions with atoms and molecules, Molecular beam, mass spectrometry, vacuum ultraviolet, synchrotron radiation, proton transfer, DNA bases, clusters
Characterization of Surface Modifications by White Light Interferometry: Applications in Ion Sputtering, Laser Ablation, and Tribology Experiments
Authors: Sergey V. Baryshev, Robert A. Erck, Jerry F. Moore, Alexander V. Zinovev, C. Emil Tripa, Igor V. Veryovkin.
Institutions: Argonne National Laboratory, Argonne National Laboratory, MassThink LLC.
In materials science and engineering it is often necessary to obtain quantitative measurements of surface topography with micrometer lateral resolution. From the measured surface, 3D topographic maps can be subsequently analyzed using a variety of software packages to extract the information that is needed. In this article we describe how white light interferometry, and optical profilometry (OP) in general, combined with generic surface analysis software, can be used for materials science and engineering tasks. A number of applications of white light interferometry for the investigation of surface modifications in mass spectrometry, and of wear phenomena in tribology and lubrication, are demonstrated. We characterize the products of the interaction of semiconductors and metals with energetic ions (sputtering) and laser irradiation (ablation), as well as ex situ measurements of wear of tribological test specimens. Specifically, we discuss: (i) aspects of traditional ion sputtering-based mass spectrometry, such as sputter rate/yield measurements on Si and Cu and the subsequent time-to-depth conversion; (ii) quantitative characterization of the interaction of femtosecond laser irradiation with a semiconductor surface, which is important for applications such as ablation mass spectrometry, where the quantity of evaporated material can be studied and controlled via pulse duration and energy per pulse, so that determining the crater geometry defines the depth and lateral resolution for a given experimental setup; and (iii) measurements of surface roughness parameters in two dimensions, and quantitative measurements of the surface wear that occurs as a result of friction and wear tests. Some inherent drawbacks, possible artifacts, and uncertainty assessments of the white light interferometry approach are also discussed and explained.
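In its simplest form, the time-to-depth conversion mentioned above is a linear rescaling by the measured sputter rate: profilometry gives the final crater depth, and assuming a constant rate maps any intermediate sputter time to a depth. A sketch with hypothetical numbers (real conversions must account for rate changes across layer interfaces):

```python
def time_to_depth(sputter_time_s: float, crater_depth_nm: float,
                  total_sputter_time_s: float) -> float:
    """Linear time-to-depth conversion: the final crater depth (measured
    by optical profilometry) divided by the total sputter time gives a
    constant sputter rate, which maps sputter time to depth."""
    rate_nm_per_s = crater_depth_nm / total_sputter_time_s
    return rate_nm_per_s * sputter_time_s

# A 600 nm deep crater after 600 s of sputtering -> 1 nm/s; after 120 s
# the signal originated from ~120 nm below the original surface.
print(time_to_depth(120.0, crater_depth_nm=600.0, total_sputter_time_s=600.0))
```

This is where the profilometric crater measurement feeds directly back into the depth scale of a sputter depth profile.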
Materials Science, Issue 72, Physics, Ion Beams (nuclear interactions), Light Reflection, Optical Properties, Semiconductor Materials, White Light Interferometry, Ion Sputtering, Laser Ablation, Femtosecond Lasers, Depth Profiling, Time-of-flight Mass Spectrometry, Tribology, Wear Analysis, Optical Profilometry, wear, friction, atomic force microscopy, AFM, scanning electron microscopy, SEM, imaging, visualization
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
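The oriented-texture analysis begins with a bank of Gabor filters at different orientations. As a point of reference, the real part of a generic 2D Gabor filter is a cosine wave along a rotated axis under a Gaussian envelope; the sketch below uses a textbook isotropic form with hypothetical parameters, not the authors' specific filter bank:

```python
import math

def gabor_real(x: float, y: float, theta: float,
               wavelength: float, sigma: float) -> float:
    """Real part of a 2D Gabor filter oriented at angle theta: a cosine
    modulation along the rotated x-axis under an isotropic Gaussian
    envelope. Convolving an image with such kernels at several theta
    values yields a per-pixel estimate of local texture orientation."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr * xr + yr * yr) / (2.0 * sigma * sigma))
    return envelope * math.cos(2.0 * math.pi * xr / wavelength)

# The kernel peaks at the origin and oscillates along the filter axis:
print(gabor_real(0.0, 0.0, theta=0.0, wavelength=8.0, sigma=4.0))
```

The orientation field produced by such a filter bank is what the subsequent phase-portrait analysis examines for node-like radiating patterns.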
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis, yielding a dopamine movie, is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis, temporal variation, is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique comprises five main steps: preprocessing, modeling, statistical comparison, masking, and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Authors: Laura Ekstrand, Nikolaus Karpinsky, Yajun Wang, Song Zhang.
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera’s field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation. Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
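The phase recovery at the heart of the three-pattern approach can be sketched as follows. This is the generic three-step phase-shifting formula for patterns shifted by 2π/3, not the authors' specific calibration pipeline; depth is subsequently obtained from the unwrapped phase by triangulation against the system calibration:

```python
import math

def wrapped_phase(I1: float, I2: float, I3: float) -> float:
    """Wrapped phase at one pixel from three fringe images whose
    sinusoidal patterns are shifted by 2*pi/3 (standard three-step
    phase-shifting formula). Phase unwrapping and triangulation then
    convert this phase to depth."""
    return math.atan2(math.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic check: fringe intensities I_k = A + B*cos(phi + 2*pi*k/3)
# for k = -1, 0, 1 recover the encoded phase phi.
A, B, phi = 0.5, 0.4, 0.7
I = [A + B * math.cos(phi + 2 * math.pi * k / 3) for k in (-1, 0, 1)]
print(round(wrapped_phase(*I), 6))
```

Because this arctangent is evaluated independently per pixel, it maps naturally onto the GPU, which is part of how the 30 Hz rates quoted above are reached.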
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Authors: Christopher J. Zopf, Narendra Maheshri.
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
Hybrid µCT-FMT imaging and image analysis
Authors: Felix Gremse, Dennis Doleschel, Sara Zafarnia, Anne Babler, Willi Jahnen-Dechent, Twan Lammers, Wiltrud Lederle, Fabian Kiessling.
Institutions: RWTH Aachen University, RWTH Aachen University, Utrecht University.
Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints.
Bioengineering, Issue 100, Fluorescence-mediated Tomography, Computed Tomography, Image Segmentation, Multimodal Imaging, Image Analysis, Hybrid Imaging, Biodistribution, Diffuse Optical Tomography
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation to the abstract.