JoVE Visualize
PubMed Article
Visualization of genomic changes by segmented smoothing using an L0 penalty.
PLoS ONE
Copy number variations (CNV) and allelic imbalance in tumor tissue can show strong segmentation. Their graphical presentation can be enhanced by appropriate smoothing. Existing signal and scatterplot smoothers do not respect segmentation well. We present novel algorithms that use a penalty on the L0 norm of differences of neighboring values. Visualization is our main goal, but we compare classification performance to that of VEGA.
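To illustrate the idea of an L0 difference penalty: the L0 norm counts the number of non-zero jumps between neighbors, which makes it non-differentiable, so a common computational surrogate is to iteratively reweight a quadratic difference penalty until it behaves like a jump count. The Python sketch below follows that generic surrogate; the weight lam, tolerance eps, and iteration count are illustrative choices, not the paper's settings.

import numpy as np

def l0_segment_smooth(y, lam=10.0, eps=1e-8, n_iter=50):
    """Approximate L0-penalized smoothing of a 1-D signal.

    Targets ||y - mu||^2 + lam * ||D mu||_0, with D the first-difference
    operator, via iteratively reweighted ridge fits: the weights
    w_i = 1 / ((D mu)_i^2 + eps) make the quadratic penalty mimic a
    count of non-zero jumps."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1, n) difference matrix
    mu = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        w = 1.0 / (np.diff(mu) ** 2 + eps)   # adaptive weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        mu = np.linalg.solve(A, y)
    return mu

# Toy example: a noisy two-segment signal becomes piecewise constant
rng = np.random.default_rng(0)
y = np.r_[np.full(50, 1.0), np.full(50, 3.0)] + 0.2 * rng.standard_normal(100)
mu = l0_segment_smooth(y)

With a large enough penalty weight, most neighboring differences are driven essentially to zero, yielding the segmented, piecewise-constant appearance the abstract describes.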
ABSTRACT
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole-cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
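The quantification package described here is MATLAB-based; as a language-neutral illustration of the final analysis step, the short Python sketch below estimates a smoothed trace and its first and second time derivatives with a Savitzky-Golay filter. The window length and polynomial order are illustrative choices, not the package's actual parameters.

import numpy as np
from scipy.signal import savgol_filter

def trace_derivatives(trace, dt, window=11, polyorder=3):
    """Smooth a single-cell fluorescence time series and estimate its
    first and second time derivatives (Savitzky-Golay filtering)."""
    smooth = savgol_filter(trace, window, polyorder)
    d1 = savgol_filter(trace, window, polyorder, deriv=1, delta=dt)
    d2 = savgol_filter(trace, window, polyorder, deriv=2, delta=dt)
    return smooth, d1, d2

# Example: synthetic induction curve sampled every 3 minutes
t = np.arange(0, 300, 3.0)
trace = 1000.0 / (1.0 + np.exp(-(t - 150.0) / 20.0))
smooth, rate, accel = trace_derivatives(trace, dt=3.0)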
24 Related JoVE Articles!
Dual-phase Cone-beam Computed Tomography to See, Reach, and Treat Hepatocellular Carcinoma during Drug-eluting Beads Transarterial Chemo-embolization
Authors: Vania Tacher, MingDe Lin, Nikhil Bhagat, Nadine Abi Jaoudeh, Alessandro Radaelli, Niels Noordhoek, Bart Carelsen, Bradford J. Wood, Jean-François Geschwind.
Institutions: The Johns Hopkins Hospital, Philips Research North America, National Institutes of Health, Philips Healthcare.
The advent of cone-beam computed tomography (CBCT) in the angiography suite has been revolutionary in interventional radiology. CBCT offers three-dimensional (3D) diagnostic imaging in the interventional suite and can enhance minimally-invasive therapy beyond the limitations of 2D angiography alone. The role of CBCT has been recognized in transarterial chemo-embolization (TACE) treatment of hepatocellular carcinoma (HCC). The recent introduction of a new CBCT technique, dual-phase CBCT (DP-CBCT), improves intra-arterial HCC treatment with drug-eluting beads (DEB-TACE). DP-CBCT can be used to localize liver tumors with the diagnostic accuracy of multi-phasic multidetector computed tomography (M-MDCT) and contrast-enhanced magnetic resonance imaging (CE-MRI) (see the tumor), to guide the guidewire and microcatheter intra-arterially to the desired location for selective therapy (reach the tumor), and to evaluate treatment success during the procedure (treat the tumor). The purpose of this manuscript is to illustrate how DP-CBCT is used in DEB-TACE to see, reach, and treat HCC.
Medicine, Issue 82, Carcinoma, Hepatocellular, Tomography, X-Ray Computed, Surgical Procedures, Minimally Invasive, Digestive System Diseases, Diagnosis, Therapeutics, Surgical Procedures, Operative, Equipment and Supplies, Transarterial chemo-embolization, Hepatocellular carcinoma, Dual-phase cone-beam computed tomography, 3D roadmap, Drug-Eluting Beads
Rapid Analysis and Exploration of Fluorescence Microscopy Images
Authors: Benjamin Pavie, Satwik Rajaram, Austin Ouyang, Jason M. Altschuler, Robert J. Steininger III, Lani F. Wu, Steven J. Altschuler.
Institutions: UT Southwestern Medical Center, Princeton University.
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine-tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, confirm responses to perturbations, and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control, but also as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
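PhenoRipper's published implementation is more involved, but the segmentation-free idea can be sketched in a few lines of Python (the block size, features, and cluster count below are illustrative): split each image into fixed-size blocks, learn a vocabulary of recurring block types across the experiment, and profile every image by the histogram of block types it contains.

import numpy as np
from sklearn.cluster import KMeans

def block_features(img, block=20):
    """Cut a 2-D image into non-overlapping blocks and return one
    simple feature vector (mean, standard deviation) per block."""
    h, w = img.shape
    feats = [[img[i:i + block, j:j + block].mean(),
              img[i:i + block, j:j + block].std()]
             for i in range(0, h - block + 1, block)
             for j in range(0, w - block + 1, block)]
    return np.array(feats)

def image_profiles(images, n_block_types=10, block=20):
    """Describe each image by its histogram of 'block types', learned
    by clustering block features pooled over the whole experiment."""
    pooled = np.vstack([block_features(im, block) for im in images])
    km = KMeans(n_clusters=n_block_types, n_init=10).fit(pooled)
    profiles = []
    for im in images:
        labels = km.predict(block_features(im, block))
        profiles.append(np.bincount(labels, minlength=n_block_types)
                        / labels.size)
    return np.array(profiles)

Images with similar phenotypes end up with similar profiles, so conditions can be grouped or checked for reproducibility without ever segmenting a cell.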
Basic Protocol, Issue 85, PhenoRipper, fluorescence microscopy, image analysis, High-content analysis, high-throughput screening, Open-source, Phenotype
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
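A minimal sketch of the IC idea, assuming activity patterns have already been extracted per time point: compute a multi-voxel discriminability value at each time point in each region, then correlate those discriminability timecourses between regions. The discriminability measure below (correlation with the correct condition's mean pattern minus the best competing condition's) is one simple choice, not necessarily the authors' exact formulation.

import numpy as np
from scipy.stats import spearmanr

def discriminability_timecourse(data, templates, labels):
    """data: (n_timepoints, n_voxels) patterns for one region;
    templates: {condition: (n_voxels,) mean pattern};
    labels: condition label per time point.
    Returns one MVP-discriminability value per time point."""
    disc = []
    for pattern, lab in zip(data, labels):
        r_correct = np.corrcoef(pattern, templates[lab])[0, 1]
        r_other = max(np.corrcoef(pattern, t)[0, 1]
                      for c, t in templates.items() if c != lab)
        disc.append(r_correct - r_other)
    return np.array(disc)

def informational_connectivity(disc_a, disc_b):
    """IC between two regions: rank correlation of their
    discriminability fluctuations across time."""
    rho, p = spearmanr(disc_a, disc_b)
    return rho, p

Two regions can show strong IC even when their raw mean timecourses are uncorrelated, which is exactly the connectivity that FC misses.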
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
Preparation of Segmented Microtubules to Study Motions Driven by the Disassembling Microtubule Ends
Authors: Vladimir A. Volkov, Anatoly V. Zaytsev, Ekaterina L. Grishchuk.
Institutions: Russian Academy of Sciences, Federal Research Center of Pediatric Hematology, Oncology and Immunology, Moscow, Russia, University of Pennsylvania.
Microtubule depolymerization can provide force to transport different protein complexes and protein-coated beads in vitro. The underlying mechanisms are thought to play a vital role in the microtubule-dependent chromosome motions during cell division, but the relevant proteins and their exact roles are ill-defined. Thus, there is a growing need to develop assays with which to study such motility in vitro using purified components and defined biochemical milieu. Microtubules, however, are inherently unstable polymers; their switching between growth and shortening is stochastic and difficult to control. The protocols we describe here take advantage of segmented microtubules that are made with photoablatable stabilizing caps. Depolymerization of such segmented microtubules can be triggered with high temporal and spatial resolution, thereby assisting studies of motility at the disassembling microtubule ends. This technique can be used to carry out a quantitative analysis of the number of molecules in the fluorescently-labeled protein complexes, which move processively with dynamic microtubule ends. To optimize the signal-to-noise ratio in this and other quantitative fluorescence assays, coverslips should be treated to reduce nonspecific adsorption of soluble fluorescently-labeled proteins. Detailed protocols are provided to take into account the unevenness of fluorescence illumination and to determine the intensity of a single fluorophore using an equidistant Gaussian fit. Finally, we describe the use of segmented microtubules to study microtubule-dependent motions of protein-coated microbeads, providing insights into the ability of different motor and nonmotor proteins to couple microtubule depolymerization to processive cargo motion.
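To illustrate the equidistant Gaussian fit mentioned above: intensities of spots carrying 1, 2, 3, ... fluorophores form histogram peaks at integer multiples of the single-fluorophore intensity, so fitting a sum of Gaussians constrained to equidistant means recovers that unit intensity. The Python sketch below shows the idea; the peak count, bin number, and initial guesses are illustrative, not the protocol's exact settings.

import numpy as np
from scipy.optimize import curve_fit

def equidistant_gaussians(x, unit, sigma, *amps):
    """Sum of Gaussians with means at integer multiples of 'unit'
    (the single-fluorophore intensity); amps are peak amplitudes."""
    return sum(a * np.exp(-(x - (k + 1) * unit) ** 2 / (2 * sigma ** 2))
               for k, a in enumerate(amps))

def fit_single_fluorophore(intensities, n_peaks=4, bins=100):
    """Estimate the single-fluorophore intensity from a histogram of
    spot intensities by an equidistant Gaussian fit."""
    counts, edges = np.histogram(intensities, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    u0 = np.median(intensities) / 2            # crude initial guess
    p0 = [u0, 0.2 * u0] + [float(counts.max())] * n_peaks
    popt, _ = curve_fit(equidistant_gaussians, centers, counts, p0=p0)
    return popt[0]                             # unit intensity

# molecules per complex ≈ spot_intensity / fit_single_fluorophore(...)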
Basic Protocol, Issue 85, microscopy flow chamber, single-molecule fluorescence, laser trap, microtubule-binding protein, microtubule-dependent motor, microtubule tip-tracking
Preparation and Use of Photocatalytically Active Segmented Ag|ZnO and Coaxial TiO2-Ag Nanowires Made by Templated Electrodeposition
Authors: A. Wouter Maijenburg, Eddy J.B. Rodijk, Michiel G. Maas, Johan E. ten Elshof.
Institutions: University of Twente.
Photocatalytically active nanostructures require a large specific surface area with the presence of many catalytically active sites for the oxidation and reduction half reactions, and fast electron (hole) diffusion and charge separation. Nanowires present suitable architectures to meet these requirements. Axially segmented Ag|ZnO and radially segmented (coaxial) TiO2-Ag nanowires with a diameter of 200 nm and a length of 6-20 µm were made by templated electrodeposition within the pores of polycarbonate track-etched (PCTE) or anodized aluminum oxide (AAO) membranes, respectively. In the photocatalytic experiments, the ZnO and TiO2 phases acted as photoanodes, and Ag as cathode. No external circuit is needed to connect both electrodes, which is a key advantage over conventional photo-electrochemical cells. For making segmented Ag|ZnO nanowires, the Ag salt electrolyte was replaced after formation of the Ag segment to form a ZnO segment attached to the Ag segment. For making coaxial TiO2-Ag nanowires, a TiO2 gel was first formed by the electrochemically induced sol-gel method. Drying and thermal annealing of the as-formed TiO2 gel resulted in the formation of crystalline TiO2 nanotubes. A subsequent Ag electrodeposition step inside the TiO2 nanotubes resulted in formation of coaxial TiO2-Ag nanowires. Due to the combination of an n-type semiconductor (ZnO or TiO2) and a metal (Ag) within the same nanowire, a Schottky barrier was created at the interface between the phases. To demonstrate the photocatalytic activity of these nanowires, the Ag|ZnO nanowires were used in a photocatalytic experiment in which H2 gas was detected upon UV illumination of the nanowires dispersed in a methanol/water mixture. After 17 min of illumination, approximately 0.2 vol% H2 gas was detected from a suspension of ~0.1 g of Ag|ZnO nanowires in a 50 ml 80 vol% aqueous methanol solution.
Physics, Issue 87, Multicomponent nanowires, electrochemistry, sol-gel processes, photocatalysis, photochemistry, H2 evolution
3D Printing of Preclinical X-ray Computed Tomographic Data Sets
Authors: Evan Doney, Lauren A. Krumdick, Justin M. Diener, Connor A. Wathen, Sarah E. Chapman, Brian Stamile, Jeremiah E. Scott, Matthew J. Ravosa, Tony Van Avermaete, W. Matthew Leevy.
Institutions: University of Notre Dame, MakerBot Industries LLC.
Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is a difficulty in making highly complex products in a timely, cost-effective manner.1 However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models.2 These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency.3,4 The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG software packages.
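The published workflow runs through PMOD, ImageJ, Meshlab, Netfabb, and ReplicatorG; for readers who prefer a scripted route, the central surface-rendering step (isosurface extraction from the CT volume followed by STL export) can be sketched in Python with scikit-image and numpy-stl. The intensity threshold and voxel spacing below are illustrative and depend on the scan.

import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl package

def ct_to_stl(volume, threshold, spacing, out_path="skeleton.stl"):
    """Extract an isosurface (e.g. bone at a given intensity threshold)
    from a CT volume and write a 3D-printable STL mesh."""
    verts, faces, normals, values = measure.marching_cubes(
        volume, level=threshold, spacing=spacing)
    m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        m.vectors[i] = verts[f]          # three vertices per triangle
    m.save(out_path)

# Example call for an isotropic 0.125 mm preclinical scan:
# ct_to_stl(ct_volume, threshold=400, spacing=(0.125, 0.125, 0.125))

The resulting STL can then be repaired and scaled in Meshlab/Netfabb before slicing for the printer, as in the protocol.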
Medicine, Issue 73, Anatomy, Physiology, Molecular Biology, Biomedical Engineering, Bioengineering, Chemistry, Biochemistry, Materials Science, Engineering, Manufactured Materials, Technology, Animal Structures, Life Sciences (General), 3D printing, X-ray Computed Tomography, CT, CT scans, data extrusion, additive printing, in vivo imaging, clinical techniques, imaging
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a few other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
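Two of the steps named above, cleaning of noisy raw fixes and density estimation, can be sketched in Python as follows; the walking-speed cutoff and grid resolution are illustrative, and the actual pipeline runs as an ArcGIS interface.

import numpy as np
from scipy.stats import gaussian_kde

def clean_track(xy, t, max_speed=3.0):
    """Drop GPS fixes that imply implausible speeds (m/s), a simple
    automatic pre-processing rule for noisy pedestrian tracks.
    xy: (n, 2) coordinates in meters; t: (n,) timestamps in seconds."""
    keep = [0]
    for i in range(1, len(xy)):
        dt = t[i] - t[keep[-1]]
        dist = np.linalg.norm(xy[i] - xy[keep[-1]])
        if dt > 0 and dist / dt <= max_speed:
            keep.append(i)
    return xy[keep], t[keep]

def density_surface(xy, grid_size=100):
    """Kernel density estimate of visited locations ('hot spots')."""
    kde = gaussian_kde(xy.T)
    gx, gy = np.meshgrid(
        np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_size),
        np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_size))
    dens = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return gx, gy, dens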
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Rescue of Recombinant Newcastle Disease Virus from cDNA
Authors: Juan Ayllon, Adolfo García-Sastre, Luis Martínez-Sobrido.
Institutions: Icahn School of Medicine at Mount Sinai, University of Rochester.
Newcastle disease virus (NDV), the prototype member of the Avulavirus genus of the family Paramyxoviridae1, is a non-segmented, negative-sense, single-stranded, enveloped RNA virus (Figure 1) with potential applications as a vector for vaccination and treatment of human diseases. In-depth exploration of these applications has only become possible after the establishment of reverse genetics techniques to rescue recombinant viruses from plasmids encoding their complete genomes as cDNA2-5. Viral cDNA can be conveniently modified in vitro by using standard cloning procedures to alter the genotype of the virus and/or to include new transcriptional units. Rescue of such genetically modified viruses provides a valuable tool to understand factors affecting multiple stages of infection, as well as allows for the development and improvement of vectors for the expression and delivery of antigens for vaccination and therapy. Here we describe a protocol for the rescue of recombinant NDVs.
Immunology, Issue 80, Paramyxoviridae, Vaccines, Oncolytic Virotherapy, Immunity, Innate, Newcastle disease virus (NDV), MVA-T7, reverse genetics techniques, plasmid transfection, recombinant virus, HA assay
Reconstruction of 3-Dimensional Histology Volume and its Application to Study Mouse Mammary Glands
Authors: Rushin Shojaii, Stephanie Bacopulos, Wenyi Yang, Tigran Karavardanyan, Demetri Spyropoulos, Afshin Raouf, Anne Martel, Arun Seth.
Institutions: University of Toronto, Sunnybrook Research Institute, Medical University of South Carolina, University of Manitoba.
Histology volume reconstruction facilitates the study of 3D shape and volume change of an organ at the level of macrostructures made up of cells. It can also be used to investigate and validate novel techniques and algorithms in volumetric medical imaging and therapies. Creating 3D high-resolution atlases of different organs1,2,3 is another application of histology volume reconstruction. This provides a resource for investigating tissue structures and the spatial relationship between various cellular features. We present an image registration approach for histology volume reconstruction, which uses a set of optical blockface images. The reconstructed histology volume represents a reliable shape of the processed specimen with no propagated post-processing registration error. The Hematoxylin and Eosin (H&E) stained sections of two mouse mammary glands were registered to their corresponding blockface images using boundary points extracted from the edges of the specimen in histology and blockface images. The accuracy of the registration was visually evaluated. The alignment of the macrostructures of the mammary glands was also visually assessed at high resolution. This study delineates the different steps of this image registration pipeline, ranging from excision of the mammary gland through to 3D histology volume reconstruction. While 2D histology images reveal the structural differences between pairs of sections, 3D histology volume provides the ability to visualize the differences in shape and volume of the mammary glands.
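The section-to-blockface registration uses boundary points extracted from both images; at its core this is a point-based rigid alignment. Below is a minimal Python sketch of such an alignment (the standard least-squares rotation-plus-translation fit, assuming point correspondences are already established; the published pipeline is more elaborate).

import numpy as np

def rigid_register(src, dst):
    """Best rigid transform (rotation R, translation t) mapping
    corresponding 2-D boundary points src onto dst, both (n, 2)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = (U @ np.diag([1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t                              # apply as R @ p + t

Aligning every section to its own blockface image, rather than section-to-section, is what prevents registration errors from propagating through the stack.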
Bioengineering, Issue 89, Histology Volume Reconstruction, Transgenic Mouse Model, Image Registration, Digital Histology, Image Processing, Mouse Mammary Gland
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Authors: Cila Herman, Muge Pirtini Cetingul.
Institutions: The Johns Hopkins University.
In 2010 approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death1. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection2,3. Considering the large numbers of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid the diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost considerations. Details about these techniques and comparisons are available in the literature4. Infrared (IR) imaging was shown to be a useful method to diagnose the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in the temperature of the body, which in turn affect the temperature of the skin5,6. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular the deviation from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology7,8 several decades ago, when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image processing tools were unavailable. This situation changed dramatically in the late 1990s-2000s. Advances in IR instrumentation, implementation of digital image processing algorithms and dynamic IR imaging, which enables scientists to analyze not only the spatial, but also the temporal thermal behavior of the skin9, allowed breakthroughs in the field. In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma10-13. In this study, we show data obtained in a patient study in which patients with a pigmented lesion and a clinical indication for biopsy were selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We concluded that the increased metabolic activity of the melanoma lesion can be detected by dynamic infrared imaging.
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
In vivo Imaging of Tumor Angiogenesis using Fluorescence Confocal Videomicroscopy
Authors: Victor Fitoussi, Nathalie Faye, Foucauld Chamming's, Olivier Clement, Charles-Andre Cuenod, Laure S. Fournier.
Institutions: Université Paris Descartes Sorbonne Paris Cité, INSERM UMR-S970, Hôpital Européen Georges Pompidou, Service de Radiologie.
Fibered confocal fluorescence in vivo imaging with a fiber optic bundle uses the same principle as fluorescence confocal microscopy. It can excite fluorescent elements in situ through the optical fibers and then record some of the emitted photons via the same fibers. The light source is a laser that sends the exciting light through an element within the fiber bundle and, as it scans over the sample, recreates an image pixel by pixel. As this scan is very fast, combining it with dedicated image processing software yields images in real time at a frequency of 12 frames/sec. We developed a technique to quantitatively characterize capillary morphology and function, using a confocal fluorescence videomicroscopy device. The first step in our experiment was to record 5 sec movies in the four quadrants of the tumor to visualize the capillary network. All movies were processed using software (ImageCell, Mauna Kea Technology, Paris, France) that performs an automated segmentation of vessels around a chosen diameter (10 μm in our case). Thus, we could quantify the 'functional capillary density', which is the ratio between the total vessel area and the total area of the image. This parameter was a surrogate marker for microvascular density, usually measured using pathology tools. The second step was to record movies of the tumor over 20 min to quantify leakage of the macromolecular contrast agent through the capillary wall into the interstitium. By measuring the ratio of signal intensity in the interstitium over that in the vessels, a 'leakage index' was obtained, acting as a surrogate marker for capillary permeability.
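Given a binary vessel mask per frame (produced in the protocol by the ImageCell software), both readouts reduce to simple ratios. The Python sketch below is illustrative only, including the crude global-threshold segmentation used as a stand-in.

import numpy as np

def segment_vessels(frame, thresh=None):
    """Crude vessel segmentation by a global intensity threshold; the
    published analysis used ImageCell's diameter-based segmentation."""
    if thresh is None:
        thresh = frame.mean() + frame.std()
    return frame > thresh

def functional_capillary_density(vessel_mask):
    """Total vessel area over total image area."""
    return vessel_mask.sum() / vessel_mask.size

def leakage_index(frame, vessel_mask):
    """Mean interstitial over mean intravascular signal; rises as the
    macromolecular contrast agent leaks out of the capillaries."""
    return frame[~vessel_mask].mean() / frame[vessel_mask].mean()

# Over the 20-min movie:
# leakage = [leakage_index(f, segment_vessels(f)) for f in movie_frames]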
Medicine, Issue 79, Cancer, Biological, Microcirculation, optical imaging devices (design and techniques), Confocal videomicroscopy, microcirculation, capillary leakage, FITC-Dextran, angiogenesis
Visualization of ATP Synthase Dimers in Mitochondria by Electron Cryo-tomography
Authors: Karen M. Davies, Bertram Daum, Vicki A. M. Gold, Alexander W. Mühleip, Tobias Brandt, Thorsten B. Blum, Deryck J. Mills, Werner Kühlbrandt.
Institutions: Max Planck Institute of Biophysics.
Electron cryo-tomography is a powerful tool in structural biology, capable of visualizing the three-dimensional structure of biological samples, such as cells, organelles, membrane vesicles, or viruses at molecular detail. To achieve this, the aqueous sample is rapidly vitrified in liquid ethane, which preserves it in a close-to-native, frozen-hydrated state. In the electron microscope, tilt series are recorded at liquid nitrogen temperature, from which 3D tomograms are reconstructed. The signal-to-noise ratio of the tomographic volume is inherently low. Recognizable, recurring features are enhanced by subtomogram averaging, by which individual subvolumes are cut out, aligned and averaged to reduce noise. In this way, 3D maps with a resolution of 2 nm or better can be obtained. A fit of available high-resolution structures to the 3D volume then produces atomic models of protein complexes in their native environment. Here we show how we use electron cryo-tomography to study the in situ organization of large membrane protein complexes in mitochondria. We find that ATP synthases are organized in rows of dimers along highly curved apices of the inner membrane cristae, whereas complex I is randomly distributed in the membrane regions on either side of the rows. By subtomogram averaging we obtained a structure of the mitochondrial ATP synthase dimer within the cristae membrane.
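A minimal Python sketch of translational subtomogram averaging, assuming the subvolumes are already cut out of the tomogram: align each subvolume to a reference by 3D phase correlation, then average. Full pipelines also search over rotations and compensate for the missing wedge; both are omitted here.

import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def subtomogram_average(subvolumes, reference=None, upsample=4):
    """Align each 3-D subvolume to a reference by phase correlation
    and average the aligned copies to suppress noise."""
    ref = subvolumes[0] if reference is None else reference
    aligned = []
    for vol in subvolumes:
        offset, _, _ = phase_cross_correlation(
            ref, vol, upsample_factor=upsample)
        aligned.append(shift(vol, offset))
    return np.mean(aligned, axis=0)

Averaging N aligned copies improves the signal-to-noise ratio roughly as the square root of N, which is what makes 2 nm maps possible from inherently noisy tomographic volumes.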
Structural Biology, Issue 91, electron microscopy, electron cryo-tomography, mitochondria, ultrastructure, membrane structure, membrane protein complexes, ATP synthase, energy conversion, bioenergetics
A Comprehensive Protocol for Manual Segmentation of the Medial Temporal Lobe Structures
Authors: Matthew Moore, Yifan Hu, Sarah Woo, Dylan O'Hearn, Alexandru D. Iordan, Sanda Dolcos, Florin Dolcos.
Institutions: University of Illinois Urbana-Champaign.
The present paper describes a comprehensive protocol for manual tracing of the set of brain regions comprising the medial temporal lobe (MTL): amygdala, hippocampus, and the associated parahippocampal regions (perirhinal, entorhinal, and parahippocampal proper). Unlike most other tracing protocols available, typically focusing on certain MTL areas (e.g., amygdala and/or hippocampus), the integrative perspective adopted by the present tracing guidelines allows for clear localization of all MTL subregions. By integrating information from a variety of sources, including extant tracing protocols separately targeting various MTL structures, histological reports, and brain atlases, and with the complement of illustrative visual materials, the present protocol provides an accurate, intuitive, and convenient guide for understanding the MTL anatomy. The need for such tracing guidelines is also emphasized by illustrating possible differences between automatic and manual segmentation protocols. This knowledge can be applied toward research involving not only structural MRI investigations but also structural-functional colocalization and fMRI signal extraction from anatomically defined ROIs, in healthy and clinical groups alike.
Neuroscience, Issue 89, Anatomy, Segmentation, Medial Temporal Lobe, MRI, Manual Tracing, Amygdala, Hippocampus, Perirhinal Cortex, Entorhinal Cortex, Parahippocampal Cortex
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded, stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
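For reference, the fractional anisotropy used throughout such analyses is a fixed function of the three diffusion-tensor eigenvalues; here is a small Python implementation of the standard formula.

import numpy as np

def fractional_anisotropy(evals):
    """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    computed from the three eigenvalues of the diffusion tensor."""
    evals = np.asarray(evals, dtype=float)
    dev = evals - evals.mean()
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum()) / den if den > 0 else 0.0

# Strongly anisotropic, white-matter-like tensor (units mm^2/s):
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))  # ≈ 0.80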
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
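The source reconstruction named in the keywords, minimum-norm estimation, has a compact linear form: given a lead field L mapping source currents to channel measurements, the regularized minimum-norm inverse is W = L^T (L L^T + lambda*I)^(-1) and the source estimate is W X. The Python sketch below omits the noise-covariance whitening and depth weighting that practical packages add; the sizes and regularization value are illustrative.

import numpy as np

def minimum_norm_inverse(L, lam=0.1):
    """Regularized minimum-norm inverse operator for a lead field
    L of shape (n_channels, n_sources)."""
    n_ch = L.shape[0]
    return L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_ch))

# Toy example: 64 channels, 5000 cortical source locations
rng = np.random.default_rng(1)
L = rng.standard_normal((64, 5000))
W = minimum_norm_inverse(L)
# source_estimate = W @ eeg_data    # eeg_data: (64, n_times)

This is also why individual or age-appropriate head models matter: they determine L, and errors in L propagate directly into the source estimates.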
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
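The first step above, estimating the local orientation of breast tissue patterns, can be sketched in Python with scikit-image's Gabor filters; the filter frequency and number of orientations below are illustrative, not the paper's parameters.

import numpy as np
from skimage.filters import gabor

def orientation_field(img, frequency=0.1, n_orient=12):
    """Filter with a bank of Gabor filters and keep, per pixel, the
    orientation of maximum response magnitude."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = []
    for theta in thetas:
        real, imag = gabor(img, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))
    responses = np.stack(responses)              # (n_orient, H, W)
    idx = responses.argmax(axis=0)
    return thetas[idx], responses.max(axis=0)    # angle and magnitude

Node-like sites where many orientations converge (detected in the paper via phase portraits) are then characterized by features such as fractal dimension and angular dispersion.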
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
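The binarization and skeletonization step of the image-processing workflow can be sketched with scikit-image; Otsu thresholding here is a simple stand-in for the protocol's optimized confocal/superresolution segmentation.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def tats_binarize_skeletonize(img):
    """Binarize a TATS membrane image and reduce the network to a
    one-pixel-wide skeleton for component-wise quantification."""
    binary = img > threshold_otsu(img)
    skel = skeletonize(binary)
    density = skel.sum() / binary.size   # crude network-density metric
    return binary, skel, density

From the skeleton, branch points, segment lengths, and network orientation can then be quantified per cell.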
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Segmentation and Measurement of Fat Volumes in Murine Obesity Models Using X-ray Computed Tomography
Authors: Todd A. Sasser, Sarah E. Chapman, Shengting Li, Caroline Hudson, Sean P. Orton, Justin M. Diener, Seth T. Gammon, Carlos Correcher, W. Matthew Leevy.
Institutions: Carestream Molecular Imaging, University of Notre Dame, Oncovision, GEM-Imaging S.A.
Obesity is associated with increased morbidity and mortality as well as reduced metrics in quality of life.1 Both environmental and genetic factors are associated with obesity, though the precise underlying mechanisms that contribute to the disease are currently being delineated.2,3 Several small animal models of obesity have been developed and are employed in a variety of studies.4 A critical component to these experiments involves the collection of regional and/or total animal fat content data under varied conditions. Traditional experimental methods available for measuring fat content in small animal models of obesity include invasive (e.g. ex vivo measurement of fat deposits) and non-invasive (e.g. Dual Energy X-ray Absorptiometry (DEXA), or Magnetic Resonance (MR)) protocols, each of which presents relative trade-offs. Current invasive methods for measuring fat content may provide details for organ- and region-specific fat distribution, but sacrificing the subjects will preclude longitudinal assessments. Conversely, current non-invasive strategies provide limited details for organ- and region-specific fat distribution, but enable valuable longitudinal assessment. With the advent of dedicated small animal X-ray computed tomography (CT) systems and customized analytical procedures, both organ- and region-specific analysis of fat distribution and longitudinal profiling may be possible. Recent reports have validated the use of CT for in vivo longitudinal imaging of adiposity in living mice.5,6 Here we provide a modified method that allows for fat/total volume measurement, analysis and visualization utilizing the Carestream Molecular Imaging Albira CT system in conjunction with PMOD and Volview software packages.
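A minimal Python sketch of the fat/total volume measurement: classify voxels whose intensities fall in a typical adipose Hounsfield-unit range and normalize by a whole-body mask. The HU bounds and voxel size are illustrative and depend on scanner calibration.

import numpy as np

def fat_volume(ct_hu, body_mask, lo=-200.0, hi=-50.0, voxel_mm=0.2):
    """Return the fat mask, fat volume in ml, and fat/total fraction
    for a calibrated CT volume (values in Hounsfield units)."""
    fat_mask = (ct_hu >= lo) & (ct_hu <= hi) & body_mask
    voxel_ml = (voxel_mm / 10.0) ** 3          # mm -> cm; cm^3 == ml
    fat_ml = fat_mask.sum() * voxel_ml
    return fat_mask, fat_ml, fat_mask.sum() / body_mask.sum()

Restricting the same computation to a manually drawn abdominal sub-volume gives the region-specific measurements discussed above.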
Medicine, Issue 62, X-ray computed tomography (CT), image analysis, in vivo, obesity, metabolic disorders
High-resolution Functional Magnetic Resonance Imaging Methods for Human Midbrain
Authors: Sucharit Katyal, Clint A. Greene, David Ress.
Institutions: The University of Texas at Austin.
Functional MRI (fMRI) is a widely used tool for non-invasively measuring correlates of human brain activity. However, its use has mostly been focused upon measuring activity on the surface of cerebral cortex rather than in subcortical regions such as midbrain and brainstem. Subcortical fMRI must overcome two challenges: spatial resolution and physiological noise. Here we describe an optimized set of techniques developed to perform high-resolution fMRI in the human superior colliculus (SC), a structure on the dorsal surface of the midbrain; the methods can also be used to image other brainstem and subcortical structures. High-resolution (1.2 mm voxels) fMRI of the SC requires a non-conventional approach. The desired spatial sampling is obtained using a multi-shot (interleaved) spiral acquisition1. Since T2* of SC tissue is longer than in cortex, a correspondingly longer echo time (TE ~ 40 msec) is used to maximize functional contrast. To cover the full extent of the SC, 8-10 slices are obtained. For each session, a structural anatomy with the same slice prescription as the fMRI is also obtained, which is used to align the functional data to a high-resolution reference volume. In a separate session, for each subject, we create a high-resolution (0.7 mm sampling) reference volume using a T1-weighted sequence that gives good tissue contrast. In the reference volume, the midbrain region is segmented using the ITK-SNAP software application2. This segmentation is used to create a 3D surface representation of the midbrain that is both smooth and accurate3. The surface vertices and normals are used to create a map of depth from the midbrain surface within the tissue4. Functional data are transformed into the coordinate system of the segmented reference volume. Depth associations of the voxels enable the averaging of fMRI time series data within specified depth ranges to improve signal quality. Data are rendered on the 3D surface for visualization. In our lab we use this technique for measuring topographic maps of visual stimulation and covert and overt visual attention within the SC1. As an example, we demonstrate the topographic representation of polar angle to visual stimulation in SC.
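The depth-based averaging step has a simple form once each voxel carries a depth-from-surface value from the segmented reference volume; here is a Python sketch (the depth range is an illustrative choice).

import numpy as np

def average_by_depth(timeseries, depths, depth_range=(0.0, 1.8)):
    """Average fMRI time series over voxels whose depth from the SC
    surface (mm) falls in depth_range, to improve signal quality.
    timeseries: (n_voxels, n_timepoints); depths: (n_voxels,)."""
    sel = (depths >= depth_range[0]) & (depths < depth_range[1])
    return timeseries[sel].mean(axis=0)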
Neuroscience, Issue 63, fMRI, midbrain, brainstem, colliculus, BOLD, brain, Magnetic Resonance Imaging, MRI
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: the midline shift estimation and the intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP are extracted and incorporated, such as texture information and blood amount from the CT scans, along with other recorded features such as age and injury severity score. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
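The authors built their prediction model in RapidMiner; an equivalent pre-screening classifier is easy to sketch in scikit-learn. The feature list and class coding below are illustrative placeholders, not the paper's exact feature set.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def icp_prescreen_model(X, y):
    """Train and cross-validate an SVM for ICP-level classes.
    X: one row per scan (e.g. estimated midline shift, texture
    measures, blood amount, age, injury severity score).
    y: ICP class labels (e.g. 0 = normal, 1 = elevated)."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=5)  # held-out accuracy
    return model.fit(X, y), scores.mean()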
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.