JoVE Visualize
 
PubMed Article
Quaternion-based discriminant analysis method for color face recognition.
PLoS ONE
Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer category, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method that represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets in multispectral remote-sensing images. The method first uses a quaternion number to denote each pixel in a color image and a quaternion vector to represent the image as a whole. It then uses a linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method achieves very high accuracy for color face recognition.
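As a rough illustration of the representation step described above (not the authors' implementation), the sketch below encodes each RGB pixel as a pure quaternion r·i + g·j + b·k and flattens the image into a quaternion vector; the quaternion LDA projection itself is omitted, and all names are ours.

```python
import numpy as np

def image_to_quaternion_vector(img):
    """Represent an H x W x 3 color image as a vector of pure quaternions.

    Each pixel (r, g, b) becomes the quaternion 0 + r*i + g*j + b*k,
    stored here as a length-4 float row per pixel.
    """
    h, w, _ = img.shape
    q = np.zeros((h * w, 4))
    q[:, 1:] = img.reshape(-1, 3)  # real part stays 0; i, j, k carry R, G, B
    return q

# Toy 2 x 2 "image": red, green, blue, gray pixels
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [128, 128, 128]]], dtype=float)
print(image_to_quaternion_vector(img))
```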
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Published: 08-30-2013
ABSTRACT
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
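To make the oriented-structure analysis concrete, here is a minimal, generic sketch of a Gabor filter bank in which each pixel is assigned the orientation whose filter responds most strongly. The kernel parameters are illustrative only; the published method uses a specifically tuned Gabor bank followed by phase-portrait modeling, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def orientation_field(image, n_orient=8):
    """Assign each pixel the orientation of its strongest Gabor response."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = np.stack([convolve(image, gabor_kernel(21, 8, t, 4))
                          for t in thetas])
    return thetas[responses.argmax(axis=0)]
```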
22 Related JoVE Articles!
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
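The source reconstruction referred to here relies on minimum-norm estimation (see the keywords below). As a hedged sketch of the underlying linear algebra only, not of the London Baby Lab pipeline: given a lead-field matrix from the (individual or age-specific) head model and the sensor data, the classical L2 minimum-norm estimate is a regularized linear inverse. All names below are ours.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.1):
    """L2 minimum-norm source estimate.

    L   : (n_channels, n_sources) lead-field matrix from the head model
    y   : (n_channels,) or (n_channels, n_times) EEG measurements
    lam : Tikhonov regularization weight
    Minimizes ||y - L x||^2 + lam * ||x||^2 over source amplitudes x.
    """
    n_ch = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_ch)    # regularized channel-space Gram matrix
    return L.T @ np.linalg.solve(gram, y)  # x = L^T (L L^T + lam I)^-1 y
```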
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
A Mouse Model for Pathogen-induced Chronic Inflammation at Local and Systemic Sites
Authors: George Papadopoulos, Carolyn D. Kramer, Connie S. Slocum, Ellen O. Weinberg, Ning Hua, Cynthia V. Gudino, James A. Hamilton, Caroline A. Genco.
Institutions: Boston University School of Medicine, Boston University School of Medicine.
Chronic inflammation is a major driver of pathological tissue damage and a unifying characteristic of many chronic diseases in humans including neoplastic, autoimmune, and chronic inflammatory diseases. Emerging evidence implicates pathogen-induced chronic inflammation in the development and progression of chronic diseases with a wide variety of clinical manifestations. Due to the complex and multifactorial etiology of chronic disease, designing experiments for proof of causality and the establishment of mechanistic links is nearly impossible in humans. An advantage of using animal models is that both genetic and environmental factors that may influence the course of a particular disease can be controlled. Thus, designing relevant animal models of infection represents a key step in identifying host and pathogen specific mechanisms that contribute to chronic inflammation. Here we describe a mouse model of pathogen-induced chronic inflammation at local and systemic sites following infection with the oral pathogen Porphyromonas gingivalis, a bacterium closely associated with human periodontal disease. Oral infection of specific-pathogen free mice induces a local inflammatory response resulting in destruction of tooth supporting alveolar bone, a hallmark of periodontal disease. In an established mouse model of atherosclerosis, infection with P. gingivalis accelerates inflammatory plaque deposition within the aortic sinus and innominate artery, accompanied by activation of the vascular endothelium, an increased immune cell infiltrate, and elevated expression of inflammatory mediators within lesions. We detail methodologies for the assessment of inflammation at local and systemic sites. The use of transgenic mice and defined bacterial mutants makes this model particularly suitable for identifying both host and microbial factors involved in the initiation, progression, and outcome of disease. Additionally, the model can be used to screen for novel therapeutic strategies, including vaccination and pharmacological intervention.
Immunology, Issue 90, Pathogen-Induced Chronic Inflammation; Porphyromonas gingivalis; Oral Bone Loss; Periodontal Disease; Atherosclerosis; Chronic Inflammation; Host-Pathogen Interaction; microCT; MRI
Assessment and Evaluation of the High Risk Neonate: The NICU Network Neurobehavioral Scale
Authors: Barry M. Lester, Lynne Andreozzi-Fontaine, Edward Tronick, Rosemarie Bigsby.
Institutions: Brown University, Women & Infants Hospital of Rhode Island, University of Massachusetts, Boston.
There has been a long-standing interest in the assessment of the neurobehavioral integrity of the newborn infant. The NICU Network Neurobehavioral Scale (NNNS) was developed as an assessment for the at-risk infant. These are infants who are at increased risk for poor developmental outcome because of insults during prenatal development, such as substance exposure or prematurity, or factors such as poverty, poor nutrition, or lack of prenatal care that can have adverse effects on the intrauterine environment and affect the developing fetus. The NNNS assesses the full range of infant neurobehavioral performance, including neurological integrity, behavioral functioning, and signs of stress/abstinence. The NNNS is a noninvasive neonatal assessment tool with demonstrated validity as a predictor, not only of medical outcomes such as cerebral palsy diagnosis, neurological abnormalities, and diseases with risks to the brain, but also of developmental outcomes such as mental and motor functioning, behavior problems, school readiness, and IQ. The NNNS can identify infants at high risk for abnormal developmental outcome and is an important clinical tool that enables medical researchers and health practitioners to identify these infants and develop intervention programs to optimize their development as early as possible. The video shows the NNNS procedures, with examples of normal and abnormal performance, and the various clinical populations in which the exam can be used.
Behavior, Issue 90, NICU Network Neurobehavioral Scale, NNNS, High risk infant, Assessment, Evaluation, Prediction, Long term outcome
Time Multiplexing Super Resolving Technique for Imaging from a Moving Platform
Authors: Asaf Ilovitsh, Shlomo Zach, Zeev Zalevsky.
Institutions: Bar-Ilan University, Kfar Saba, Israel.
We propose a method for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on a moving platform, such as an aircraft or satellite. The resolution improvement is obtained in a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton based algorithm. The retrieved phase allows the field to be numerically back-propagated to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The optical fields obtained at the aperture plane are combined, and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called synthetic aperture radar (SAR), in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated through a laboratory experiment.
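The phase-retrieval step is a Gerchberg-Saxton-type iteration. The authors use an improved variant driven by three defocused images; for orientation only, the textbook two-plane version looks like the following sketch (variable names are ours).

```python
import numpy as np

def gerchberg_saxton(amp_image, amp_aperture, n_iter=200):
    """Textbook two-plane Gerchberg-Saxton phase retrieval.

    amp_image    : measured amplitude in the image plane (sqrt of intensity)
    amp_aperture : known amplitude in the aperture (Fourier) plane
    Returns the retrieved complex field in the image plane.
    """
    rng = np.random.default_rng(0)
    field = amp_image * np.exp(2j * np.pi * rng.random(amp_image.shape))
    for _ in range(n_iter):
        spectrum = np.fft.fft2(field)
        # keep the phase, enforce the measured aperture amplitude
        spectrum = amp_aperture * np.exp(1j * np.angle(spectrum))
        field = np.fft.ifft2(spectrum)
        # keep the phase, enforce the measured image amplitude
        field = amp_image * np.exp(1j * np.angle(field))
    return field
```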
Physics, Issue 84, Superresolution, Fourier optics, Remote Sensing and Sensors, Digital Image Processing, optics, resolution
SIVQ-LCM Protocol for the ArcturusXT Instrument
Authors: Jason D. Hipp, Jerome Cheng, Jeffrey C. Hanson, Avi Z. Rosenberg, Michael R. Emmert-Buck, Michael A. Tangrea, Ulysses J. Balis.
Institutions: National Institutes of Health, University of Michigan.
SIVQ-LCM is a new methodology that automates and streamlines the more traditional, user-dependent laser dissection process. It aims to create an advanced, rapidly customizable laser dissection platform technology. In this report, we describe the integration of the image analysis software Spatially Invariant Vector Quantization (SIVQ) onto the ArcturusXT instrument. The ArcturusXT system contains both an infrared (IR) and ultraviolet (UV) laser, allowing for specific cell or large area dissections. The principal goal is to improve the speed, accuracy, and reproducibility of the laser dissection to increase sample throughput. This novel approach facilitates microdissection of both animal and human tissues in research and clinical workflows.
Bioengineering, Issue 89, SIVQ, LCM, personalized medicine, digital pathology, image analysis, ArcturusXT
Echo Particle Image Velocimetry
Authors: Nicholas DeMarchi, Christopher White.
Institutions: University of New Hampshire.
The transport of mass, momentum, and energy in fluid flows is ultimately determined by spatiotemporal distributions of the fluid velocity field [1]. Consequently, a prerequisite for understanding, predicting, and controlling fluid flows is the capability to measure the velocity field with adequate spatial and temporal resolution [2]. For velocity measurements in optically opaque fluids or through optically opaque geometries, echo particle image velocimetry (EPIV) is an attractive diagnostic technique to generate "instantaneous" two-dimensional fields of velocity [3-6]. In this paper, the operating protocol for an EPIV system built by integrating a commercial medical ultrasound machine [7] with a PC running commercial particle image velocimetry (PIV) software [8] is described, and validation measurements in Hagen-Poiseuille (i.e., laminar pipe) flow are reported. For the EPIV measurements, a phased array probe connected to the medical ultrasound machine is used to generate a two-dimensional ultrasound image by pulsing the piezoelectric probe elements at different times. Each probe element transmits an ultrasound pulse into the fluid, and tracer particles in the fluid (either naturally occurring or seeded) reflect ultrasound echoes back to the probe, where they are recorded. The amplitude of the reflected ultrasound waves and their time delay relative to transmission are used to create what is known as B-mode (brightness mode) two-dimensional ultrasound images. Specifically, the time delay is used to determine the position of the scatterer in the fluid, and the amplitude is used to assign intensity to the scatterer. The time required to obtain a single B-mode image, δt, is determined by the time it takes to pulse all the elements of the phased array probe; when acquiring multiple B-mode images, the frame rate of the system in frames per second is fps = 1/δt. (See [9] for a review of ultrasound imaging.) For a typical EPIV experiment, the frame rate is between 20-60 fps, depending on flow conditions, and 100-1,000 B-mode images of the spatial distribution of the tracer particles in the flow are acquired. Once acquired, the B-mode ultrasound images are transmitted via an Ethernet connection to the PC running the commercial PIV software. Using the PIV software, tracer particle displacement fields, D(x,y) [pixels] (where x and y denote horizontal and vertical spatial position in the ultrasound image, respectively), are acquired by applying cross-correlation algorithms to successive ultrasound B-mode images [10]. The velocity fields, u(x,y) [m/s], are determined from the displacement fields, knowing the time step between image pairs, ΔT [s], and the image magnification, M [m/pixel], i.e., u(x,y) = M·D(x,y)/ΔT. The time step between images is ΔT = 1/fps + D(x,y)/B, where B [pixels/s] is the rate at which the ultrasound beam sweeps across the image width. In the present study, M = 77 [μm/pixel], fps = 49.5 [1/s], and B = 25,047 [pixels/s]. Once acquired, the velocity fields can be analyzed to compute flow quantities of interest.
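The velocity computation in the last paragraph can be transcribed directly. A minimal sketch using the stated constants (M = 77 μm/pixel, fps = 49.5, B = 25,047 pixels/s):

```python
import numpy as np

M = 77e-6      # image magnification [m/pixel]
fps = 49.5     # frame rate [1/s]
B = 25047.0    # sweep rate of the ultrasound beam [pixels/s]

def velocity_field(D):
    """Convert a PIV displacement field D [pixels] to velocity [m/s].

    The effective time step depends on the local displacement because
    the beam sweeps across the image: dT = 1/fps + D/B.
    """
    dT = 1.0 / fps + D / B
    return M * D / dT

D = np.full((4, 4), 5.0)        # uniform 5-pixel displacement
print(velocity_field(D))        # ~0.019 m/s everywhere
```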
Mechanical Engineering, Issue 70, Physics, Engineering, Physical Sciences, Ultrasound, cross correlation, velocimetry, opaque fluids, particle, flow, fluid, EPIV
Acute Dissociation of Lamprey Reticulospinal Axons to Enable Recording from the Release Face Membrane of Individual Functional Presynaptic Terminals
Authors: Shankar Ramachandran, Simon Alford.
Institutions: University of Illinois at Chicago.
Synaptic transmission is an extremely rapid process. Action potential driven influx of Ca2+ into the presynaptic terminal, through voltage-gated calcium channels (VGCCs) located in the release face membrane, is the trigger for vesicle fusion and neurotransmitter release. Crucial to the rapidity of synaptic transmission is the spatial and temporal synchrony between the arrival of the action potential, VGCCs and the neurotransmitter release machinery. The ability to directly record Ca2+ currents from the release face membrane of individual presynaptic terminals is imperative for a precise understanding of the relationship between presynaptic Ca2+ and neurotransmitter release. Access to the presynaptic release face membrane for electrophysiological recording is not available in most preparations and presynaptic Ca2+ entry has been characterized using imaging techniques and macroscopic current measurements – techniques that do not have sufficient temporal resolution to visualize Ca2+ entry. The characterization of VGCCs directly at single presynaptic terminals has not been possible in central synapses and has thus far been successfully achieved only in the calyx-type synapse of the chick ciliary ganglion and in rat calyces. We have successfully addressed this problem in the giant reticulospinal synapse of the lamprey spinal cord by developing an acutely dissociated preparation of the spinal cord that yields isolated reticulospinal axons with functional presynaptic terminals devoid of postsynaptic structures. We can fluorescently label and identify individual presynaptic terminals and target them for recording. Using this preparation, we have characterized VGCCs directly at the release face of individual presynaptic terminals using immunohistochemistry and electrophysiology approaches. Ca2+ currents have been recorded directly at the release face membrane of individual presynaptic terminals, the first such recording to be carried out at central synapses.
Neuroscience, Issue 92, reticulospinal synapse, reticulospinal axons, presynaptic terminal, presynaptic calcium, voltage-gated calcium channels, vesicle fusion, synaptic transmission, neurotransmitter release, spinal cord, lamprey, synaptic vesicles, acute dissociation
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
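For readers unfamiliar with DoE terminology, the simplest design is a full factorial, which enumerates every combination of factor levels; DoE software then typically prunes or augments such designs (fractional or optimal designs) rather than running them all. A minimal sketch with hypothetical factors standing in for those used in the study:

```python
from itertools import product

# Hypothetical factors and levels standing in for those in the study
# (promoter, 5'UTR, incubation conditions, plant age, ...).
factors = {
    "promoter": ["35S", "nos"],
    "incubation_temp_C": [22, 25, 28],
    "plant_age_d": [35, 42],
}

# Full factorial: every combination of levels (2 x 3 x 2 = 12 runs).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
```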
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Authors: Christopher J. Zopf, Narendra Maheshri.
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool in the study of many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole-cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Georgia Health Sciences University, Georgia Health Sciences University, Palo Alto Research Center, Palo Alto Research Center, University of Minnesota .
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Quantification of Orofacial Phenotypes in Xenopus
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Institutions: Virginia Commonwealth University.
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development allows for more rigorous analysis of orofacial phenotypes following treatment with agents that genetically or molecularly manipulate gene expression or protein function. Using two-dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two-dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where significant landmark positions have changed. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware software, and thus will be a valuable research and teaching resource.
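A minimal sketch of the landmark-based PCA step (synthetic coordinates, assumed already aligned, e.g., by Procrustes superimposition; the study itself uses dedicated freeware for this):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in: 40 embryos x 12 landmarks x 2 (x, y) coordinates
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(40, 12, 2))

X = landmarks.reshape(len(landmarks), -1)  # flatten to 40 x 24
pca = PCA(n_components=2)
scores = pca.fit_transform(X)              # coordinates for the scatter plot

# Embryos with similar orofacial shapes cluster together in this plane.
print(scores[:5])
print(pca.explained_variance_ratio_)
```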
Developmental Biology, Issue 93, Orofacial quantification, geometric morphometrics, Xenopus, orofacial development, orofacial defects, shape changes, facial dimensions
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM) [1-4] is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data [2,5,6]. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors [7,8]. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects [5,6]. Cross-validation within the derivation set can be performed using bootstrap resampling techniques [9]. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets [10]. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation [11]. These standardized values can in turn be used to assist in differential diagnosis [12,13] and to assess disease progression and treatment effects at the network level [7,14-16]. We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
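A minimal numpy sketch of the SSM-style decomposition described above (log conversion, mean centering, then PCA via SVD); this illustrates the core steps of the published procedure, not the authors' in-house software.

```python
import numpy as np

def ssm_patterns(images, n_patterns=2):
    """Core steps of a scaled-subprofile-model-style decomposition.

    images : (n_subjects, n_voxels) strictly positive image data
    Log conversion plus row/column mean centering remove global scalar
    effects; PCA (via SVD) then yields spatial patterns (GIS) and
    per-subject scores.
    """
    logd = np.log(images)
    srp = logd - logd.mean(axis=1, keepdims=True)  # remove subject global means
    srp -= srp.mean(axis=0, keepdims=True)         # remove group mean image
    u, s, vt = np.linalg.svd(srp, full_matrices=False)
    patterns = vt[:n_patterns]                     # spatial covariance patterns
    scores = (u * s)[:, :n_patterns]               # subject expression scores
    return patterns, scores

# Scores from several components can then enter logistic regression to
# form a single disease-related pattern, as described above.
```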
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Authors: Jason Bell, Edwin Dickinson, David R. Badcock, Frederick A. A. Kingdom.
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition is compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
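The difference between the two projections mentioned above is easy to state in code: an orthographic projection drops the depth coordinate, while a perspective projection divides by it. A minimal sketch (our notation, camera looking down +z):

```python
import numpy as np

def rotate_in_depth(points, angle):
    """Rotate n x 3 points about the vertical (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def orthographic(points):
    """Parallel projection: drop the depth coordinate."""
    return points[:, :2]

def perspective(points, f=5.0):
    """Pinhole projection: scale x and y by f / z."""
    return f * points[:, :2] / points[:, 2:3]

square = np.array([[-1, -1, 10], [1, -1, 10],
                   [1, 1, 10], [-1, 1, 10]], dtype=float)
rotated = rotate_in_depth(square, np.deg2rad(30))
print(orthographic(rotated))  # foreshortens x uniformly
print(perspective(rotated))   # the nearer edge also appears larger
```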
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data are analyzed in several complementary ways: voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics along the tracts defined by FT. Additionally, applying DTI methods longitudinally on an individual-subject basis, i.e. comparing differences in FA maps after stereotaxic alignment, reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
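FA, the metric compared voxelwise above, is computed from the eigenvalues of the diffusion tensor by the standard formula FA = sqrt(1/2) · sqrt(((λ1-λ2)² + (λ2-λ3)² + (λ3-λ1)²) / (λ1² + λ2² + λ3²)):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion tensor eigenvalues, shape (..., 3)."""
    l1, l2, l3 = evals[..., 0], evals[..., 1], evals[..., 2]
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(0.5 * num / den)

print(fractional_anisotropy(np.array([1.7e-3, 0.3e-3, 0.3e-3])))  # ~0.8, fiber-like
print(fractional_anisotropy(np.array([1.0e-3, 1.0e-3, 1.0e-3])))  # 0.0, isotropic
```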
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Authors: Tadd T. Truscott, Jesse Belden, Joseph R. Nielson, David J. Daily, Scott L. Thomson.
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instant, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture [1]. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
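A minimal sketch of the shift-and-average idea behind synthetic aperture refocusing (integer pixel shifts for brevity; real implementations use calibrated per-camera homographies):

```python
import numpy as np

def refocus(images, offsets, alpha):
    """Synthetic-aperture refocusing by shift-and-average.

    images  : list of (H, W) views from a planar camera array
    offsets : (n_cams, 2) camera positions in the array plane
    alpha   : focal-depth parameter (shift proportional to baseline/depth)
    Scene points on the chosen focal plane align and sum sharply;
    off-plane points, including occluders, blur away.
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        out += np.roll(img, shift, axis=(0, 1))
    return out / len(images)
```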
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal cords, bubbles, flow, fluids
Measuring Spatially- and Directionally-varying Light Scattering from Biological Material
Authors: Todd Alan Harvey, Kimberly S. Bostwick, Steve Marschner.
Institutions: Cornell University, Cornell University, Cornell University Museum of Vertebrates, Cornell University.
Light interacts with an organism's integument on a variety of spatial scales. For example in an iridescent bird: nano-scale structures produce color; the milli-scale structure of barbs and barbules largely determines the directional pattern of reflected light; and through the macro-scale spatial structure of overlapping, curved feathers, these directional effects create the visual texture. Milli-scale and macro-scale effects determine where on the organism's body, and from what viewpoints and under what illumination, the iridescent colors are seen. Thus, the highly directional flash of brilliant color from the iridescent throat of a hummingbird is inadequately explained by its nano-scale structure alone and questions remain. From a given observation point, which milli-scale elements of the feather are oriented to reflect strongly? Do some species produce broader "windows" for observation of iridescence than others? These and similar questions may be asked about any organisms that have evolved a particular surface appearance for signaling, camouflage, or other reasons. In order to study the directional patterns of light scattering from feathers, and their relationship to the bird's milli-scale morphology, we developed a protocol for measuring light scattered from biological materials using many high-resolution photographs taken with varying illumination and viewing directions. Since we measure scattered light as a function of direction, we can observe the characteristic features in the directional distribution of light scattered from that particular feather, and because barbs and barbules are resolved in our images, we can clearly attribute the directional features to these different milli-scale structures. Keeping the specimen intact preserves the gross-scale scattering behavior seen in nature. The method described here presents a generalized protocol for analyzing spatially- and directionally-varying light scattering from complex biological materials at multiple structural scales.
Biophysics, Issue 75, Molecular Biology, Biomedical Engineering, Physics, Computer Science, surface properties (nonmetallic materials), optical imaging devices (design and techniques), optical measuring instruments (design and techniques), light scattering, optical materials, optical properties, Optics, feathers, light scattering, reflectance, transmittance, color, iridescence, specular, diffuse, goniometer, C. cupreus, imaging, visualization
Tomato Analyzer: A Useful Software Application to Collect Accurate and Detailed Morphological and Colorimetric Data from Two-dimensional Objects
Authors: Gustavo R. Rodríguez, Jennifer B. Moyseenko, Matthew D. Robbins, Nancy Huarachi Morejón, David M. Francis, Esther van der Knaap.
Institutions: The Ohio State University.
Measuring fruit morphology and color traits of vegetable and fruit crops in an objective and reproducible way is important for detailed phenotypic analyses of these traits. Tomato Analyzer (TA) is a software program that measures 37 attributes related to two-dimensional shape in a semi-automatic and reproducible manner [1,2]. Many of these attributes, such as angles at the distal and proximal ends of the fruit and areas of indentation, are difficult to quantify manually. The attributes are organized in ten categories within the software: Basic Measurement, Fruit Shape Index, Blockiness, Homogeneity, Proximal Fruit End Shape, Distal Fruit End Shape, Asymmetry, Internal Eccentricity, Latitudinal Section and Morphometrics. The last category requires neither prior knowledge nor predetermined notions of the shape attributes, so morphometric analysis offers an unbiased option that may be better adapted to high-throughput analyses than attribute analysis. TA also offers the Color Test application, which was designed to collect color measurements from scanned images and allows scanning devices to be calibrated using color standards [3]. TA provides several options to export and analyze shape attribute, morphometric, and color data. The data may be exported to an Excel file in batch mode (more than 100 images at one time) or for individual images. The user can choose between an output that displays the average (and standard deviation) of each attribute for the objects in each image, and an output that displays the attribute values for each object in the image. TA has been a valuable and effective tool for identifying and confirming tomato fruit shape Quantitative Trait Loci (QTL), as well as for performing in-depth analyses of the effect of key fruit shape genes on plant morphology. TA can also be used to objectively classify fruit into various shape categories. Lastly, fruit shape and color traits in other plant species, as well as other plant organs such as leaves and seeds, can be evaluated with TA.
Plant Biology, Issue 37, morphology, color, image processing, quantitative trait loci, software
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data [1-4]. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices [5] or, analogously, the content of speech from activity in early auditory cortices [6]. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies [7,8], we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio [9,10], according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
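A minimal sketch of the decoding step common to MVPA studies of this kind: a linear classifier is cross-validated on per-trial activity patterns, and above-chance accuracy indicates content-specific information. Data here are synthetic placeholders, not the study's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic placeholders: per-trial activity patterns from an early
# sensory region (trials x voxels) and labels for the implied content
# (0 = shattering vase clip, 1 = howling dog clip).
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))
y = rng.integers(0, 2, size=80)

clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, cv=8)
print(acc.mean())  # ~0.5 for random data; reliably above chance would
                   # indicate content-specific activity patterns
```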
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: a midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained based on the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and the amount of blood in the CT scans, and other recorded features, such as age and injury severity score, are also incorporated to estimate ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, such as recommending for or against invasive ICP monitoring.
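As a hedged sketch of the first stage only (our simplification, not the published system, which additionally uses anatomical features and ventricle shape matching): the ideal midline can be approximated as the column about which a CT slice is most left-right symmetric.

```python
import numpy as np

def ideal_midline(slice_2d):
    """Approximate the ideal midline column of a CT slice by symmetry.

    Scores each candidate column by the correlation between the left
    half and the mirrored right half; returns the best-scoring column.
    """
    h, w = slice_2d.shape
    best_col, best_score = w // 2, -np.inf
    for c in range(w // 4, 3 * w // 4):          # search near the center
        half = min(c, w - c)
        left = slice_2d[:, c - half:c]
        right = np.fliplr(slice_2d[:, c:c + half])
        score = np.corrcoef(left.ravel(), right.ravel())[0, 1]
        if score > best_score:
            best_col, best_score = c, score
    return best_col
```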
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.