Pubmed Article
Monitoring powdery mildew of winter wheat by using moderate resolution multi-temporal satellite imagery.
PLoS ONE
PUBLISHED: 01-01-2014
Powdery mildew is one of the most serious diseases affecting the production of winter wheat. As an effective alternative to traditional sampling methods, remote sensing can be a useful tool for disease detection. This study attempted to use multi-temporal, moderate-resolution satellite-based data of surface reflectance in the blue (B), green (G), red (R) and near-infrared (NIR) bands from HJ-CCD (the CCD sensor on the Huanjing satellite) to monitor the disease at a regional scale. In a suburban area of Beijing, China, an extensive field campaign to survey disease intensity was conducted at key growth stages of winter wheat in 2010. Meanwhile, a corresponding time series of HJ-CCD images was acquired over the study area. In this study, a number of single-stage and multi-stage spectral features sensitive to powdery mildew were selected by using an independent t-test. With the selected spectral features, four methods: Mahalanobis distance, the maximum likelihood classifier, partial least squares regression (PLSR) and mixture tuned matched filtering (MTMF), were tested and evaluated for their performance in disease mapping. The experimental results showed that all four algorithms could generate disease maps with a generally correct distribution pattern of powdery mildew at the grain filling stage (Zadoks 72). However, when these disease maps were compared with ground survey data (validation samples), all four algorithms produced a variable degree of error in estimating disease occurrence and severity. Further, we found that integrating the MTMF and PLSR algorithms significantly improved the accuracy of identifying and determining disease intensity (overall accuracy increased from 72% to 78%, and the kappa coefficient from 0.49 to 0.59). The experimental results also demonstrated that multi-temporal satellite images have great potential for crop disease mapping at a regional scale.
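The feature-selection step described above, keeping only spectral features whose values differ significantly between healthy and diseased samples under an independent t-test, can be sketched as follows. This is a minimal illustration, not the authors' code; the feature names, sample values, and the critical t-value (two-tailed 5% level for 6 degrees of freedom) are assumptions.

```python
import math

def pooled_t(a, b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def select_features(healthy, diseased, names, t_crit=2.447):
    """Keep features whose group means differ significantly (|t| > t_crit)."""
    return [name for i, name in enumerate(names)
            if abs(pooled_t([r[i] for r in healthy],
                            [r[i] for r in diseased])) > t_crit]

# Hypothetical reflectance-derived features for 4 healthy and 4 diseased plots:
healthy = [(0.80, 0.10), (0.82, 0.11), (0.79, 0.10), (0.81, 0.11)]
diseased = [(0.50, 0.10), (0.52, 0.11), (0.49, 0.10), (0.51, 0.11)]
print(select_features(healthy, diseased, ["NDVI", "blue"]))  # ['NDVI']
```

Only the first feature separates the two groups, so only it survives the test; the selected features would then feed the classifiers.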
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Published: 12-09-2013
ABSTRACT
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we describe here the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe here the use of PAFP and PSFP expression to image two protein species in fixed cells.
Extension of the technique to living cells is also described.
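As a rough illustration of the localization step, a single-molecule spot can be reduced to a position estimate by an intensity-weighted centroid, and in the photon-limited case the attainable precision scales roughly as the PSF width divided by the square root of the detected photon count (camera noise and pixelation worsen this). This sketch is not the FPALM analysis software; the spot values and numbers are illustrative.

```python
import math

def localize_centroid(spot):
    """Position of a molecule as the intensity-weighted centroid (pixel units)."""
    total = sum(sum(row) for row in spot)
    x = sum(v * j for row in spot for j, v in enumerate(row)) / total
    y = sum(v * i for i, row in enumerate(spot) for v in row) / total
    return x, y

def precision_nm(psf_sigma_nm, n_photons):
    """Photon-limited localization precision ~ sigma / sqrt(N)."""
    return psf_sigma_nm / math.sqrt(n_photons)

# A symmetric 3x3 spot centered on pixel (1, 1):
spot = [[1, 2, 1],
        [2, 8, 2],
        [1, 2, 1]]
print(localize_centroid(spot))    # (1.0, 1.0)
print(precision_nm(125.0, 100))   # 12.5 nm, within the ~10-30 nm range quoted
```

With ~100 detected photons and a ~125 nm PSF width, this scaling lands in the ~10-30 nm precision range quoted above.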
Dual-mode Imaging of Cutaneous Tissue Oxygenation and Vascular Function
Authors: Ronald X. Xu, Kun Huang, Ruogu Qin, Jiwei Huang, Jeff S. Xu, Liya Ding, Urmila S. Gnyawali, Gayle M. Gordillo, Surya C. Gnyawali, Chandan K. Sen.
Institutions: The Ohio State University.
Accurate assessment of cutaneous tissue oxygenation and vascular function is important for appropriate detection, staging, and treatment of many health disorders such as chronic wounds. We report the development of a dual-mode imaging system for non-invasive and non-contact imaging of cutaneous tissue oxygenation and vascular function. The imaging system integrated an infrared camera, a CCD camera, a liquid crystal tunable filter and a high-intensity fiber light source. A LabVIEW interface was programmed for equipment control, synchronization, image acquisition, processing, and visualization. Multispectral images captured by the CCD camera were used to reconstruct the tissue oxygenation map. Dynamic thermographic images captured by the infrared camera were used to reconstruct the vascular function map. Cutaneous tissue oxygenation and vascular function images were co-registered through fiduciary markers. The performance characteristics of the dual-mode imaging system were tested in humans.
Medicine, Issue 46, Dual-mode, multispectral imaging, infrared imaging, cutaneous tissue oxygenation, vascular function, co-registration, wound healing
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Authors: Cila Herman, Muge Pirtini Cetingul.
Institutions: The Johns Hopkins University.
In 2010, approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death [1]. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection [2,3]. Considering the large number of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost. Details about these techniques and comparisons are available in the literature [4]. Infrared (IR) imaging has been shown to be a useful method for diagnosing the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in body temperature, which in turn affect the temperature of the skin [5,6]. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular deviations from normal conditions, often caused by disease.
However, IR imaging has not been widely recognized in medicine due to the premature use of the technology [7,8] several decades ago, when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image-processing tools were unavailable. This situation changed dramatically in the late 1990s and 2000s. Advances in IR instrumentation, the implementation of digital image processing algorithms, and dynamic IR imaging, which enables scientists to analyze not only the spatial but also the temporal thermal behavior of the skin [9], allowed breakthroughs in the field. In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma [10-13]. In this study, we show data obtained in a patient study in which patients with a pigmented lesion and a clinical indication for biopsy were selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We concluded that the increased metabolic activity of a melanoma lesion can be detected by dynamic infrared imaging.
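The dynamic measurement compares how lesion and healthy skin rewarm after a brief cooling stress; a hypermetabolic lesion returns toward baseline faster. A toy single-exponential recovery model makes the comparison concrete. This is an assumption for illustration only; the thermal models used in the study are more detailed, and all numbers here are made up.

```python
import math

def skin_temp(t_s, t_baseline=33.0, cooling_drop=5.0, tau_s=40.0):
    """Skin temperature (deg C) during exponential recovery after cooling."""
    return t_baseline - cooling_drop * math.exp(-t_s / tau_s)

# A lesion with higher metabolic heat is modeled with a shorter time constant:
healthy = skin_temp(30.0, tau_s=40.0)
lesion = skin_temp(30.0, tau_s=25.0)
print(lesion > healthy)  # True: the lesion is warmer at the same instant
```

Thresholding such a temperature difference over the recovery transient is the intuition behind distinguishing malignant from healthy tissue in the dynamic IR data.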
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission [1-3]. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their cooperation greatly affects the quality of the data [4]. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data.
We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation [5] involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping [6] and density volume rendering [7]. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
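Two of the activity-space characteristics mentioned above, an activity radius and a simple density surface, can be computed directly from a cleaned track. The sketch below assumes planar coordinates (e.g. projected meters) and is an illustration, not the ArcGIS toolset described in the article.

```python
import math
from collections import Counter

def activity_radius(track):
    """Maximum distance of any GPS fix from the track's mean center."""
    cx = sum(p[0] for p in track) / len(track)
    cy = sum(p[1] for p in track) / len(track)
    return max(math.hypot(x - cx, y - cy) for x, y in track)

def grid_density(track, cell=1.0):
    """Count fixes per grid cell: a crude density surface."""
    return Counter((int(x // cell), int(y // cell)) for x, y in track)

track = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0), (1.0, 1.0)]
print(activity_radius(track))  # sqrt(2) ~ 1.414
```

Kernel density estimation, as used for the density surfaces and volumes in the article, replaces the per-cell count with a smoothed sum of kernels, but the per-cell histogram above conveys the idea.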
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Quantitative Optical Microscopy: Measurement of Cellular Biophysical Features with a Standard Optical Microscope
Authors: Kevin G. Phillips, Sandra M. Baker-Groberg, Owen J.T. McCarty.
Institutions: Oregon Health & Science University, School of Medicine.
We describe the use of a standard optical microscope to perform quantitative measurements of mass, volume, and density on cellular specimens through a combination of bright field and differential interference contrast (DIC) imagery. Two primary approaches are presented: noninterferometric quantitative phase microscopy (NIQPM), to perform measurements of total cell mass and subcellular density distribution, and Hilbert transform differential interference contrast microscopy (HTDIC), to determine volume. NIQPM is based on a simplified model of wave propagation, termed the paraxial approximation, with three underlying assumptions: low numerical aperture (NA) illumination, weak scattering, and weak absorption of light by the specimen. Fortunately, unstained cellular specimens satisfy these assumptions, and low-NA illumination is easily achieved on commercial microscopes. HTDIC is used to obtain volumetric information from through-focus DIC imagery under high-NA illumination conditions. High-NA illumination enables enhanced sectioning of the specimen along the optical axis. Hilbert transform processing of the DIC image stacks greatly enhances edge-detection algorithms for localization of the specimen borders in three dimensions by separating the gray values of the specimen intensity from those of the background. The primary advantages of NIQPM and HTDIC lie in their technological accessibility using “off-the-shelf” microscopes. There are two basic limitations of these methods: slow z-stack acquisition time on commercial scopes currently precludes the investigation of phenomena faster than 1 frame/min, and diffraction effects restrict the utility of NIQPM and HTDIC to objects from 0.2 up to 10 (NIQPM) and 20 (HTDIC) μm in diameter, respectively. Hence, the specimen and its associated time dynamics of interest must meet certain size and temporal constraints to enable the use of these methods.
Excitingly, most fixed cellular specimens are readily investigated with these methods.
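The Hilbert-transform step can be illustrated on a 1D intensity profile: the DIC response across an edge is bipolar (a bright/dark pair), and taking the magnitude of the analytic signal converts it into a unipolar envelope that is easy to threshold against the background. Below is a minimal FFT-based version, an illustration rather than the authors' processing pipeline.

```python
import numpy as np

def analytic_envelope(line):
    """Magnitude of the analytic signal (Hilbert envelope) of a 1D profile."""
    n = len(line)
    spectrum = np.fft.fft(line)
    h = np.zeros(n)
    h[0] = 1.0                  # keep the DC term
    if n % 2 == 0:
        h[n // 2] = 1.0         # keep the Nyquist term
        h[1:n // 2] = 2.0       # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spectrum * h))

# A pure oscillation (a bipolar fringe pattern) has a flat envelope of 1:
t = np.arange(64)
env = analytic_envelope(np.cos(2 * np.pi * 4 * t / 64))
print(np.allclose(env, 1.0))  # True
```

On a DIC z-stack the same operation is applied line by line (or along the shear direction), after which the specimen borders stand out as regions of high envelope against a low-envelope background.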
Bioengineering, Issue 86, Label-free optics, quantitative microscopy, cellular biophysics, cell mass, cell volume, cell density
Isolation, Culture, and Transplantation of Muscle Satellite Cells
Authors: Norio Motohashi, Yoko Asakura, Atsushi Asakura.
Institutions: University of Minnesota Medical School.
Muscle satellite cells are a stem cell population required for postnatal skeletal muscle development and regeneration, accounting for 2-5% of sublaminal nuclei in muscle fibers. In adult muscle, satellite cells are normally mitotically quiescent. Following injury, however, satellite cells initiate cellular proliferation to produce their progeny, myoblasts, to mediate the regeneration of muscle. Transplantation of satellite cell-derived myoblasts has been widely studied as a possible therapy for several regenerative diseases including muscular dystrophy, heart failure, and urological dysfunction. Myoblast transplantation into dystrophic skeletal muscle, infarcted heart, and dysfunctional urinary ducts has shown that engrafted myoblasts can differentiate into muscle fibers in the host tissues and produce partial functional improvement in these diseases. Therefore, the development of efficient methods for purifying quiescent satellite cells from skeletal muscle, as well as the establishment of satellite cell-derived myoblast cultures and transplantation methods for myoblasts, is essential for understanding the molecular mechanisms behind satellite cell self-renewal, activation, and differentiation. Additionally, the development of cell-based therapies for muscular dystrophy and other regenerative diseases is also dependent upon these factors. However, current prospective purification methods for quiescent satellite cells require the use of expensive fluorescence-activated cell sorting (FACS) machines. Here, we present a new method for the rapid, economical, and reliable purification of quiescent satellite cells from adult mouse skeletal muscle by enzymatic dissociation followed by magnetic-activated cell sorting (MACS). Following isolation of pure quiescent satellite cells, these cells can be cultured to obtain large numbers of myoblasts after several passages.
These freshly isolated quiescent satellite cells or ex vivo expanded myoblasts can be transplanted into cardiotoxin (CTX)-induced regenerating mouse skeletal muscle to examine the contribution of donor-derived cells to regenerating muscle fibers, as well as to satellite cell compartments for the examination of self-renewal activities.
Cellular Biology, Issue 86, skeletal muscle, muscle stem cell, satellite cell, regeneration, myoblast transplantation, muscular dystrophy, self-renewal, differentiation, myogenesis
Evaluation of Integrated Anaerobic Digestion and Hydrothermal Carbonization for Bioenergy Production
Authors: M. Toufiq Reza, Maja Werner, Marcel Pohl, Jan Mumme.
Institutions: Leibniz Institute for Agricultural Engineering.
Lignocellulosic biomass is one of the most abundant yet underutilized renewable energy resources. Both anaerobic digestion (AD) and hydrothermal carbonization (HTC) are promising technologies for bioenergy production from biomass, in the form of biogas and HTC biochar, respectively. In this study, the combination of AD and HTC is proposed to increase overall bioenergy production. Wheat straw was anaerobically digested in a novel upflow anaerobic solid state reactor (UASS) under both mesophilic (37 °C) and thermophilic (55 °C) conditions. Wet digestate from thermophilic AD was hydrothermally carbonized at 230 °C for 6 hr for HTC biochar production. At the thermophilic temperature, the UASS system yielded an average of 165 L CH4/kgVS (VS: volatile solids), and 121 L CH4/kgVS under mesophilic AD, over 200 days of continuous operation. Meanwhile, 43.4 g of HTC biochar with a calorific value of 29.6 MJ/kg (dry biochar) was obtained from HTC of 1 kg of digestate (dry basis) from mesophilic AD. The combination of AD and HTC, in this particular set of experiments, yielded 13.2 MJ of energy per 1 kg of dry wheat straw, which is at least 20% higher than HTC alone and 60.2% higher than AD alone.
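The combined-energy bookkeeping works by converting each product stream to MJ per kg of dry straw and summing. The sketch below shows the form of the calculation only: the methane heating value (~35.8 MJ/m³, i.e. 0.0358 MJ/L) is a standard figure, but the per-kg-straw methane and biochar yields passed in are placeholders, since the abstract reports yields per kg VS and per kg digestate rather than per kg straw.

```python
CH4_MJ_PER_L = 0.0358  # lower heating value of methane, ~35.8 MJ/m^3

def combined_energy_mj(ch4_l_per_kg_straw, biochar_kg_per_kg_straw,
                       biochar_mj_per_kg=29.6):
    """Total recovered energy per kg dry straw: AD biogas plus HTC biochar."""
    biogas = ch4_l_per_kg_straw * CH4_MJ_PER_L
    biochar = biochar_kg_per_kg_straw * biochar_mj_per_kg
    return biogas + biochar

# Placeholder yields (NOT the paper's mass balance):
print(combined_energy_mj(150.0, 0.10))  # 150*0.0358 + 0.10*29.6 = 8.33 MJ
```

With the study's full mass balance (VS fraction of the straw and digestate recovered per kg straw) the same form reproduces the per-straw total the authors report.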
Environmental Sciences, Issue 88, Biomethane, Hydrothermal Carbonization (HTC), Calorific Value, Lignocellulosic Biomass, UASS, Anaerobic Digestion
Simultaneous Long-term Recordings at Two Neuronal Processing Stages in Behaving Honeybees
Authors: Martin Fritz Brill, Maren Reuter, Wolfgang Rössler, Martin Fritz Strube-Bloss.
Institutions: University of Würzburg.
In both mammals and insects, neuronal information is processed in different higher and lower order brain centers. These centers are coupled via convergent and divergent anatomical connections, including feedforward and feedback wiring. Furthermore, information of the same origin is partially sent via parallel pathways to different, and sometimes into the same, brain areas. To understand the evolutionary benefits as well as the computational advantages of these wiring strategies, and especially their temporal dependencies on each other, it is necessary to have simultaneous access to single neurons of different tracts or neuropils in the same preparation at high temporal resolution. Here we concentrate on honeybees by demonstrating unique extracellular long-term access for recording multi-unit activity at two subsequent neuropils [1], the antennal lobe (AL), the first olfactory processing stage, and the mushroom body (MB), a higher order integration center involved in learning and memory formation, or at two parallel neuronal tracts [2] connecting the AL with the MB. The latter was chosen as an example and will be described in full. The supporting video demonstrates the construction and permanent insertion of flexible multi-channel wire electrodes. Pairwise differential amplification of the microwire electrode channels drastically reduces noise and verifies that the source of the signal is closely related to the position of the electrode tip. The mechanical flexibility of the wire electrodes used allows stable invasive long-term recordings over many hours, up to days, which is a clear advantage compared to conventional extra- and intracellular in vivo recording techniques.
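The pairwise differential amplification mentioned above can be emulated numerically: subtracting two adjacent wire channels cancels noise common to both electrodes while retaining spike-like signals local to one tip. A toy illustration with assumed sample values, not recorded data:

```python
def differential(ch_a, ch_b):
    """Channel difference: removes common-mode noise shared by both wires."""
    return [a - b for a, b in zip(ch_a, ch_b)]

common_noise = [0.5, -0.3, 0.8, -0.1]   # picked up identically by both wires
spike = [0.0, 2.0, 0.0, 0.0]            # unit activity near electrode A only
ch_a = [n + s for n, s in zip(common_noise, spike)]
ch_b = common_noise[:]
print(differential(ch_a, ch_b))  # spike survives, common noise cancels
```

Because only activity local to one tip survives the subtraction, the differential signal also confirms that detected units originate close to the electrode tip, as the abstract notes.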
Neuroscience, Issue 89, honeybee brain, olfaction, extracellular long term recordings, double recordings, differential wire electrodes, single unit, multi-unit recordings
Adult and Embryonic Skeletal Muscle Microexplant Culture and Isolation of Skeletal Muscle Stem Cells
Authors: Deborah Merrick, Hung-Chih Chen, Dean Larner, Janet Smith.
Institutions: University of Birmingham.
Cultured embryonic and adult skeletal muscle cells have a number of different uses. The micro-dissected explant technique described in this chapter is a robust and reliable method for isolating relatively large numbers of proliferative skeletal muscle cells from juvenile, adult or embryonic muscles as a source of skeletal muscle stem cells. The authors have used micro-dissected explant cultures to analyse the growth characteristics of skeletal muscle cells in wild-type and dystrophic muscles. Each of the components of tissue growth, namely cell survival, proliferation, senescence and differentiation, can be analysed separately using the methods described here. The net effect of all components of growth can be established by measuring explant outgrowth rates. The micro-explant method can be used to establish primary cultures from a wide range of different muscle types and ages and, as described here, has been adapted by the authors to enable the isolation of embryonic skeletal muscle precursors. Uniquely, micro-explant cultures have been used to derive clonal (single cell origin) skeletal muscle stem cell (SMSc) lines which can be expanded and used for in vivo transplantation. In vivo transplanted SMSc behave as functional, tissue-specific satellite cells which contribute to skeletal muscle fibre regeneration but which are also retained (in the satellite cell niche) as a small pool of undifferentiated stem cells which can be re-isolated into culture using the micro-explant method.
Cellular Biology, Issue 43, Skeletal muscle stem cell, embryonic tissue culture, apoptosis, growth factor, proliferation, myoblast, myogenesis, satellite cell, skeletal muscle differentiation, muscular dystrophy
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
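The release rate from a core incubation is computed from the change in water-column P over the incubation, scaled by the overlying water volume and the core's sediment surface area. A minimal version of that arithmetic is below; the variable names and example numbers are illustrative, not the study's data.

```python
def p_release_rate(c0_ug_l, ct_ug_l, volume_l, core_area_m2, days):
    """Areal P flux (mg m^-2 d^-1) from a sediment core incubation."""
    delta_p_mg = (ct_ug_l - c0_ug_l) * volume_l / 1000.0  # ug/L * L -> mg
    return delta_p_mg / (core_area_m2 * days)

# Overlying water rises from 10 to 110 ug/L in 10 days over a 50 cm^2 core:
print(p_release_rate(10.0, 110.0, 1.0, 0.005, 10.0))  # 2.0 mg m^-2 d^-1
```

Multiplying an areal rate like this by lake sediment area and the duration of anoxic (or oxic) conditions is how core-derived rates are scaled to an annual internal P load.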
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Preparation of Primary Myogenic Precursor Cell/Myoblast Cultures from Basal Vertebrate Lineages
Authors: Jacob Michael Froehlich, Iban Seiliez, Jean-Charles Gabillard, Peggy R. Biga.
Institutions: University of Alabama at Birmingham, INRA UR1067, INRA UR1037.
Due to the inherent difficulty and time involved with studying the myogenic program in vivo, primary culture systems derived from the resident adult stem cells of skeletal muscle, the myogenic precursor cells (MPCs), have proven indispensable to our understanding of mammalian skeletal muscle development and growth. Particularly among the basal taxa of Vertebrata, however, data are limited describing the molecular mechanisms controlling the self-renewal, proliferation, and differentiation of MPCs. Of particular interest are potential mechanisms that underlie the ability of basal vertebrates to undergo considerable postlarval skeletal myofiber hyperplasia (i.e. teleost fish) and full regeneration following appendage loss (i.e. urodele amphibians). Additionally, the use of cultured myoblasts could aid in the understanding of regeneration, the recapitulation of the myogenic program, and the differences between them. To this end, we describe in detail a robust and efficient protocol (and variations therein) for isolating and maintaining MPCs and their progeny, myoblasts and immature myotubes, in cell culture as a platform for understanding the evolution of the myogenic program, beginning with the more basal vertebrates. Capitalizing on the model organism status of the zebrafish (Danio rerio), we report on the application of this protocol to small fishes of the cyprinid clade Danioninae. In tandem, this protocol can be utilized to realize a broader comparative approach by isolating MPCs from the Mexican axolotl (Ambystoma mexicanum) and even laboratory rodents. This protocol is now widely used in studying myogenesis in several fish species, including rainbow trout, salmon, and sea bream [1-4].
Basic Protocol, Issue 86, myogenesis, zebrafish, myoblast, cell culture, giant danio, moustached danio, myotubes, proliferation, differentiation, Danioninae, axolotl
Isolation and Culture of Individual Myofibers and their Satellite Cells from Adult Skeletal Muscle
Authors: Alessandra Pasut, Andrew E. Jones, Michael A. Rudnicki.
Institutions: Ottawa Hospital Research Institute, University of Ottawa .
Muscle regeneration in the adult is performed by resident stem cells called satellite cells. Satellite cells are defined by their position between the basal lamina and the sarcolemma of each myofiber. Current knowledge of their behavior relies heavily on the use of the single myofiber isolation protocol. In 1985, Bischoff described a protocol to isolate single live fibers from the Flexor Digitorum Brevis (FDB) of adult rats with the goal of creating an in vitro system in which the physical association between the myofiber and its stem cells is preserved [1]. In 1995, Rosenblatt modified the Bischoff protocol such that myofibers are singly picked and handled separately after collagenase digestion instead of being isolated by gravity sedimentation [2,3]. The Rosenblatt or Bischoff protocol has since been adapted to different muscles, ages or conditions [3-6]. The single myofiber isolation technique is an indispensable tool due to its unique advantages. First, in the single myofiber protocol, satellite cells are maintained beneath the basal lamina. This is a unique feature of the protocol, as other techniques such as fluorescence-activated cell sorting require chemical and mechanical tissue dissociation [7]. Although the myofiber culture system cannot substitute for in vivo studies, it does offer an excellent platform to address relevant biological properties of muscle stem cells. Single myofibers can be cultured in standard plating conditions or in floating conditions. Satellite cells on floating myofibers are subjected to virtually no influence other than the myofiber environment. Substrate stiffness and coating have been shown to influence satellite cells' ability to regenerate muscles [8,9], so being able to control each of these factors independently allows discrimination between niche-dependent and -independent responses. Different concentrations of serum have also been shown to have an effect on the transition from quiescence to activation.
To preserve the quiescent state of its associated satellite cells, fibers should be kept in low-serum medium [1-3]. This is particularly useful when studying genes involved in the quiescent state. In serum-rich medium, satellite cells quickly activate, proliferate, migrate and differentiate, thus mimicking the in vivo regenerative process [1-3]. The system can be used to perform a variety of assays, such as the testing of chemical inhibitors, ectopic expression of genes by virus delivery, oligonucleotide-based gene knock-down, or live imaging. This video article describes the protocol currently used in our laboratory to isolate single myofibers from the Extensor Digitorum Longus (EDL) muscle of adult mice (6-8 weeks old).
Stem Cell Biology, Issue 73, Cellular Biology, Molecular Biology, Medicine, Biomedical Engineering, Bioengineering, Physiology, Anatomy, Tissue Engineering, Stem Cells, Myoblasts, Skeletal, Satellite Cells, Skeletal Muscle, Muscular Dystrophy, Duchenne, Tissue Culture Techniques, Muscle regeneration, Pax7, isolation and culture of isolated myofibers, muscles, myofiber, immunostaining, cell culture, hindlimb, mouse, animal model
Assessment of Cardiac Function and Myocardial Morphology Using Small Animal Look-locker Inversion Recovery (SALLI) MRI in Rats
Authors: Sarah Jeuthe, Darach O H-Ici, Ulrich Kemnitz, Thore Dietrich, Bernhard Schnackenburg, Felix Berger, Titus Kuehne, Daniel Messroghli.
Institutions: German Heart Institute Berlin; Hamburg, Germany.
Small animal magnetic resonance imaging is an important tool to study cardiac function and changes in myocardial tissue. The high heart rates of small animals (200 to 600 beats/min) have previously limited the role of cardiovascular MR (CMR) imaging. Small animal Look-Locker inversion recovery (SALLI) is a T1 mapping sequence for small animals developed to overcome this problem [1]. T1 maps provide quantitative information about tissue alterations and contrast agent kinetics. It is also possible to detect diffuse myocardial processes such as interstitial fibrosis or edema [1-6]. Furthermore, from a single set of image data, it is possible to examine heart function and myocardial scarring by generating cine and inversion recovery-prepared late gadolinium enhancement-type MR images [1]. The presented video shows the procedures to perform small animal CMR imaging step by step. Here it is demonstrated with a healthy Sprague-Dawley rat; naturally, it can be extended to different cardiac small animal models.
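Look-Locker sequences sample during an apparent recovery with time constant T1*, which underestimates the true T1; the standard three-parameter correction is T1 = T1* · (B/A − 1), where the sampled recovery is fit as S(t) = A − B·exp(−t/T1*). The sketch below shows that correction only; it is not the SALLI implementation, and the fit values are illustrative.

```python
def corrected_t1(a, b, t1_star):
    """Look-Locker T1 correction for a fit S(t) = A - B*exp(-t/T1*)."""
    return t1_star * (b / a - 1.0)

# With perfect inversion (B ~ 2A) the correction leaves T1 = T1*:
print(corrected_t1(100.0, 200.0, 900.0))  # 900.0 ms
# With B < 2A (readout-driven apparent recovery), true T1 < would-be value:
print(corrected_t1(100.0, 180.0, 800.0))  # 640.0 ms
```

Applying this per pixel to the fitted A, B, and T1* maps yields the quantitative T1 map from which tissue alterations and contrast kinetics are read.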
Medicine, Issue 77, Biomedical Engineering, Anatomy, Physiology, Cardiology, Heart Diseases, Cardiomyopathies, Heart Failure, Diagnostic Imaging, Cardiac Imaging Techniques, Magnetic Resonance Imaging, MRI, Cardiovascular Diseases, small animal imaging, T1 mapping, heart disease, cardiac function, myocardium, rat, animal model
50397
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
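The core computation behind informational connectivity can be conveyed with a toy sketch (this is not the authors' implementation; the discriminability measure, variable names, and simulated data below are all invented for illustration): compute, for each region, a per-timepoint measure of how well the multi-voxel pattern matches its own condition's prototype, then correlate those discriminability time courses between regions.

```python
import numpy as np

def mvp_discriminability(patterns, labels):
    """Per-timepoint evidence that a region's multi-voxel pattern matches
    its own condition's prototype better than the other condition's.
    patterns: (timepoints, voxels); labels: (timepoints,) of 0/1."""
    protos = np.array([patterns[labels == c].mean(axis=0) for c in (0, 1)])
    # z-score patterns and prototypes so the dot product is a correlation
    z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
    pz = (protos - protos.mean(1, keepdims=True)) / protos.std(1, keepdims=True)
    r = z @ pz.T / patterns.shape[1]              # pattern-prototype correlations
    idx = np.arange(len(labels))
    return r[idx, labels] - r[idx, 1 - labels]    # own minus other condition

# simulate two regions whose condition coding waxes and wanes together
rng = np.random.default_rng(0)
t, v = 200, 50
labels = rng.integers(0, 2, t)
strength = np.abs(rng.normal(1.0, 0.5, t))        # shared coding strength over time
sig_a, sig_b = rng.normal(size=v), rng.normal(size=v)
sign = np.where(labels == 1, 1.0, -1.0)[:, None]
region_a = strength[:, None] * sign * sig_a + rng.normal(size=(t, v))
region_b = strength[:, None] * sign * sig_b + rng.normal(size=(t, v))

# informational connectivity: correlation of the two discriminability series
ic = np.corrcoef(mvp_discriminability(region_a, labels),
                 mvp_discriminability(region_b, labels))[0, 1]
```

Because the two simulated regions share fluctuations in coding strength but not in mean activation, an IC analysis picks up the coupling even though their raw time courses are uncorrelated.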
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
51226
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
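The first analysis step, estimating local tissue orientation with a Gabor filter bank, can be sketched as follows (an illustrative toy, not the authors' code; the filter parameters and test image are invented): convolve the image with real Gabor kernels at several orientations and keep, at each pixel, the orientation giving the strongest response.

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=4.0, wavelength=8.0):
    """Real Gabor filter tuned to linear structures at angle theta."""
    ax = np.arange(ksize) - ksize // 2
    x, y = np.meshgrid(ax, ax)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * yr / wavelength)

def orientation_field(img, n_orient=8):
    """Dominant local orientation from a Gabor bank (FFT-based convolution)."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    F = np.fft.fft2(img)
    responses = []
    for th in thetas:
        K = np.fft.fft2(gabor_kernel(th), s=img.shape)   # zero-padded kernel
        responses.append(np.abs(np.fft.ifft2(F * K)))
    responses = np.stack(responses)
    return thetas[np.argmax(responses, axis=0)], responses.max(axis=0)

# toy image: horizontal stripes, i.e. structure oriented along x (theta = 0)
y = np.arange(64)
img = np.tile(np.cos(2 * np.pi * y / 8.0)[:, None], (1, 64))
theta_map, magnitude = orientation_field(img)
frac_horizontal = np.mean(theta_map == 0.0)
```

The node-detection stage of the paper then fits phase portraits to such an orientation field; the sketch stops at the orientation map itself.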
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
50341
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by combining whole brain-based and tract-based DTI analysis.
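Fractional anisotropy, the voxelwise metric referred to above, is computed from the three eigenvalues of the fitted diffusion tensor. A minimal sketch of the standard formula (not tied to any particular software package; the example eigenvalues are typical illustrative magnitudes in mm²/s):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues: the normalized
    dispersion of the eigenvalues, ranging from 0 (isotropic) to 1."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                       # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return float(np.sqrt(1.5 * num / den))

fa_iso   = fractional_anisotropy((1.0e-3, 1.0e-3, 1.0e-3))  # free-water-like voxel
fa_fiber = fractional_anisotropy((1.7e-3, 0.2e-3, 0.2e-3))  # coherent WM-like voxel
```

An isotropic voxel gives FA of 0, while a strongly linear diffusion profile approaches 1, which is why FA is a convenient scalar for voxelwise and tractwise group statistics.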
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
50427
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. 
We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
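The algebraic core of the SSM can be sketched in a few lines (an illustrative reimplementation under simplifying assumptions, not the in-house software; the simulated data and names are invented): log-transform the positive image data, remove each subject's global mean and the group-mean profile, and extract principal components of the residuals as spatial patterns with per-subject scores.

```python
import numpy as np

def ssm(images, n_components=2):
    """Scaled subprofile model sketch. images: (subjects, voxels), positive.
    Returns per-subject scores and spatial covariance patterns (GIS)."""
    logd = np.log(images)
    logd -= logd.mean(axis=1, keepdims=True)       # remove global scaling per subject
    srp = logd - logd.mean(axis=0, keepdims=True)  # remove group mean profile
    u, s, vt = np.linalg.svd(srp, full_matrices=False)
    return u[:, :n_components] * s[:n_components], vt[:n_components]

# toy data: each subject expresses one spatial network to a varying degree
rng = np.random.default_rng(2)
n_sub, n_vox = 20, 100
network = rng.normal(size=n_vox)
expression = rng.normal(size=n_sub)                # per-subject network expression
images = np.exp(expression[:, None] * network * 0.1
                + rng.normal(scale=0.02, size=(n_sub, n_vox)))

scores, patterns = ssm(images)
# the first component's subject scores should track the simulated expression
recovered = abs(np.corrcoef(scores[:, 0], expression)[0, 1])
```

In the full methodology these scores are the pattern expression values that are entered into logistic regression and prospectively evaluated in validation datasets.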
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
50319
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
51216
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
51673
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
4375
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
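Source analysis with minimum-norm estimation, as used above, reduces to a regularized linear inverse of the head model's leadfield matrix. A toy sketch with a random leadfield (illustrative only, not the lab's pipeline; channel and source counts are arbitrary):

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, lam=1e-2):
    """Regularized minimum-norm inverse: s_hat = L^T (L L^T + lam*I)^-1 y,
    the smallest-norm source distribution consistent with the sensor data."""
    L = leadfield
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, eeg)

# toy problem: 16 channels, 100 candidate cortical sources, one truly active
rng = np.random.default_rng(1)
L = rng.normal(size=(16, 100))
s_true = np.zeros(100)
s_true[7] = 1.0
y = L @ s_true                                    # noiseless sensor data
s_hat = minimum_norm_estimate(L, y)
residual = np.linalg.norm(L @ s_hat - y) / np.linalg.norm(y)
```

The estimate explains the sensor data almost exactly while spreading activity over many sources, which is the characteristic smoothness of minimum-norm solutions; age-appropriate head models change the leadfield L, not the inverse formula.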
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. Leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability is highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
50887
Time Multiplexing Super Resolving Technique for Imaging from a Moving Platform
Authors: Asaf Ilovitsh, Shlomo Zach, Zeev Zalevsky.
Institutions: Bar-Ilan University, Kfar Saba, Israel.
We propose a method for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on a moving imaging platform, such as an airborne platform or satellite. The resolution improvement is obtained in a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton based algorithm. The retrieved phase allows the field to be numerically back-propagated to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The optical fields obtained at the aperture plane are combined, and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called synthetic aperture radar (SAR), in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated in a laboratory experiment.
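The phase-retrieval step builds on Gerchberg-Saxton iteration. The classic two-plane variant below conveys the idea (a simplified stand-in for the paper's defocus-based algorithm; the data are synthetic): alternate between two Fourier-conjugate planes, keeping the computed phase while enforcing the measured amplitude in each.

```python
import numpy as np

def gerchberg_saxton(near_amp, far_amp, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton: iterate between near and far planes,
    imposing the measured amplitude in each while keeping the phase."""
    rng = np.random.default_rng(seed)
    field = near_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, near_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = far_amp * np.exp(1j * np.angle(far))       # impose far-field amplitude
        field = np.fft.ifft2(far)
        field = near_amp * np.exp(1j * np.angle(field))  # impose near-field amplitude
    return field

def far_error(field, far_amp):
    """Mean mismatch of the far-field amplitude, relative to its mean."""
    return np.abs(np.abs(np.fft.fft2(field)) - far_amp).mean() / far_amp.mean()

# synthetic ground truth: a unit-amplitude field with random phase
rng = np.random.default_rng(3)
truth = np.exp(1j * rng.uniform(0, 2 * np.pi, (32, 32)))
near_amp = np.abs(truth)
far_amp = np.abs(np.fft.fft2(truth))

err_start = far_error(gerchberg_saxton(near_amp, far_amp, n_iter=0), far_amp)
rec = gerchberg_saxton(near_amp, far_amp, n_iter=200)
err_final = far_error(rec, far_amp)
```

The amplitude-mismatch error is non-increasing over GS iterations, so the reconstruction ends consistent with both measured amplitudes; in the paper's setting the recovered complex field is what gets back-propagated to the aperture plane.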
Physics, Issue 84, Superresolution, Fourier optics, Remote Sensing and Sensors, Digital Image Processing, optics, resolution
51148
Lensless Fluorescent Microscopy on a Chip
Authors: Ahmet F. Coskun, Ting-Wei Su, Ikbal Sencan, Aydogan Ozcan.
Institutions: University of California, Los Angeles .
On-chip lensless imaging in general aims to replace bulky lens-based optical microscopes with simpler and more compact designs, especially for high-throughput screening applications. This emerging technology platform has the potential to eliminate the need for bulky and/or costly optical components through the help of novel theories and digital reconstruction algorithms. Along the same lines, here we demonstrate an on-chip fluorescent microscopy modality that can achieve e.g., <4 μm spatial resolution over an ultra-wide field-of-view (FOV) of >0.6-8 cm2 without the use of any lenses, mechanical scanning or thin-film based interference filters. In this technique, fluorescent excitation is achieved through a prism or hemispherical-glass interface illuminated by an incoherent source. After interacting with the entire object volume, this excitation light is rejected by the total internal reflection (TIR) process occurring at the bottom of the sample microfluidic chip. The fluorescent emission from the excited objects is then collected by a fiber-optic faceplate or a taper and is delivered to an optoelectronic sensor array such as a charge-coupled device (CCD). By using a compressive-sampling based decoding algorithm, the acquired lensfree raw fluorescent images of the sample can be rapidly processed to yield e.g., <4 μm resolution over an FOV of >0.6-8 cm2. Moreover, vertically stacked micro-channels that are separated by e.g., 50-100 μm can also be successfully imaged using the same lensfree on-chip microscopy platform, which further increases the overall throughput of this modality. This compact on-chip fluorescent imaging platform, with a rapid compressive decoder behind it, could be rather valuable for high-throughput cytometry, rare-cell research and microarray analysis.
Bioengineering, Issue 54, Lensless Microscopy, Fluorescent On-chip Imaging, Wide-field Microscopy, On-Chip Cytometry, Compressive Sampling/Sensing
3181
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
50893
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Authors: Tadd T. Truscott, Jesse Belden, Joseph R. Nielson, David J. Daily, Scott L. Thomson.
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
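The shift-and-average idea behind synthetic aperture refocusing can be shown in a toy pinhole model (illustrative only; the geometry, disparity model, and test scene are simplified assumptions, not the authors' calibrated setup): undo each camera's parallax shift for a chosen depth and average, so objects at that depth align and stay sharp while off-depth objects blur out.

```python
import numpy as np

def sa_refocus(images, baselines, depth):
    """Synthetic-aperture refocus: shift each camera image by its parallax
    at `depth` (toy pinhole model: disparity = baseline / depth, in pixels),
    then average the aligned stack."""
    stack = [np.roll(img, (-round(by / depth), -round(bx / depth)), axis=(0, 1))
             for img, (bx, by) in zip(images, baselines)]
    return np.mean(stack, axis=0)

# a point source at depth 2 seen by a 3-camera linear array
baselines = [(-4, 0), (0, 0), (4, 0)]     # (bx, by) per camera, in pixels
x0 = y0 = 16
images = []
for bx, by in baselines:
    img = np.zeros((32, 32))
    img[y0 + by // 2, x0 + bx // 2] = 1.0  # projected with disparity at depth 2
    images.append(img)

in_focus  = sa_refocus(images, baselines, depth=2.0)   # correct depth
out_focus = sa_refocus(images, baselines, depth=1.0)   # wrong depth
```

At the correct depth the point re-aligns across all cameras and the averaged peak reaches 1.0; at the wrong depth the copies land at different pixels and the peak drops to 1/3, which is exactly the sharpness cue a focal stack exploits.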
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal chords, bubbles, flow, fluids
4325
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional ICP-related features are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate ICP. Machine learning techniques, including feature selection and classification methods such as support vector machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
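The symmetry-based ideal-midline step can be illustrated with a toy sketch (not the authors' implementation; the mirror-overlap score and the elliptical test mask are invented stand-ins for the skull-symmetry and anatomical-feature analysis): search candidate columns and keep the one where the left and right halves of a binary mask mirror each other best.

```python
import numpy as np

def ideal_midline(mask):
    """Column maximizing left-right mirror agreement of a binary mask,
    searched over the central half of the image width."""
    h, w = mask.shape
    best_x, best_score = w // 2, -1.0
    for x in range(w // 4, 3 * w // 4):
        half = min(x, w - x - 1)
        left = mask[:, x - half:x]                    # columns left of x
        right = mask[:, x + 1:x + 1 + half][:, ::-1]  # mirrored right columns
        score = (left == right).mean()                # mirror-overlap agreement
        if score > best_score:
            best_x, best_score = x, score
    return best_x

# toy "head": an ellipse whose axis of symmetry sits at column 20
yy, xx = np.mgrid[0:32, 0:64]
mask = ((xx - 20) / 12.0) ** 2 + ((yy - 16) / 10.0) ** 2 <= 1.0
midline = ideal_midline(mask)
```

The actual midline (deformed by pathology) is then found separately via ventricle segmentation and shape matching; the difference between the two lines gives the midline shift.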
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
3871
Ole Isacson: Development of New Therapies for Parkinson's Disease
Authors: Ole Isacson.
Institutions: Harvard Medical School.
Medicine, Issue 3, Parkinson's disease, Neuroscience, dopamine, neuron, L-DOPA, stem cell, transplantation
189
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.