JoVE Visualize
PubMed Article
Semi-automatic normalization of multitemporal remote images based on vegetative pseudo-invariant features.
PUBLISHED: 01-01-2014
A procedure for the semi-automatic relative normalization of multitemporal remote images of an agricultural scene, called ARIN, was developed using the following steps: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting data for the VPIF spectral bands from each image; 3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to perform the ARIN procedure semi-automatically. We validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at intervals of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s.d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method's efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were at least 0.85 and were significant at P = 0.95, indicating that the normalization was performed comparably regardless of the VPIF chosen. The ARIN method is designed only for agricultural and forestry landscapes where VPIFs can be identified.
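The per-band correction-factor logic of steps 3 and 4 can be illustrated with a short sketch. The Python/NumPy fragment below is only an illustration, not the published ARIN software; the function name, the data layout (one bands-first array per acquisition date) and the single VPIF mask are assumptions made for the example.

import numpy as np

def arin_normalize(images, vpif_mask):
    # images: list of (bands, rows, cols) arrays of the same scene at different dates (assumed layout).
    # vpif_mask: boolean (rows, cols) array marking the selected VPIF parcels.
    images = [np.asarray(img, dtype=float) for img in images]
    n_bands = images[0].shape[0]
    # Step 2: mean VPIF value per image and band.
    vpif_means = np.array([[img[b][vpif_mask].mean() for b in range(n_bands)]
                           for img in images])
    # Step 3: correction factor fits each band to the average value of the image series.
    cf = vpif_means.mean(axis=0) / vpif_means
    # Step 4: linear transformation of each original band through its correction factor.
    return [img * cf[i][:, None, None] for i, img in enumerate(images)]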
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Published: 08-13-2014
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
26 Related JoVE Articles!
Analysis of Targeted Viral Protein Nanoparticles Delivered to HER2+ Tumors
Authors: Jae Youn Hwang, Daniel L. Farkas, Lali K. Medina-Kauwe.
Institutions: University of Southern California, Cedars-Sinai Medical Center, University of California, Los Angeles.
The HER2+ tumor-targeted nanoparticle, HerDox, exhibits tumor-preferential accumulation and tumor-growth ablation in an animal model of HER2+ cancer. HerDox is formed by non-covalent self-assembly of a tumor-targeted cell penetration protein with the chemotherapy agent, doxorubicin, via a small nucleic acid linker. A combination of electrophilic, intercalation, and oligomerization interactions facilitates self-assembly into round 10-20 nm particles. HerDox exhibits stability in blood as well as in extended storage at different temperatures. Systemic delivery of HerDox in tumor-bearing mice results in tumor-cell death with no detectable adverse effects on non-tumor tissue, including the heart and liver (which undergo marked damage by untargeted doxorubicin). HER2 elevation facilitates targeting to cells expressing the human epidermal growth factor receptor, hence, tumors displaying elevated HER2 levels exhibit greater accumulation of HerDox compared to cells expressing lower levels, both in vitro and in vivo. Fluorescence intensity imaging combined with in situ confocal and spectral analysis has allowed us to verify in vivo tumor targeting and tumor cell penetration of HerDox after systemic delivery. Here we detail our methods for assessing tumor targeting via multimode imaging after systemic delivery.
Biomedical Engineering, Issue 76, Cancer Biology, Medicine, Bioengineering, Molecular Biology, Cellular Biology, Biochemistry, Nanotechnology, Nanomedicine, Drug Delivery Systems, Molecular Imaging, optical imaging devices (design and techniques), HerDox, Nanoparticle, Tumor, Targeting, Self-Assembly, Doxorubicin, Human Epidermal Growth Factor, HER, HER2+, Receptor, mice, animal model, tumors, imaging
V3 Stain-free Workflow for a Practical, Convenient, and Reliable Total Protein Loading Control in Western Blotting
Authors: Anton Posch, Jonathan Kohn, Kenneth Oh, Matt Hammond, Ning Liu.
Institutions: Bio-Rad Laboratories.
The western blot is a very useful and widely adopted lab technique, but its execution is challenging. The workflow is often characterized as a "black box" because an experimentalist does not know if it has been performed successfully until the last of several steps. Moreover, the quality of western blot data is sometimes called into question because of a lack of effective quality control tools throughout the western blotting process. Here we describe the V3 western workflow, which applies stain-free technology to address the major concerns associated with the traditional western blot protocol. This workflow allows researchers: 1) to run a gel in about 20-30 min; 2) to visualize sample separation quality within 5 min after the gel run; 3) to transfer proteins in 3-10 min; 4) to verify transfer efficiency quantitatively; and most importantly 5) to validate changes in the level of the protein of interest using a total protein loading control. This novel approach eliminates the need for stripping and reprobing the blot for housekeeping proteins such as β-actin, β-tubulin, GAPDH, etc. The V3 stain-free workflow makes the western blot process faster and more transparent, quantitative, and reliable.
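To make the arithmetic of a total protein loading control concrete, the following Python fragment sketches the normalization; it is only an illustration with hypothetical variable names and intensities, not Bio-Rad's implementation.

def normalized_band(target_intensity, lane_total_protein, reference_lane_total_protein):
    # Correct the target band signal for lane-to-lane differences in total protein loaded.
    loading_factor = lane_total_protein / reference_lane_total_protein
    return target_intensity / loading_factor

# Example: a lane that received 1.2x the total protein of the reference lane.
print(normalized_band(5000.0, 1.2e6, 1.0e6))   # ~4167, the loading-corrected signal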
Basic Protocol, Issue 82, Biotechnology, Pharmaceutical, Protein electrophoresis, Western blot, Stain-Free, loading control, total protein normalization, stain-free technology
Easy Measurement of Diffusion Coefficients of EGFP-tagged Plasma Membrane Proteins Using k-Space Image Correlation Spectroscopy
Authors: Eva C. Arnspang, Jennifer S. Koffman, Saw Marlar, Paul W. Wiseman, Lene N. Nejsum.
Institutions: Aarhus University, McGill University.
Lateral diffusion and compartmentalization of plasma membrane proteins are tightly regulated in cells and thus, studying these processes reveals new insights into plasma membrane protein function and regulation. Recently, k-Space Image Correlation Spectroscopy (kICS)1 was developed to enable routine measurements of diffusion coefficients directly from images of fluorescently tagged plasma membrane proteins, while avoiding systematic biases introduced by probe photophysics. Although the theoretical basis for the analysis is complex, the method can be implemented by nonexperts using freely available code to measure diffusion coefficients of proteins. kICS calculates a time correlation function from a fluorescence microscopy image stack after Fourier transformation of each image to reciprocal (k-) space. Subsequently, circular averaging, natural logarithm transformation and linear fits to the correlation function yield the diffusion coefficient. This paper provides a step-by-step guide to the image analysis and measurement of diffusion coefficients via kICS. First, a high frame rate image sequence of a fluorescently labeled plasma membrane protein is acquired using a fluorescence microscope. Then, a region of interest (ROI) avoiding intracellular organelles, moving vesicles or protruding membrane regions is selected. The ROI stack is imported into the freely available code and several defined parameters (see Method section) are set for kICS analysis. The program then generates a "slope of slopes" plot from the k-space time correlation functions, and the diffusion coefficient is calculated from the slope of the plot. Below is a step-by-step kICS procedure to measure the diffusion coefficient of a membrane protein using the renal water channel aquaporin-3 tagged with EGFP as a canonical example.
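The "slope of slopes" idea can be sketched in a few lines of Python. This is a deliberately simplified illustration of the k-space correlation and fitting steps, assuming pure 2D diffusion and ignoring the photophysics corrections and proper circular averaging handled by the published kICS code.

import numpy as np

def kics_diffusion(stack, pixel_size_um, frame_time_s, max_lag=5):
    # stack: (frames, ny, nx) image series of a fluorescently tagged membrane protein (assumed layout).
    frames, ny, nx = stack.shape
    fk = np.fft.fft2(stack.astype(float))                 # each frame to reciprocal (k-) space
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size_um)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size_um)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    keep = (k2 > 0) & (k2 < np.percentile(k2, 25))        # low-|k| region where the simple model holds
    decay_rates = []
    lags = np.arange(1, max_lag + 1)
    for lag in lags:
        corr = np.mean(fk[:-lag] * np.conj(fk[lag:]), axis=0).real
        good = keep & (corr > 0)
        # For pure diffusion, ln C(k, tau) falls off as -D * tau * |k|^2.
        slope = np.polyfit(k2[good], np.log(corr[good]), 1)[0]
        decay_rates.append(-slope)
    # "Slope of slopes": decay rates grow linearly with lag time; that slope is D (um^2/s).
    return np.polyfit(lags * frame_time_s, decay_rates, 1)[0]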
Biophysics, Issue 87, Amino Acids, Peptides and Proteins, Computer Programming and Software, Diffusion coefficient, Aquaporin-3, k-Space Image Correlation Spectroscopy, Analysis
Eye Movement Monitoring of Memory
Authors: Jennifer D. Ryan, Lily Riggs, Douglas A. McQuiggan.
Institutions: Rotman Research Institute, University of Toronto, University of Toronto.
Explicit (often verbal) reports are typically used to investigate memory (e.g. "Tell me what you remember about the person you saw at the bank yesterday."), however such reports can often be unreliable or sensitive to response bias 1, and may be unobtainable in some participant populations. Furthermore, explicit reports only reveal when information has reached consciousness and cannot comment on when memories were accessed during processing, regardless of whether the information is subsequently accessed in a conscious manner. Eye movement monitoring (eye tracking) provides a tool by which memory can be probed without asking participants to comment on the contents of their memories, and access of such memories can be revealed on-line 2,3. Video-based eye trackers (either head-mounted or remote) use a system of cameras and infrared markers to examine the pupil and corneal reflection in each eye as the participant views a display monitor. For head-mounted eye trackers, infrared markers are also used to determine head position to allow for head movement and more precise localization of eye position. Here, we demonstrate the use of a head-mounted eye tracking system to investigate memory performance in neurologically-intact and neurologically-impaired adults. Eye movement monitoring procedures begin with the placement of the eye tracker on the participant, and setup of the head and eye cameras. Calibration and validation procedures are conducted to ensure accuracy of eye position recording. Real-time recordings of X,Y-coordinate positions on the display monitor are then converted and used to describe periods of time in which the eye is static (i.e. fixations) versus in motion (i.e., saccades). Fixations and saccades are time-locked with respect to the onset/offset of a visual display or another external event (e.g. button press). Experimental manipulations are constructed to examine how and when patterns of fixations and saccades are altered through different types of prior experience. The influence of memory is revealed in the extent to which scanning patterns to new images differ from scanning patterns to images that have been previously studied 2, 4-5. Memory can also be interrogated for its specificity; for instance, eye movement patterns that differ between an identical and an altered version of a previously studied image reveal the storage of the altered detail in memory 2-3, 6-8. These indices of memory can be compared across participant populations, thereby providing a powerful tool by which to examine the organization of memory in healthy individuals, and the specific changes that occur to memory with neurological insult or decline 2-3, 8-10.
Neuroscience, Issue 42, eye movement monitoring, eye tracking, memory, aging, amnesia, visual processing
Automated Sholl Analysis of Digitized Neuronal Morphology at Multiple Scales
Authors: Melinda K. Kutzing, Christopher G. Langhammer, Vincent Luo, Hersh Lakdawala, Bonnie L. Firestein.
Institutions: Rutgers University, Rutgers University.
Neuronal morphology plays a significant role in determining how neurons function and communicate1-3. Specifically, it affects the ability of neurons to receive inputs from other cells2 and contributes to the propagation of action potentials4,5. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long-range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network6,7. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease8, schizophrenia9,10, and mental retardation11. The ability both to understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential to understanding nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed both on the neuron as a whole and on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally. Thus, the final arbor morphology results from specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations12. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1.
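The core of a Sholl analysis is counting how many times traced neurites cross concentric circles of increasing radius around the soma. The Python sketch below illustrates that counting step only; it is not the Bonfire/MATLAB code, and the trace format (lists of x,y points, e.g. exported from NeuronJ) is an assumption.

import numpy as np

def sholl_counts(segments, soma_xy, radii):
    # segments: list of (N, 2) arrays of traced neurite coordinates.
    # Counts crossings of each concentric circle centered on the soma.
    counts = np.zeros(len(radii), dtype=int)
    for seg in segments:
        d = np.linalg.norm(np.asarray(seg, float) - np.asarray(soma_xy, float), axis=1)
        for i, r in enumerate(radii):
            # A crossing occurs wherever consecutive trace points straddle the circle of radius r.
            counts[i] += np.sum((d[:-1] - r) * (d[1:] - r) < 0)
    return counts

# Example: intersections every 10 um out to 150 um from the soma.
# radii = np.arange(10, 160, 10); counts = sholl_counts(traces, (0, 0), radii)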
Neuroscience, Issue 45, Sholl Analysis, Neurite, Morphology, Computer-assisted, Tracing
Culturing and Maintaining Clostridium difficile in an Anaerobic Environment
Authors: Adrianne N. Edwards, Jose M. Suárez, Shonna M. McBride.
Institutions: Emory University School of Medicine.
Clostridium difficile is a Gram-positive, anaerobic, sporogenic bacterium that is primarily responsible for antibiotic associated diarrhea (AAD) and is a significant nosocomial pathogen. C. difficile is notoriously difficult to isolate and cultivate and is extremely sensitive to even low levels of oxygen in the environment. Here, methods for isolating C. difficile from fecal samples and subsequently culturing C. difficile for preparation of glycerol stocks for long-term storage are presented. Techniques for preparing and enumerating spore stocks in the laboratory for a variety of downstream applications including microscopy and animal studies are also described. These techniques necessitate an anaerobic chamber, which maintains a consistent anaerobic environment to ensure proper conditions for optimal C. difficile growth. We provide protocols for transferring materials in and out of the chamber without causing significant oxygen contamination along with suggestions for regular maintenance required to sustain the appropriate anaerobic environment for efficient and consistent C. difficile cultivation.
Immunology, Issue 79, Genetics, Bacteria, Anaerobic, Gram-Positive Endospore-Forming Rods, Spores, Bacterial, Gram-Positive Bacterial Infections, Clostridium Infections, Bacteriology, Clostridium difficile, Gram-positive, anaerobic chamber, spore, culturing, maintenance, cell culture
SIVQ-LCM Protocol for the ArcturusXT Instrument
Authors: Jason D. Hipp, Jerome Cheng, Jeffrey C. Hanson, Avi Z. Rosenberg, Michael R. Emmert-Buck, Michael A. Tangrea, Ulysses J. Balis.
Institutions: National Institutes of Health, University of Michigan.
SIVQ-LCM is a new methodology that automates and streamlines the more traditional, user-dependent laser dissection process. It aims to create an advanced, rapidly customizable laser dissection platform technology. In this report, we describe the integration of the image analysis software Spatially Invariant Vector Quantization (SIVQ) onto the ArcturusXT instrument. The ArcturusXT system contains both an infrared (IR) and ultraviolet (UV) laser, allowing for specific cell or large area dissections. The principal goal is to improve the speed, accuracy, and reproducibility of the laser dissection to increase sample throughput. This novel approach facilitates microdissection of both animal and human tissues in research and clinical workflows.
Bioengineering, Issue 89, SIVQ, LCM, personalized medicine, digital pathology, image analysis, ArcturusXT
A Microscopic Phenotypic Assay for the Quantification of Intracellular Mycobacteria Adapted for High-throughput/High-content Screening
Authors: Christophe. J Queval, Ok-Ryul Song, Vincent Delorme, Raffaella Iantomasi, Romain Veyron-Churlet, Nathalie Deboosère, Valérie Landry, Alain Baulard, Priscille Brodin.
Institutions: Université de Lille.
Despite the availability of therapy and a vaccine, tuberculosis (TB) remains one of the most deadly and widespread bacterial infections in the world. In recent decades, the emergence of multi- and extensively drug-resistant strains has become a serious threat to the control of tuberculosis. It is therefore essential to identify new targets and pathways critical for the causative agent of tuberculosis, Mycobacterium tuberculosis (Mtb), and to search for novel chemicals that could become TB drugs. One approach is to set up methods suitable for genetic and chemical screens of large-scale libraries, enabling the search for a needle in a haystack. To this end, we developed a phenotypic assay relying on the detection of fluorescently labeled Mtb within fluorescently labeled host cells using automated confocal microscopy. This in vitro assay allows an image-based quantification of the colonization of host cells by Mtb and was optimized for the 384-well microplate format, which is suitable for screens of siRNA, chemical compound, or Mtb mutant libraries. The images are then processed for multiparametric analysis, which provides readouts that report on the pathogenesis of Mtb within host cells.
Infection, Issue 83, Mycobacterium tuberculosis, High-content/High-throughput screening, chemogenomics, Drug Discovery, siRNA library, automated confocal microscopy, image-based analysis
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Remote Magnetic Navigation for Accurate, Real-time Catheter Positioning and Ablation in Cardiac Electrophysiology Procedures
Authors: David Filgueiras-Rama, Alejandro Estrada, Josh Shachar, Sergio Castrejón, David Doiny, Marta Ortega, Eli Gang, José L. Merino.
Institutions: La Paz University Hospital, Magnetecs Corp., Geffen School of Medicine at UCLA Los Angeles.
New remote navigation systems have been developed to overcome the current limitations of conventional manually guided catheter ablation in complex cardiac substrates such as left atrial flutter. This protocol describes all the clinical and invasive interventional steps performed during a human electrophysiological study and ablation to assess the accuracy, safety and real-time navigation of the Catheter Guidance, Control and Imaging (CGCI) system. Patients who underwent ablation of a right or left atrial flutter substrate were included. Specifically, data from three left atrial flutter and two counterclockwise right atrial flutter procedures are shown in this report. One representative left atrial flutter procedure is shown in the movie. This system is based on eight coil-core electromagnets, which generate a dynamic magnetic field focused on the heart. Remote navigation by rapid changes (msec) in the magnetic field magnitude and a very flexible magnetized catheter allow real-time closed-loop integration and accurate, stable positioning and ablation of the arrhythmogenic substrate.
Medicine, Issue 74, Anatomy, Physiology, Biomedical Engineering, Surgery, Cardiology, catheter ablation, remote navigation, magnetic, robotic, catheter, positioning, electrophysiology, clinical techniques
Quantitative Measurement of Invadopodia-mediated Extracellular Matrix Proteolysis in Single and Multicellular Contexts
Authors: Karen H. Martin, Karen E. Hayes, Elyse L. Walk, Amanda Gatesman Ammer, Steven M. Markwell, Scott A. Weed.
Institutions: West Virginia University .
Cellular invasion into local tissues is a process important in development and homeostasis. Dysregulated invasion and subsequent cell movement are characteristic of multiple pathological processes, including inflammation, cardiovascular disease and tumor cell metastasis1. Focalized proteolytic degradation of extracellular matrix (ECM) components in the epithelial or endothelial basement membrane is a critical step in initiating cellular invasion. In tumor cells, extensive in vitro analysis has determined that ECM degradation is accomplished by ventral actin-rich membrane protrusive structures termed invadopodia2,3. Invadopodia form in close apposition to the ECM, where they mediate ECM breakdown through the action of matrix metalloproteinases (MMPs). The ability of tumor cells to form invadopodia directly correlates with the ability to invade local stroma and associated vascular components3. Visualization of invadopodia-mediated ECM degradation of cells by fluorescence microscopy using dye-labeled matrix proteins coated onto glass coverslips has emerged as the most prevalent technique for evaluating the degree of matrix proteolysis and cellular invasive potential4,5. Here we describe a version of the standard method for generating fluorescently-labeled glass coverslips utilizing a commercially available Oregon Green-488 gelatin conjugate. This method is easily scaled to rapidly produce large numbers of coated coverslips. We show some of the common microscopic artifacts that are often encountered during this procedure and how these can be avoided. Finally, we describe standardized methods using readily available computer software to allow quantification of labeled gelatin matrix degradation mediated by individual cells and by entire cellular populations. The described procedures provide the ability to accurately and reproducibly monitor invadopodia activity, and can also serve as a platform for evaluating the efficacy of modulating protein expression or testing of anti-invasive compounds on extracellular matrix degradation in single and multicellular settings.
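The quantification described above reduces to measuring the dark (degraded) area of the labeled gelatin and normalizing it per cell. The schematic Python version below illustrates that idea only; the function name and threshold rule are assumptions, and the protocol itself relies on standard image analysis software.

import numpy as np

def degradation_per_cell(gelatin_image, cell_count, threshold=None):
    # gelatin_image: 2D fluorescence image of the Oregon Green-488 gelatin layer.
    # Degraded (proteolysed) regions appear dark relative to the intact matrix.
    img = gelatin_image.astype(float)
    if threshold is None:
        threshold = img.mean() - 2 * img.std()   # crude cutoff; real analyses set this per field
    degraded_pixels = np.sum(img < threshold)
    return degraded_pixels / max(cell_count, 1)  # degraded area (pixels) per cell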
Cellular Biology, Issue 66, Cancer Biology, Anatomy, Molecular Biology, Biochemistry, invadopodia, extracellular matrix, gelatin, confocal microscopy, quantification, oregon green
Label-free in situ Imaging of Lignification in Plant Cell Walls
Authors: Martin Schmidt, Pradeep Perera, Adam M. Schwartzberg, Paul D. Adams, P. James Schuck.
Institutions: University of California, Berkeley, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Meeting growing energy demands safely and efficiently is a pressing global challenge. Therefore, research into biofuels production that seeks to find cost-effective and sustainable solutions has become a topical and critical task. Lignocellulosic biomass is poised to become the primary source of biomass for the conversion to liquid biofuels1-6. However, the recalcitrance of these plant cell wall materials to cost-effective and efficient degradation presents a major impediment to their use in the production of biofuels and chemicals4. In particular, lignin, a complex and irregular poly-phenylpropanoid heteropolymer, is problematic for the postharvest deconstruction of lignocellulosic biomass. For example, in biomass conversion for biofuels, it inhibits saccharification in processes aimed at producing simple sugars for fermentation7. The effective use of plant biomass for industrial purposes is in fact largely dependent on the extent to which the plant cell wall is lignified. The removal of lignin is a costly and limiting factor8 and lignin has therefore become a key plant breeding and genetic engineering target for improving cell wall conversion. Analytical tools that permit the accurate and rapid characterization of plant cell wall lignification are becoming increasingly important for evaluating large numbers of breeding populations. Extractive procedures for the isolation of native components such as lignin are inevitably destructive, bringing about significant chemical and structural modifications9-11. Analytical chemical in situ methods are thus invaluable tools for the compositional and structural characterization of lignocellulosic materials. Raman microscopy is a technique that relies on inelastic or Raman scattering of monochromatic light, such as that from a laser, where the shift in energy of the laser photons is related to molecular vibrations and presents an intrinsic, label-free molecular "fingerprint" of the sample. Raman microscopy can afford non-destructive and comparatively inexpensive measurements with minimal sample preparation, giving insights into chemical composition and molecular structure in a close-to-native state. Chemical imaging by confocal Raman microscopy has previously been used for the visualization of the spatial distribution of cellulose and lignin in wood cell walls12-14. Based on these earlier results, we have recently adopted this method to compare lignification in wild type and lignin-deficient transgenic Populus trichocarpa (black cottonwood) stem wood15. Analyzing the lignin Raman bands16,17 in the spectral region between 1,600 and 1,700 cm-1, lignin signal intensity and localization were mapped in situ. Our approach visualized differences in lignin content, localization, and chemical composition. Most recently, we demonstrated Raman imaging of cell wall polymers in Arabidopsis thaliana with sub-μm lateral resolution18. Here, this method is presented, affording visualization of lignin in plant cell walls and comparison of lignification in different tissues, samples or species without staining or labeling of the tissues.
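The mapping step described here amounts to integrating the Raman signal in the 1,600-1,700 cm-1 lignin band at every pixel of the hyperspectral map. A minimal Python sketch of that integration (the data layout of one spectrum per pixel is an assumption, and baseline correction is omitted) is:

import numpy as np

def lignin_band_map(raman_cube, wavenumbers, low=1600.0, high=1700.0):
    # raman_cube: (rows, cols, n_points) hyperspectral Raman map; wavenumbers: (n_points,) in cm-1.
    in_band = (wavenumbers >= low) & (wavenumbers <= high)
    # Integrated band intensity per pixel serves as a relative measure of lignin signal.
    return np.trapz(raman_cube[..., in_band], wavenumbers[in_band], axis=-1)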
Plant Biology, Issue 45, Raman microscopy, lignin, poplar wood, Arabidopsis thaliana
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
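For reference, FA, the voxelwise metric compared above, is computed from the eigenvalues of the fitted diffusion tensor using the standard formula. The short Python implementation below is a generic illustration, not the authors' analysis software.

import numpy as np

def fractional_anisotropy(evals):
    # evals: (..., 3) array of diffusion tensor eigenvalues (one triple per voxel).
    evals = np.asarray(evals, dtype=float)
    md = evals.mean(axis=-1, keepdims=True)                      # mean diffusivity
    num = np.sqrt(1.5 * np.sum((evals - md) ** 2, axis=-1))
    den = np.sqrt(np.sum(evals ** 2, axis=-1))
    return np.where(den > 0, num / den, 0.0)

# Example: a strongly anisotropic voxel.
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))   # ~0.80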
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
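The core of the SSM — logarithmic conversion, double mean centering, and PCA — can be outlined in a few lines of NumPy. This is an illustrative sketch, not the in-house software mentioned above; the data layout (subjects x voxels, strictly positive values) and function name are assumptions.

import numpy as np

def ssm_patterns(data, n_components=5):
    # data: (subjects, voxels) array of positive image values, patients and controls pooled.
    log_data = np.log(data)
    # Remove each subject's global mean and the group mean voxel profile (double centering).
    srp = log_data - log_data.mean(axis=1, keepdims=True)
    srp -= srp.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(srp, full_matrices=False)
    patterns = vt[:n_components]                       # spatial covariance patterns (GIS)
    scores = u[:, :n_components] * s[:n_components]    # subject expression scores
    return patterns, scores

# The scores can then be entered into a logistic regression of diagnosis on score
# to combine several components into a single disease-related pattern.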
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
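The first step — estimating the local orientation of tissue patterns with a bank of Gabor filters — can be illustrated as follows. This is a generic sketch with assumed kernel parameters and the use of SciPy; the authors' pipeline additionally involves phase portrait modeling and the node value, fractal dimension and angular dispersion features.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    # Real-valued Gabor kernel oriented at angle theta (radians).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def dominant_orientation(image, n_angles=12):
    # Return, per pixel, the angle of the strongest Gabor response and its magnitude.
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    responses = np.stack([convolve(image.astype(float), gabor_kernel(a)) for a in angles])
    best = responses.argmax(axis=0)
    return angles[best], responses.max(axis=0)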
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Authors: Mayandi Sivaguru, Glenn A. Fried, Carly A. H. Miller, Bruce W. Fouke.
Institutions: University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef-building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Authors: Hans Maaswinkel, Liqun Zhu, Wei Weng.
Institutions: xyZfish.
Like many aquatic animals, the zebrafish (Danio rerio) moves in 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single adult zebrafish and groups (shoals). Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about vertical and horizontal distribution of the zebrafish, about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle) and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
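Why the water-to-air correction matters: refraction at the surface makes submerged objects appear shallower than they are. The toy calculation below uses the paraxial (small-angle) approximation to show the size of the error; it is only an illustration of the effect, not the system's mirror-based calibration procedure.

# Under the paraxial approximation, a point at true depth z below the water surface
# appears to a camera in air at depth z / n_water, so true depth = apparent depth * n_water.
N_WATER = 1.333

def apparent_to_true_depth(apparent_depth_cm):
    return apparent_depth_cm * N_WATER

print(apparent_to_true_depth(7.5))   # a fish seen at 7.5 cm deep is actually at ~10 cm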
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Detection of Functional Matrix Metalloproteinases by Zymography
Authors: Xueyou Hu, Christine Beeton.
Institutions: Baylor College of Medicine.
Matrix metalloproteinases (MMPs) are zinc-containing endopeptidases. They degrade proteins by cleavage of peptide bonds. More than twenty MMPs have been identified and are separated into six groups based on their structure and substrate specificity (collagenases, gelatinases, membrane type [MT-MMP], stromelysins, matrilysins, and others). MMPs play a critical role in cell invasion, cartilage degradation, tissue remodeling, wound healing, and embryogenesis. They therefore participate both in normal processes and in the pathogenesis of many diseases, such as rheumatoid arthritis, cancer, or chronic obstructive pulmonary disease1-6. Here, we will focus on MMP-2 (gelatinase A, type IV collagenase), a widely expressed MMP. We will demonstrate how to detect MMP-2 in cell culture supernatants by zymography, a commonly used, simple, and yet very sensitive technique first described in 1980 by C. Heussen and E.B. Dowdle7-10. This technique is semi-quantitative; it can therefore be used to determine MMP levels in test samples when known concentrations of recombinant MMP are loaded on the same gel11. Solutions containing MMPs (e.g. cell culture supernatants, urine, or serum) are loaded onto a polyacrylamide gel containing sodium dodecyl sulfate (SDS; to linearize the proteins) and gelatin (substrate for MMP-2). The sample buffer is designed to increase sample viscosity (to facilitate gel loading), provide a tracking dye (bromophenol blue; to monitor sample migration), provide denaturing molecules (to linearize proteins), and control the pH of the sample. Proteins are then allowed to migrate under an electric current in a running buffer designed to provide a constant migration rate. The distance of migration is inversely correlated with the molecular weight of the protein (small proteins move faster through the gel than large proteins do and therefore migrate further down the gel). After migration, the gel is placed in a renaturing buffer to allow proteins to regain their tertiary structure, necessary for enzymatic activity. The gel is then placed in a developing buffer designed to allow the protease to digest its substrate. The developing buffer also contains p-aminophenylmercuric acetate (APMA) to activate the non-proteolytic pro-MMPs into active MMPs. The next step consists of staining the substrate (gelatin in our example). After washing the excess dye off the gel, areas of protease digestion appear as clear bands. The clearer the band, the more concentrated the protease it contains. Band staining intensity can then be determined by densitometry, using software such as ImageJ, allowing for sample comparison.
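As an illustration of the final densitometry step, the fragment below computes a background-corrected integrated density for one band from a digitized gel image. The box coordinates and function name are hypothetical; in practice the gel analysis tools in ImageJ are typically used for this measurement.

import numpy as np

def band_integrated_density(gel_image, band_box, background_box):
    # gel_image: 2D grayscale array of the stained zymogram.
    # Each box is (row_start, row_stop, col_start, col_stop) on the digitized image.
    r0, r1, c0, c1 = band_box
    band = gel_image[r0:r1, c0:c1].astype(float)
    b0, b1, d0, d1 = background_box
    background = gel_image[b0:b1, d0:d1].astype(float)
    # Cleared (digested) areas are brighter than the stained gel, so higher values mean more activity.
    return (band.mean() - background.mean()) * band.size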
Basic Protocols, Issue 45, Protease, enzyme, electrophoresis, gelatin, casein, fibrin
Tomato Analyzer: A Useful Software Application to Collect Accurate and Detailed Morphological and Colorimetric Data from Two-dimensional Objects
Authors: Gustavo R. Rodríguez, Jennifer B. Moyseenko, Matthew D. Robbins, Nancy Huarachi Morejón, David M. Francis, Esther van der Knaap.
Institutions: The Ohio State University.
Measuring fruit morphology and color traits of vegetable and fruit crops in an objective and reproducible way is important for detailed phenotypic analyses of these traits. Tomato Analyzer (TA) is a software program that measures 37 attributes related to two-dimensional shape in a semi-automatic and reproducible manner1,2. Many of these attributes, such as angles at the distal and proximal ends of the fruit and areas of indentation, are difficult to quantify manually. The attributes are organized in ten categories within the software: Basic Measurement, Fruit Shape Index, Blockiness, Homogeneity, Proximal Fruit End Shape, Distal Fruit End Shape, Asymmetry, Internal Eccentricity, Latitudinal Section and Morphometrics. The last category requires neither prior knowledge nor predetermined notions of the shape attributes, so morphometric analysis offers an unbiased option that may be better adapted to high-throughput analyses than attribute analysis. TA also offers the Color Test application that was designed to collect color measurements from scanned images and allow scanning devices to be calibrated using color standards3. TA provides several options to export and analyze shape attribute, morphometric, and color data. The data may be exported to an Excel file in batch mode (more than 100 images at one time) or exported as individual images. The user can choose between output that displays the average for each attribute for the objects in each image (including standard deviation), or an output that displays the attribute values for each object on the image. TA has been a valuable and effective tool for identifying and confirming tomato fruit shape Quantitative Trait Loci (QTL), as well as performing in-depth analyses of the effect of key fruit shape genes on plant morphology. Also, TA can be used to objectively classify fruit into various shape categories. Lastly, fruit shape and color traits in other plant species as well as other plant organs such as leaves and seeds can be evaluated with TA.
Plant Biology, Issue 37, morphology, color, image processing, quantitative trait loci, software
Simultaneous fMRI and Electrophysiology in the Rodent Brain
Authors: Wen-ju Pan, Garth Thompson, Matthew Magnuson, Waqas Majeed, Dieter Jaeger, Shella Keilholz.
Institutions: Emory University, Georgia Institute of Technology, Emory University.
To examine the neural basis of the blood oxygenation level dependent (BOLD) magnetic resonance imaging (MRI) signal, we have developed a rodent model in which functional MRI data and in vivo intracortical recording can be performed simultaneously. The combination of MRI and electrical recording is technically challenging because the electrodes used for recording distort the MRI images and the MRI acquisition induces noise in the electrical recording. To minimize the mutual interference of the two modalities, glass microelectrodes were used rather than metal and a noise removal algorithm was implemented for the electrophysiology data. In our studies, two microelectrodes were separately implanted in bilateral primary somatosensory cortices (SI) of the rat and fixed in place. One coronal slice covering the electrode tips was selected for functional MRI. Electrode shafts and fixation positions were not included in the image slice to avoid imaging artifacts. The removed scalp was replaced with toothpaste to reduce susceptibility mismatch and prevent Gibbs ringing artifacts in the images. The artifact structure induced in the electrical recordings by the rapidly-switching magnetic fields during image acquisition was characterized by averaging all cycles of scans for each run. The noise structure during imaging was then subtracted from original recordings. The denoised time courses were then used for further analysis in combination with the fMRI data. As an example, the simultaneous acquisition was used to determine the relationship between spontaneous fMRI BOLD signals and band-limited intracortical electrical activity. Simultaneous fMRI and electrophysiological recording in the rodent will provide a platform for many exciting applications in neuroscience in addition to elucidating the relationship between the fMRI BOLD signal and neuronal activity.
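The artifact removal described above (average all imaging cycles to build a template, then subtract it) can be sketched in a few lines. This is a schematic illustration that assumes a perfectly regular scan timing and a simple 1D trace; it is not the authors' denoising code.

import numpy as np

def remove_scanner_artifact(recording, samples_per_scan):
    # recording: 1D electrophysiology trace acquired during imaging.
    recording = np.asarray(recording, dtype=float)
    n_cycles = len(recording) // samples_per_scan
    cycles = recording[: n_cycles * samples_per_scan].reshape(n_cycles, samples_per_scan)
    template = cycles.mean(axis=0)              # artifact structure, averaged over all scan cycles
    cleaned = (cycles - template).ravel()       # denoised time course
    return np.concatenate([cleaned, recording[n_cycles * samples_per_scan:]])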
Neuroscience, Issue 42, fMRI, electrophysiology, rat, BOLD, brain, resting state
Born Normalization for Fluorescence Optical Projection Tomography for Whole Heart Imaging
Authors: Claudio Vinegoni, Daniel Razansky, Jose-Luiz Figueiredo, Lyuba Fexon, Misha Pivovarov, Matthias Nahrendorf, Vasilis Ntziachristos, Ralph Weissleder.
Institutions: Harvard Medical School, MGH - Massachusetts General Hospital, Technical University of Munich and Helmholtz Center Munich.
Optical projection tomography is a three-dimensional imaging technique that has recently been introduced as an imaging tool, primarily in developmental biology and gene expression studies. The technique renders biological samples optically transparent by first dehydrating them in graded ethanol solutions and then placing them in a 2:1 mixture of benzyl alcohol and benzyl benzoate (BABB, or Murray's Clear solution) to clear. After the clearing process, the scattering contribution in the sample is greatly reduced and becomes almost negligible, while the absorption contribution cannot be eliminated completely. When reconstructing the fluorescence distribution within the sample under investigation, this residual absorption affects the reconstructions and leads, inevitably, to image artifacts and quantification errors. While absorption could be reduced further by leaving the sample in the clearing media for weeks or months, this would lead to progressive loss of fluorescence and to an unrealistically long sample processing time. This is true when reconstructing both exogenous contrast agents (molecular contrast agents) and endogenous contrast (e.g. reconstructions of genetically expressed fluorescent proteins).
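One common formulation of Born normalization divides each fluorescence projection by a co-registered excitation (transmission) projection so that residual absorption in the cleared sample largely cancels to first order. The one-line sketch below illustrates only that ratio; whether the published OPT implementation uses exactly this form is not stated in the abstract, so treat it as an assumption.

import numpy as np

def born_normalize(fluorescence_projection, excitation_projection, eps=1e-6):
    # Pixelwise ratio of fluorescence to excitation projections; eps guards against division by zero.
    return fluorescence_projection / np.maximum(excitation_projection, eps)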
Bioengineering, Issue 28, optical imaging, fluorescence imaging, optical projection tomography, born normalization, molecular imaging, heart imaging

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.