JoVE Visualize
Pubmed Article
A tale of two "forests": Random Forest machine learning aids tropical forest carbon mapping.
PUBLISHED: 01-01-2014
Accurate and spatially explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reducing Emissions from Deforestation and forest Degradation). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, with and without spatial contextual modeling; the spatial run included x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went beyond the internal "out-of-bag" validation normally compiled by the algorithm, which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, explaining 59% of the variation in LiDAR-based carbon estimates within the validation area, compared to 37% for stratification and 43% for Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha⁻¹ when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
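The core of this comparison, training a Random Forest with and without x/y coordinates appended to the predictors and validating on held-out data, can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' pipeline: the data are synthetic, the covariates are invented, and a random half hold-out stands in for the study's contiguous 8-million-hectare validation block.

```python
# Hedged sketch: Random Forest upscaling of carbon samples, with and without
# spatial context (x/y as predictors). All data and columns are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic landscape: carbon depends on covariates plus a smooth spatial
# trend that x/y coordinates can help the model capture.
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "x": rng.uniform(0, 100, n), "y": rng.uniform(0, 100, n),
    "elevation": rng.uniform(100, 500, n), "slope": rng.uniform(0, 30, n),
})
df["carbon_mg_ha"] = (60 + 0.05 * df["elevation"] - 0.8 * df["slope"]
                      + 20 * np.sin(df["x"] / 15) * np.cos(df["y"] / 15)
                      + rng.normal(0, 5, n))

train, valid = df.iloc[: n // 2], df.iloc[n // 2:]
for use_xy in (False, True):
    cols = ["elevation", "slope"] + (["x", "y"] if use_xy else [])
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(train[cols], train["carbon_mg_ha"])
    pred = rf.predict(valid[cols])
    rmse = mean_squared_error(valid["carbon_mg_ha"], pred) ** 0.5
    print(f"spatial context={use_xy}: "
          f"R^2={r2_score(valid['carbon_mg_ha'], pred):.2f}, RMSE={rmse:.1f}")
```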
Authors: Andreas Florian Haas, Ben Knowles, Yan Wei Lim, Tracey McDole Somera, Linda Wegley Kelly, Mark Hatay, Forest Rohwer.
Published: 11-05-2014
Here we introduce a series of thoroughly tested and well standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients) and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a broad range of scientific disciplines, comprising both well-established protocols and some of the most recently developed techniques. In particular, the metagenomic sequencing techniques used for viral and bacterial community characterization have been established only in recent years and are thus still subject to constant improvement, which has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed by these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The article gives easy-to-follow guidelines for conducting comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
24 Related JoVE Articles!
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Authors: Cila Herman, Muge Pirtini Cetingul.
Institutions: The Johns Hopkins University.
In 2010 approximately 68,720 melanomas will be diagnosed in the US alone, with around 8,650 resulting in death [1]. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection [2,3]. Considering the large numbers of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid the diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost considerations. Details about these techniques and comparisons are available in the literature [4]. Infrared (IR) imaging has been shown to be a useful method for diagnosing the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in the temperature of the body, which in turn affect the temperature of the skin [5,6]. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular the deviation from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology several decades ago [7,8], when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image processing tools were unavailable. This situation changed dramatically in the late 1990s and 2000s. Advances in IR instrumentation, the implementation of digital image processing algorithms, and dynamic IR imaging, which enables scientists to analyze not only the spatial but also the temporal thermal behavior of the skin [9], allowed breakthroughs in the field. In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma [10-13]. Here we show data obtained in a patient study in which patients with a pigmented lesion and a clinical indication for biopsy were selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We conclude that the increased metabolic activity of the melanoma lesion can be detected by dynamic infrared imaging.
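As a rough illustration of the transient analysis described (not the authors' processing chain), the sketch below compares the post-cooling temperature recovery of a lesion region against adjacent healthy skin; the recovery curves, time constants, and noise level are all invented stand-ins for a real IR frame stack.

```python
# Toy model of dynamic thermal imaging: after a cooling stimulus, a lesion
# with higher metabolic activity rewarms faster than healthy skin. All
# parameters below are illustrative assumptions.
import numpy as np

t = np.arange(0, 60, 0.1)                      # assumed 10 Hz for 60 s

def recovery(t, t0=26.0, dT=7.0, tau=20.0):
    """Exponential rewarming toward baseline skin temperature (deg C)."""
    return t0 + dT * (1 - np.exp(-t / tau))

healthy = recovery(t)                          # invented healthy-skin curve
lesion = recovery(t, tau=14.0)                 # faster rewarming: toy proxy
noise = np.random.default_rng(0).normal(0, 0.02, (2, t.size))

delta = (lesion + noise[0]) - (healthy + noise[1])
print(f"peak thermal contrast: {delta.max():.2f} K at t = {t[delta.argmax()]:.0f} s")
```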
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
Optical Recording of Suprathreshold Neural Activity with Single-cell and Single-spike Resolution
Authors: Gayathri Nattar Ranganathan, Helmut J. Koester.
Institutions: The University of Texas at Austin.
Signaling of information in the vertebrate central nervous system is often carried by populations of neurons rather than individual neurons. Propagation of suprathreshold spiking activity likewise involves populations of neurons. Empirical studies addressing cortical function directly thus require recordings from populations of neurons with high resolution. Here we describe an optical method and a deconvolution algorithm to record neural activity from up to 100 neurons with single-cell and single-spike resolution. The method relies on detection of the transient increases in intracellular somatic calcium concentration associated with suprathreshold electrical spikes (action potentials) in cortical neurons. High temporal resolution of the optical recordings is achieved by a fast random-access scanning technique using acousto-optical deflectors (AODs) [1]. Two-photon excitation of the calcium-sensitive dye results in high spatial resolution in opaque brain tissue [2]. Reconstruction of spikes from the fluorescence calcium recordings is achieved by a maximum-likelihood method. Simultaneous electrophysiological and optical recordings indicate that our method reliably detects spikes (>97% spike detection efficiency), has a low rate of false positive spike detection (<0.003 spikes/sec), and has high temporal precision (about 3 msec) [3]. This optical method of spike detection can be used to record neural activity in vitro and in anesthetized animals in vivo [3,4].
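A full maximum-likelihood reconstruction is beyond a short example, but the underlying idea, inverting a single-exponential calcium-transient model to expose spike-sized innovations, can be sketched as follows. The decay constant, transient amplitude, and threshold are illustrative assumptions, not values from the paper.

```python
# Crude stand-in for maximum-likelihood spike reconstruction: deconvolve a
# single-exponential calcium transient and threshold the innovations.
import numpy as np

def infer_spikes(dff, dt=0.001, tau=0.5, thresh=0.05):
    """dff: fluorescence trace (dF/F); returns a boolean spike train.
    For C[n] = C[n-1]*exp(-dt/tau) + s[n], the innovation
    s[n] = C[n] - C[n-1]*exp(-dt/tau) recovers the spike input."""
    decay = np.exp(-dt / tau)
    s = dff.copy()
    s[1:] -= decay * dff[:-1]
    return s > thresh

# Synthetic test: ~2 Hz spiking convolved with the transient, plus shot noise
rng = np.random.default_rng(0)
true_spikes = rng.random(5000) < 0.002
trace = np.zeros(5000)
for n in range(1, 5000):
    trace[n] = trace[n - 1] * np.exp(-0.001 / 0.5) + 0.1 * true_spikes[n]
trace += rng.normal(0, 0.01, 5000)

est = infer_spikes(trace)
print("detected", est.sum(), "of", true_spikes.sum(), "spikes")
```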
Neuroscience, Issue 67, functional calcium imaging, spatiotemporal patterns of activity, dithered random-access scanning
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable for investigating the cortical networks involved in dynamic perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field/potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we describe all procedures using FreeSurfer and MNE software, both freely available. We summarize the MRI sequences and analysis steps required to produce a forward model that relates the expected field pattern caused by dipoles distributed on the cortex to the M/EEG sensors. Next, we step through the processes needed to denoise the sensor data from environmental and physiological contaminants. We then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time-series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
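Since the abstract names the freely available MNE software, here is a minimal MNE-Python sketch of the pipeline's shape: epoch the sensor data, estimate a noise covariance, combine it with a precomputed forward solution into an inverse operator, and apply the minimum-norm estimate. File names, event codes, and parameter values are placeholders, not the authors' settings.

```python
# Hedged MNE-Python sketch of distributed source estimation. The .fif files
# are hypothetical; a real study builds the forward solution from the
# subject's own FreeSurfer anatomy.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

raw = mne.io.read_raw_fif("subject_raw.fif", preload=True)
raw.filter(1.0, 40.0)                       # basic band-pass denoising
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"attend_cue": 1},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
evoked = epochs.average()

fwd = mne.read_forward_solution("subject-fwd.fif")
noise_cov = mne.compute_covariance(epochs, tmax=0.0)   # pre-stimulus noise

inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
stc.save("attend_cue-mne")                  # cortical "brain movie" time course
```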
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Authors: Bianca DeBenedictis, J. Bruce Morton.
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
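A hedged sketch of the kind of analysis this implies: compute within-subject functional connectivity as pairwise correlations among ROI time series, then contrast groups. The ROI count, time-series shapes, and group labels are invented, and the published method uses a multivariate analysis of task fMRI rather than this toy t-test.

```python
# Illustrative group comparison of functional connectivity on synthetic data.
import numpy as np
from scipy import stats

def correlated_ts(rng, rho, t=200, rois=12):
    """Synthetic ROI time series sharing a common signal with strength rho."""
    shared = rng.normal(size=(t, 1))
    return np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(t, rois))

def mean_connectivity(ts):
    r = np.corrcoef(ts.T)
    iu = np.triu_indices_from(r, k=1)
    return np.arctanh(r[iu]).mean()          # Fisher z before averaging

rng = np.random.default_rng(1)
children = [mean_connectivity(correlated_ts(rng, 0.05)) for _ in range(20)]
adults = [mean_connectivity(correlated_ts(rng, 0.20)) for _ in range(20)]
tval, p = stats.ttest_ind(adults, children)
print(f"adults > children in mean connectivity: t = {tval:.2f}, p = {p:.4f}")
```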
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Single Particle Electron Microscopy Reconstruction of the Exosome Complex Using the Random Conical Tilt Method
Authors: Xueqi Liu, Hong-Wei Wang.
Institutions: Yale University.
Single particle electron microscopy (EM) reconstruction has recently become a popular tool to obtain the three-dimensional (3D) structure of large macromolecular complexes. Compared to X-ray crystallography, it has some unique advantages. First, single particle EM reconstruction does not require crystallization of the protein sample, which is the bottleneck in X-ray crystallography, especially for large macromolecular complexes. Second, it does not require large amounts of protein: compared with the milligrams of protein necessary for crystallization, single particle EM reconstruction using the negative staining method needs only several microliters of protein solution at nanomolar concentrations. However, except for a few macromolecular assemblies with high symmetry, single particle EM is limited to relatively low resolution (lower than 1 nm) for many specimens, especially those without symmetry. The technique is also limited by the size of the molecules under study, in general roughly 100 kDa for negatively stained specimens and 300 kDa for frozen-hydrated specimens. For a new sample of unknown structure, we generally use a heavy metal solution to embed the molecules by negative staining. The specimen is then examined in a transmission electron microscope to take two-dimensional (2D) micrographs of the molecules. Ideally, the protein molecules have a homogeneous 3D structure but exhibit different orientations in the micrographs. These micrographs are digitized and processed in computers as "single particles". Using two-dimensional alignment and classification techniques, homogeneous molecules in the same views are clustered into classes. Their averages enhance the signal of the molecule's 2D shapes. Once the particles are assigned the proper relative orientations (Euler angles), the 2D particle images can be reconstructed into a 3D virtual volume. In single particle 3D reconstruction, an essential step is to correctly assign the proper orientation of each single particle. There are several methods to assign the view for each particle, including angular reconstitution [1] and the random conical tilt (RCT) method [2]. In this protocol, we describe our practice for obtaining the 3D reconstruction of the yeast exosome complex using negative staining EM and RCT. It should be noted that our protocol for electron microscopy and image processing follows the basic principle of RCT but is not the only way to perform the method. We first describe how to embed the protein sample in a layer of uranyl formate with a thickness comparable to the protein size, using a holey carbon grid covered with a layer of continuous thin carbon film. The specimen is then inserted into a transmission electron microscope to collect untilted (0-degree) and tilted (55-degree) pairs of micrographs that are used later for processing and obtaining an initial 3D model of the yeast exosome. To this end, we perform RCT and then refine the initial 3D model using the projection matching refinement method [3].
Structural Biology, Issue 49, Electron microscopy, single particle three-dimensional reconstruction, exosome complex, negative staining
Measurement of Greenhouse Gas Flux from Agricultural Soils Using Static Chambers
Authors: Sarah M. Collier, Matthew D. Ruark, Lawrence G. Oates, William E. Jokela, Curtis J. Dell.
Institutions: University of Wisconsin-Madison, USDA-ARS Dairy Forage Research Center, USDA-ARS Pasture Systems Watershed Management Research Unit.
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. Change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based flux of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable to large-scale GHG emissions studies.
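The flux computation at the heart of the method, regressing headspace concentration on time and scaling the slope by chamber geometry via the ideal gas law, is compact enough to sketch. One standard form of the conversion is shown below; all numbers are invented, and a real analysis would also screen each regression for nonlinearity as the chamber headspace saturates.

```python
# Hedged sketch of static-chamber flux calculation. Chamber dimensions,
# temperature, and gas readings are invented illustration values.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def chamber_flux(minutes, ppm, vol_m3, area_m2, temp_c,
                 pressure_pa=101325, molar_mass=44.01):
    """Return flux in mg gas m^-2 hr^-1 (default 44 g/mol fits CO2 or N2O)."""
    slope_ppm_hr = np.polyfit(np.asarray(minutes) / 60.0, ppm, 1)[0]
    mol_air = pressure_pa * vol_m3 / (R * (temp_c + 273.15))
    umol_hr = slope_ppm_hr * mol_air        # ppm = umol gas per mol air
    return umol_hr * molar_mass / 1000.0 / area_m2

# Example: four headspace samples over 45 min from a 0.02 m^3 chamber
# covering 0.09 m^2 of soil at 22 deg C.
flux = chamber_flux([0, 15, 30, 45], [330, 352, 371, 395], 0.02, 0.09, 22.0)
print(f"flux ≈ {flux:.1f} mg m^-2 hr^-1")
```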
Environmental Sciences, Issue 90, greenhouse gas, trace gas, gas flux, static chamber, soil, field, agriculture, climate
Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling
Authors: Jennifer L Soong, Dan Reuss, Colin Pinney, Ty Boyack, Michelle L Haddix, Catherine E Stewart, M. Francesca Cotrufo.
Institutions: Colorado State University, USDA-ARS.
Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes, from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N, 18O, or 2H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope-labeled plant material has been used in various studies of litter decomposition and soil organic matter formation [1-4]. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e., leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage [5-7]. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C- and 15N-labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components, and we present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. Through this method we have produced plant material with a uniform label of 4.4 atom% 13C and 6.7 atom% 15N, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N between its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13C-CO2 atmosphere for successful plant production. This chamber represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.
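A common downstream use of such labeled material (an assumption on our part, not a step in this protocol) is a two-pool mixing calculation estimating what fraction of a sample's carbon derives from the labeled litter. Only the 4.4 atom% 13C label level below comes from the text; the background and sample values are invented.

```python
# Generic two-pool isotope mixing model: fraction of sample C derived from
# labeled plant material, given atom% 13C of the two end members.
def labeled_fraction(sample, unlabeled, labeled):
    return (sample - unlabeled) / (labeled - unlabeled)

background = 1.08   # invented atom% 13C of unlabeled soil C (near natural abundance)
label_level = 4.4   # atom% 13C of the uniformly labeled plant material (from the text)
sample = 1.45       # invented atom% 13C measured after litter addition

f = labeled_fraction(sample, background, label_level)
print(f"{100 * f:.1f}% of sample C derived from labeled litter")   # ~11.1%
```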
Environmental Sciences, Issue 83, 13C, 15N, plant, stable isotope labeling, Andropogon gerardii, metabolic compounds, structural compounds, hot water extraction
Fabrication, Densification, and Replica Molding of 3D Carbon Nanotube Microstructures
Authors: Davor Copic, Sei Jin Park, Sameh Tawfick, Michael De Volder, A. John Hart.
Institutions: University of Michigan, IMEC, Belgium.
The introduction of new materials and processes to microfabrication has, in large part, enabled many important advances in microsystems, lab-on-a-chip devices, and their applications. In particular, capabilities for cost-effective fabrication of polymer microstructures were transformed by the advent of soft lithography and other micromolding techniques [1,2], and this led to a revolution in applications of microfabrication to biomedical engineering and biology. Nevertheless, it remains challenging to fabricate microstructures with well-defined nanoscale surface textures, and to fabricate arbitrary 3D shapes at the micro-scale. Robustness of master molds and maintenance of shape integrity are especially important for achieving high fidelity replication of complex structures while preserving their nanoscale surface texture. The combination of hierarchical textures and heterogeneous shapes is a profound challenge to existing microfabrication methods, which largely rely upon top-down etching using fixed mask templates. On the other hand, the bottom-up synthesis of nanostructures such as nanotubes and nanowires can offer new capabilities to microfabrication, in particular by taking advantage of the collective self-organization of nanostructures and local control of their growth behavior with respect to microfabricated patterns. Our goal is to introduce vertically aligned carbon nanotubes (CNTs), which we refer to as CNT "forests", as a new microfabrication material. We present details of a suite of related methods recently developed by our group: fabrication of CNT forest microstructures by thermal CVD from lithographically patterned catalyst thin films; self-directed elastocapillary densification of CNT microstructures; and replica molding of polymer microstructures using CNT composite master molds. In particular, our work shows that self-directed capillary densification ("capillary forming"), which is performed by condensation of a solvent onto the substrate with CNT microstructures, significantly increases the packing density of CNTs. This process enables directed transformation of vertical CNT microstructures into straight, inclined, and twisted shapes, which have robust mechanical properties exceeding those of typical microfabrication polymers. This in turn enables formation of nanocomposite CNT master molds by capillary-driven infiltration of polymers. The replica structures exhibit the anisotropic nanoscale texture of the aligned CNTs and can have walls with sub-micron thickness and aspect ratios exceeding 50:1. Integration of CNT microstructures in fabrication offers further opportunity to exploit the electrical and thermal properties of CNTs, and diverse capabilities for chemical and biochemical functionalization [3].
Mechanical Engineering, Issue 65, Physics, Carbon nanotube, microstructure, fabrication, molding, transfer, polymer
Methods for Facilitating Microbial Growth on Pulp Mill Waste Streams and Characterization of the Biodegradation Potential of Cultured Microbes
Authors: Stephanie L. Mathews, Ali S. Ayoub, Joel Pawlak, Amy M. Grunden.
Institutions: North Carolina State University.
The kraft process separates lignin from the polysaccharides within the lignocellulose of wood chips, yielding pulp that will produce high quality paper. Black liquor is a pulping waste generated by the kraft process that has potential for downstream bioconversion. However, the recalcitrant nature of the lignocellulose resources, the chemical derivatives of lignocellulose that constitute the majority of available organic carbon within black liquor, and its basic pH present challenges to microbial biodegradation of this waste material. The methods presented here for the collection and modification of black liquor for microbial growth are aimed at utilizing this pulp waste by converting the lignin, organic acids, and polysaccharide degradation byproducts into valuable chemicals. The lignocellulose extraction techniques presented provide a reproducible method for preparing lignocellulose growth substrates for understanding the metabolic capacities of cultured microorganisms. Gas chromatography-mass spectrometry enables the identification and quantification of the fermentation products resulting from the growth of microorganisms on pulping waste. Used together, these methods can facilitate the determination of the metabolic activity of microorganisms with the potential to produce fermentation products that would provide greater value to the pulping system and reduce effluent waste, thereby increasing potential paper milling profits and offering additional uses for black liquor.
Environmental Sciences, Issue 82, biodegradation (bacterial degradation), pulp mill waste, black liquor, kraft process, lignocellulose extraction, microorganisms, fermentation products, GC-MS
Fabrication of Nano-engineered Transparent Conducting Oxides by Pulsed Laser Deposition
Authors: Paolo Gondoni, Matteo Ghidelli, Fabio Di Fonzo, Andrea Li Bassi, Carlo S. Casari.
Institutions: Politecnico di Milano, Istituto Italiano di Tecnologia.
Nanosecond Pulsed Laser Deposition (PLD) in the presence of a background gas allows the deposition of metal oxides with tunable morphology, structure, density, and stoichiometry through proper control of the plasma plume expansion dynamics. Such versatility can be exploited to produce nanostructured films ranging from compact and dense to nanoporous, the latter characterized by a hierarchical assembly of nano-sized clusters. In particular, we describe the detailed methodology to fabricate two types of Al-doped ZnO (AZO) films as transparent electrodes in photovoltaic devices: 1) at low O2 pressure, compact films with electrical conductivity and optical transparency close to the state of the art for transparent conducting oxides (TCOs) can be deposited at room temperature, making them compatible with thermally sensitive materials such as the polymers used in organic photovoltaics (OPVs); 2) at higher pressures, highly light-scattering hierarchical structures resembling a forest of nano-trees are produced. Such structures show a high haze factor (>80%) and may be exploited to enhance light trapping. The method described here for AZO films can be applied to other metal oxides relevant for technological applications such as TiO2, Al2O3, WO3, and Ag4O4.
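The haze factor quoted (>80%) is conventionally the ratio of diffuse to total transmittance; a minimal computation with invented spectrophotometer readings:

```python
# Haze factor = diffuse transmittance / total transmittance (standard
# definition; the readings below are invented illustration values).
def haze_factor(total_t, diffuse_t):
    return diffuse_t / total_t

total, diffuse = 0.82, 0.70    # hypothetical transmittances at one wavelength
print(f"haze = {100 * haze_factor(total, diffuse):.0f}%")   # ~85%
```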
Materials Science, Issue 72, Physics, Nanotechnology, Nanoengineering, Oxides, thin films, thin film theory, deposition and growth, Pulsed Laser Deposition (PLD), Transparent conducting oxides (TCO), Hierarchically organized nanostructured oxides, Al doped ZnO (AZO) films, enhanced light scattering capability, gases, deposition, nanoporous, nanoparticles, Van der Pauw, scanning electron microscopy, SEM
Determination of Microbial Extracellular Enzyme Activity in Waters, Soils, and Sediments using High Throughput Microplate Assays
Authors: Colin R. Jackson, Heather L. Tyler, Justin J. Millar.
Institutions: The University of Mississippi.
Much of the nutrient cycling and carbon processing in natural environments occurs through the activity of extracellular enzymes released by microorganisms. Thus, measurement of the activity of these extracellular enzymes can give insights into the rates of ecosystem level processes, such as organic matter decomposition or nitrogen and phosphorus mineralization. Assays of extracellular enzyme activity in environmental samples typically involve exposing the samples to artificial colorimetric or fluorometric substrates and tracking the rate of substrate hydrolysis. Here we describe microplate based methods for these procedures that allow the analysis of large numbers of samples within a short time frame. Samples are allowed to react with artificial substrates within 96-well microplates or deep well microplate blocks, and enzyme activity is subsequently determined by absorption or fluorescence of the resulting end product using a typical microplate reader or fluorometer. Such high throughput procedures not only facilitate comparisons between spatially separate sites or ecosystems, but also substantially reduce the cost of such assays by reducing overall reagent volumes needed per sample.
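A hedged sketch of the rate calculation these assays end in: fit the linear rise in signal over time, calibrate it against a product standard curve, and normalize by sample volume. All readings and volumes below are invented illustration values, not a prescription from the protocol.

```python
# Illustrative conversion of microplate readings to enzyme activity.
import numpy as np

times_hr = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
rfu = np.array([120., 410., 690., 980., 1270.])   # one well's readings

# Standard curve: known nmol of product vs. measured fluorescence (invented)
std_nmol = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
std_rfu = np.array([95., 390., 700., 1330., 2600.])
nmol_per_rfu = 1.0 / np.polyfit(std_nmol, std_rfu, 1)[0]

rate_rfu_hr = np.polyfit(times_hr, rfu, 1)[0]     # linear phase slope
sample_ml = 0.2                                   # 200 ul per well (assumed)
activity = rate_rfu_hr * nmol_per_rfu / sample_ml
print(f"activity ≈ {activity:.2f} nmol substrate hr^-1 ml^-1")
```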
Environmental Sciences, Issue 80, Environmental Monitoring, Ecological and Environmental Processes, Environmental Microbiology, Ecology, extracellular enzymes, freshwater microbiology, soil microbiology, microbial activity, enzyme activity
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM) [1-4] is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data [2,5,6]. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors [7,8]. Using logistic regression analysis of subject scores (i.e., pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e., composite networks with improved discrimination of patients from healthy control subjects [5,6]. Cross-validation within the derivation set can be performed using bootstrap resampling techniques [9]. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets [10]. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation [11]. These standardized values can in turn be used to assist in differential diagnosis [12,13] and to assess disease progression and treatment effects at the network level [7,14-16]. We present an example of the application of this methodology to FDG PET data of Parkinson's disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
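The named core steps, logarithmic conversion, mean centering, PCA into spatial covariance patterns (GIS), and scalar subject scores, map onto a short numpy sketch. This is a schematic of the SSM idea on synthetic data, not the authors' in-house software; a logistic regression on the resulting scores would then combine patterns for patient/control discrimination.

```python
# Schematic SSM core on synthetic subject-by-voxel data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=2.0, sigma=0.3, size=(40, 5000))  # 40 subjects

logd = np.log(data)
logd -= logd.mean(axis=1, keepdims=True)   # remove subject global means
srp = logd - logd.mean(axis=0)             # remove voxel group means

# PCA via SVD: rows of vt are spatial covariance patterns (GIS-like),
# and u*s gives each subject's expression score on each pattern.
u, s, vt = np.linalg.svd(srp, full_matrices=False)
scores = u * s
explained = s**2 / np.sum(s**2)
print("variance explained by first 3 patterns:", explained[:3].round(3))
```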
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer), is chosen for the buffer's particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions; thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light; so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
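Because the abstract cautions that in situ temperature influences enzyme kinetics in cross-site comparisons, one simple adjustment is to fit a Q10 model and report rates at a common reference temperature. The sketch below is a generic Q10 fit on invented data, not a step of the published assay.

```python
# Generic Q10 temperature-kinetics fit: ln(activity) is regressed on
# temperature, giving Q10 and a rate standardized to 20 deg C.
import numpy as np

temps_c = np.array([4.0, 10.0, 16.0, 22.0, 28.0])
activity = np.array([1.1, 1.9, 3.2, 5.3, 9.0])   # invented nmol hr^-1 g^-1

slope, intercept = np.polyfit(temps_c, np.log(activity), 1)
q10 = np.exp(10.0 * slope)
ref20 = np.exp(intercept + slope * 20.0)
print(f"Q10 ≈ {q10:.2f}; activity at 20 °C ≈ {ref20:.2f} nmol hr^-1 g^-1")
```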
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Authors: Lisa M. Weatherly, Rachel H. Kennedy, Juyoung Shim, Julie A. Gosse.
Institutions: University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g., by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells [3-5]. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells [6], can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies [1]. Originally published by Naal et al. [1], this degranulation assay has been adapted in our laboratory for the screening of drugs and toxicants, and we demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease [7-11], although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function [2]. To avoid the use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm) [12]. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation and, more broadly, their allergic potential.
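The concentration check uses the Beer-Lambert law with the cited ε280 = 4,200 L/M/cm; a worked inversion with a hypothetical absorbance reading and an assumed 1 cm cuvette:

```python
# Beer-Lambert inversion: c = A / (epsilon * path length).
def beer_lambert_conc(a280, epsilon=4200.0, path_cm=1.0):
    """Return molar concentration from absorbance at 280 nm."""
    return a280 / (epsilon * path_cm)

a = 0.21                                  # hypothetical absorbance reading
c = beer_lambert_conc(a)
print(f"[triclosan] ≈ {c * 1e6:.0f} uM")  # 0.21 / 4200 = 5e-5 M = 50 uM
```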
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Authors: Jennifer L. Koch, David W. Carey.
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which creates entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex [1]. Here we present a protocol that we have developed, based on the artificial infestation technique by Houston [2], which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of management of BBD through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
Linking Predation Risk, Herbivore Physiological Stress and Microbial Decomposition of Plant Litter
Authors: Oswald J. Schmitz, Mark A. Bradford, Michael S. Strickland, Dror Hawlena.
Institutions: Yale University, Virginia Tech, The Hebrew University of Jerusalem.
The quantity and quality of detritus entering the soil determine the rate of decomposition by microbial communities as well as rates of nitrogen (N) recycling and carbon (C) sequestration [1,2]. Plant litter comprises the majority of detritus [3], and so it is assumed that decomposition is only marginally influenced by biomass inputs from animals such as herbivores and carnivores [4,5]. However, carnivores may influence microbial decomposition of plant litter via a chain of interactions in which predation risk alters the physiology of their herbivore prey, which in turn alters soil microbial functioning when the herbivore carcasses are decomposed [6]. A physiological stress response by herbivores to the risk of predation can change the C:N elemental composition of herbivore biomass [7-9], because stress from predation risk increases herbivore basal energy demands, which in nutrient-limited systems forces herbivores to shift their consumption from N-rich resources that support growth and reproduction to C-rich carbohydrate resources that support heightened metabolism [6]. Herbivores have limited ability to store excess nutrients, so stressed herbivores excrete N as they increase carbohydrate-C consumption [7]. Ultimately, prey stressed by predation risk increase their body C:N ratio [7,10], making them poorer-quality resources for the soil microbial pool, likely due to lower availability of labile N for microbial enzyme production [6]. Thus, decomposition of carcasses of stressed herbivores has a priming effect on the functioning of microbial communities that decreases the subsequent ability of microbes to decompose plant litter [6,10,11]. We present the methodology to evaluate linkages between predation risk and litter decomposition by soil microbes. We describe how to induce stress in herbivores from predation risk, measure those stress responses, and measure the consequences for microbial decomposition. We use insights from a model grassland ecosystem comprising a hunting spider predator (Pisaurina mira), a dominant grasshopper herbivore (Melanoplus femurrubrum), and a variety of grass and forb plants [9].
Environmental Sciences, Issue 73, Microbiology, Plant Biology, Entomology, Organisms, Investigative Techniques, Biological Phenomena, Chemical Phenomena, Metabolic Phenomena, Microbiological Phenomena, Earth Resources and Remote Sensing, Life Sciences (General), Litter Decomposition, Ecological Stoichiometry, Physiological Stress and Ecosystem Function, Predation Risk, Soil Respiration, Carbon Sequestration, Soil Science, respiration, spider, grasshopper, model system
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
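The resolution of localization microscopy comes from fitting each single-molecule spot to sub-pixel precision. The sketch below uses an intensity-weighted centroid as a simple stand-in for the Gaussian fitting typically used in FPALM analysis; the synthetic spot, PSF width, and background level are invented.

```python
# Toy single-molecule localization: centroid fit of one noisy PSF spot.
import numpy as np

rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:15, 0:15]
true_x, true_y, sigma = 7.3, 6.8, 2.0        # PSF width in pixels (assumed)
spot = 200 * np.exp(-((xx - true_x)**2 + (yy - true_y)**2) / (2 * sigma**2))
img = rng.poisson(spot + 5)                  # shot noise + flat background

bg = np.median(img)
w = np.clip(img - bg, 0, None).astype(float)
est_x = (w * xx).sum() / w.sum()
est_y = (w * yy).sum() / w.sum()
# Localization precision scales roughly as sigma / sqrt(detected photons),
# which is how tens-of-nanometer precision arises from a ~250 nm PSF.
print(f"true=({true_x:.2f},{true_y:.2f})  est=({est_x:.2f},{est_y:.2f})")
```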
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
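A minimal sketch of a DoE starting point: enumerate a two-level full-factorial design over three illustrative factors and estimate main effects by least squares. The factor names and effect sizes are invented; the study itself used software-guided optimal designs with step-wise augmentation rather than a plain full factorial.

```python
# Two-level full-factorial design with a main-effects fit on synthetic data.
import itertools
import numpy as np

factors = ["promoter_strength", "plant_age", "incubation_temp"]  # hypothetical
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

rng = np.random.default_rng(3)
true_effects = np.array([1.5, 0.4, 0.9])          # invented effect sizes
yield_mg = 10 + design @ true_effects + rng.normal(0, 0.3, len(design))

# Least-squares main effects: X = [1 | design]
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, yield_mg, rcond=None)
for name, c in zip(["intercept"] + factors, coef):
    print(f"{name:20s} {c:+.2f}")
```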
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
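The first stage described, estimating a per-pixel orientation field with a bank of Gabor filters, can be sketched with scikit-image. The filter frequency, orientation sampling, and random test image are assumptions, not the authors' settings.

```python
# Gabor-bank orientation field estimate on a stand-in image patch.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(4)
img = rng.normal(size=(128, 128))           # stand-in for a mammogram patch

thetas = np.deg2rad(np.arange(0, 180, 15))  # 12 orientations (assumed)
responses = np.stack([
    np.hypot(*gabor(img, frequency=0.1, theta=t)) for t in thetas
])
orientation = thetas[responses.argmax(axis=0)]   # per-pixel dominant angle
strength = responses.max(axis=0)
# Node-like sites (radiating/intersecting patterns) would next be located
# by fitting phase portraits to this orientation field.
print(orientation.shape, float(strength.mean()))
```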
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
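For the enumeration methods (pour- and spread-plating), the only calculation involved is the dilution arithmetic; a worked example with invented counts:

```python
# CFU/ml = colony count / (dilution factor x volume plated).
def cfu_per_ml(colonies, dilution, plated_ml):
    return colonies / (dilution * plated_ml)

# Example: 187 colonies counted on a plate spread with 0.1 ml of a
# 10^-6 dilution of the original culture.
print(f"{cfu_per_ml(187, 1e-6, 0.1):.2e} CFU/ml")   # 1.87e+09 CFU/ml
```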
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Visualizing Proteins and Macromolecular Complexes by Negative Stain EM: from Grid Preparation to Image Acquisition
Authors: David S. Booth, Agustin Avila-Sakar, Yifan Cheng.
Institutions: University of California San Francisco.
Single particle electron microscopy (EM), of both negatively stained and frozen-hydrated biological samples, has become a versatile tool in structural biology [1]. In recent years, this method has achieved great success in studying structures of proteins and macromolecular complexes [2,3]. Compared with electron cryomicroscopy (cryoEM), in which frozen-hydrated protein samples are embedded in a thin layer of vitreous ice [4], negative staining is a simpler sample preparation method in which protein samples are embedded in a thin layer of dried heavy metal salt to increase specimen contrast [5]. The enhanced contrast of negative stain EM allows examination of relatively small biological samples. In addition to determining the three-dimensional (3D) structure of purified proteins or protein complexes [6], this method can be used for much broader purposes. For example, negative stain EM can easily be used to visualize purified protein samples, obtaining information such as the homogeneity/heterogeneity of the sample and the formation of protein complexes or large assemblies, or simply to evaluate the quality of a protein preparation. In this video article, we present a complete protocol for observing negatively stained protein samples, from preparing carbon-coated grids for negative stain EM to acquiring images of the negatively stained sample in an electron microscope operated at 120 kV accelerating voltage. These protocols are used routinely in our laboratory and can be easily followed by novice users.
Bioengineering, Issue 58, Electron Microscopy, EM, cryoEM, protein, negative stain, 3D structures
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g., [5,12,20]) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework in which the model becomes an integral part of the optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem while allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods [3,4,9,10,13-15,17-19,22,23,25]. In this application, we demonstrate a program that follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model [7] with the multiobjective evolutionary algorithm SPEA2 [26] and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
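The multiobjective core can be illustrated without SWAT or SPEA2: score a population of candidate practice allocations on the two objectives and extract the nondominated (Pareto) set. The cost and pollution functions below are invented stubs standing in for practice budgets and SWAT output.

```python
# Toy two-objective screening of candidate conservation-practice allocations.
import numpy as np

rng = np.random.default_rng(5)
n_fields = 30
pop = rng.random((60, n_fields)) < 0.5      # 60 candidate placements

cost = pop.sum(axis=1).astype(float)        # stub: cost = practices installed
# Stub for SWAT output: practices on the first 10 fields reduce pollution most
pollution = 100.0 - 2.5 * pop[:, :10].sum(axis=1) + rng.normal(0, 1, 60)

def nondominated(costs, polls):
    """Indices of solutions not dominated on (cost, pollution), both minimized."""
    keep = []
    for i in range(len(costs)):
        dominated = np.any((costs <= costs[i]) & (polls <= polls[i]) &
                           ((costs < costs[i]) | (polls < polls[i])))
        if not dominated:
            keep.append(i)
    return keep

front = nondominated(cost, pollution)
print(f"{len(front)} solutions on the cost/water-quality tradeoff frontier")
```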
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.