Pubmed Article
Cost-effective mapping of benthic habitats in inland reservoirs through split-beam sonar, indicator kriging, and historical geologic data.
PLoS ONE
PUBLISHED: 01-01-2014
Because bottom substrate composition is an important control on the temporal and spatial location of the aquatic community, accurate maps of benthic habitats of inland lakes and reservoirs provide valuable information to managers, recreational users, and scientists. Therefore, we collected vertical, split-beam sonar data (roughness [E1], hardness [E2], and bathymetry) and sediment samples to make such maps. Statistical calibration between sonar parameters and sediment classes was problematic because the E1:E2 ratios for soft (muck and clay) sediments overlapped the lower and narrower range for hard (gravel) substrates. Thus, we used indicator kriging (IK) to map the probability that unsampled locations did not contain coarse sediments. To overcome the calibration issue we tested proxies for the natural processes and anthropogenic history of the reservoir as potential predictive variables. Of these, a geologic map proved to be the most useful. The central alluvial valley and mudflats contained mainly muck and organic-rich clays. The surrounding glacial till and shale bedrock uplands contained mainly poorly sorted gravels. Anomalies in the sonar data suggested that the organic-rich sediments also contained trapped gases, presenting additional interpretive issues for the mapping. We extended the capability of inexpensive split-beam sonar units through the incorporation of historic geologic maps and other records as well as validation with dredge samples. Through the integration of information from multiple data sets, we were able to objectively identify bottom substrate and provide reservoir users with an accurate map of available benthic habitat.
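The indicator-kriging step can be sketched in a few lines of code. The snippet below is a minimal, self-contained illustration rather than the authors' implementation: dredge-sampled locations are coded 1 where coarse sediment is absent and 0 where it is present, and ordinary kriging of that indicator with an assumed spherical semivariogram gives the probability that an unsampled location does not contain coarse sediment. Coordinates and variogram parameters are placeholders.

```python
import numpy as np

# Example sample locations (x, y) and sediment classes; values are illustrative only.
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
is_coarse = np.array([0, 1, 1, 0, 0])          # 1 = gravel present in the dredge sample
indicator = 1 - is_coarse                      # 1 = "does not contain coarse sediment"

def spherical_gamma(h, nugget=0.05, sill=0.25, rng=15.0):
    """Spherical semivariogram model (assumed parameters, not fitted here)."""
    g = np.where(h < rng,
                 nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 nugget + sill)
    return np.where(h == 0, 0.0, g)

def indicator_krige(target, coords, values):
    """Ordinary kriging of a 0/1 indicator at one target location."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Ordinary kriging system with a Lagrange multiplier for the unbiasedness constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.linalg.norm(coords - target, axis=1))
    weights = np.linalg.solve(A, b)[:n]
    # Kriged indicator = estimated probability of the "no coarse sediment" class.
    return float(np.clip(weights @ values, 0.0, 1.0))

print(indicator_krige(np.array([2.0, 8.0]), coords, indicator))
```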
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Published: 03-06-2014
ABSTRACT
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
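For orientation, internal P loading from core incubations is usually reported as an areal flux: the change in water-column P mass above the sediment over time, divided by the core cross-sectional area. The following sketch uses invented numbers and assumes the overlying-water volume stays roughly constant during the incubation; it is illustrative, not the authors' analysis code.

```python
import numpy as np

# Hypothetical incubation data for one sediment core.
days = np.array([0, 2, 4, 7, 10, 14], dtype=float)                    # sampling times (d)
srp_mg_per_L = np.array([0.010, 0.018, 0.025, 0.038, 0.049, 0.065])   # SRP in overlying water

water_volume_L = 1.1          # overlying water volume in the core tube (assumed constant)
core_diameter_m = 0.10        # inner diameter of the core tube
core_area_m2 = np.pi * (core_diameter_m / 2) ** 2

# P mass in the overlying water at each time point (mg).
p_mass_mg = srp_mg_per_L * water_volume_L

# Release rate = slope of P mass vs. time, normalized to sediment surface area.
slope_mg_per_day, _ = np.polyfit(days, p_mass_mg, 1)
flux_mg_m2_day = slope_mg_per_day / core_area_m2

print(f"Sediment P release rate: {flux_mg_m2_day:.2f} mg P m^-2 d^-1")
```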
25 Related JoVE Articles!
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Authors: Mackenzie J. Denyes, Michèle A. Parisien, Allison Rutter, Barbara A. Zeeb.
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available their Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for physical and chemical characteristics for biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury as well as nutrients (phosphorus, nitrite and nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures, earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for the buffer's particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting the data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
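The conversion from raw fluorescence to a potential activity is conceptually simple; a hedged sketch is shown below, following the general logic of MUB-based soil enzyme calculations rather than any specific published spreadsheet. Net fluorescence is corrected for the soil and substrate controls and for quenching, converted to moles of fluorophore with a standard-curve emission coefficient, and normalized by incubation time and dry soil mass; all numbers are placeholders.

```python
def potential_enzyme_activity(assay_fluor, homogenate_ctrl, substrate_ctrl,
                              quench_coef, emission_coef,
                              buffer_vol_ml, well_slurry_vol_ml,
                              incubation_h, dry_soil_g):
    """
    Potential extracellular enzyme activity in nmol g^-1 dry soil h^-1.

    assay_fluor        -- fluorescence of the soil slurry + labeled substrate well
    homogenate_ctrl    -- fluorescence of soil slurry + buffer (soil background)
    substrate_ctrl     -- fluorescence of substrate + buffer (substrate background)
    quench_coef        -- fluorescence of standard in slurry / standard in buffer
    emission_coef      -- fluorescence units per nmol of free fluorophore (standard-curve slope)
    buffer_vol_ml      -- total volume of the soil slurry
    well_slurry_vol_ml -- slurry volume pipetted into each well
    """
    net_fluor = (assay_fluor - homogenate_ctrl) / quench_coef - substrate_ctrl
    nmol_in_well = net_fluor / emission_coef
    # Scale from the well aliquot up to the whole slurry, then normalize.
    return (nmol_in_well * (buffer_vol_ml / well_slurry_vol_ml)
            / (incubation_h * dry_soil_g))

# Placeholder values for a single well.
print(potential_enzyme_activity(assay_fluor=5200, homogenate_ctrl=300, substrate_ctrl=150,
                                quench_coef=0.85, emission_coef=120.0,
                                buffer_vol_ml=91.0, well_slurry_vol_ml=0.2,
                                incubation_h=3.0, dry_soil_g=1.0))
```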
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Fine-tuning the Size and Minimizing the Noise of Solid-state Nanopores
Authors: Eric Beamish, Harold Kwok, Vincent Tabard-Cossa, Michel Godin.
Institutions: University of Ottawa, University of Ottawa.
Solid-state nanopores have emerged as a versatile tool for the characterization of single biomolecules such as nucleic acids and proteins1. However, the creation of a nanopore in a thin insulating membrane remains challenging. Fabrication methods involving specialized focused electron beam systems can produce well-defined nanopores, but the yield of reliable and low-noise nanopores in commercially available membranes remains low2,3 and size control is nontrivial4,5. Here, the application of high electric fields to fine-tune the size of the nanopore while ensuring optimal low-noise performance is demonstrated. These short pulses of high electric field are used to produce a pristine electrical signal and allow for enlarging of nanopores with subnanometer precision upon prolonged exposure. This method is performed in situ in an aqueous environment using standard laboratory equipment, improving the yield and reproducibility of solid-state nanopore fabrication.
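During this kind of conditioning, pore size is commonly tracked by inferring an effective diameter from the measured open-pore conductance, using the widely used model of a cylindrical pore with access resistance. The sketch below implements that estimate only; it is not the authors' software, and the membrane thickness and electrolyte conductivity are example values.

```python
import math

def pore_diameter_from_conductance(G_siemens, sigma_S_per_m, thickness_m):
    """
    Effective diameter of a cylindrical nanopore (with access resistance) from its
    open-pore conductance: G = sigma * [4*L/(pi*d^2) + 1/d]^-1, solved for d.
    """
    a = sigma_S_per_m / G_siemens
    return (1.0 + math.sqrt(1.0 + 16.0 * a * thickness_m / math.pi)) / (2.0 * a)

# Example: a 10 nS pore in ~1 M KCl (~10.5 S/m) through a 30-nm-thick SiN membrane.
d = pore_diameter_from_conductance(G_siemens=10e-9, sigma_S_per_m=10.5, thickness_m=30e-9)
print(f"Effective pore diameter: {d * 1e9:.1f} nm")
```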
Physics, Issue 80, Nanopore, Solid-State, Size Control, Noise Reduction, Translocation, DNA, High Electric Fields, Nanopore Conditioning
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
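While the study relies on software-guided optimal and augmented designs, the underlying idea of crossing factor levels can be illustrated with a plain full-factorial design matrix. The factors and levels below are hypothetical stand-ins for construct, plant age, and incubation temperature, not the settings used in the paper.

```python
from itertools import product

# Hypothetical factors and levels for a transient-expression experiment.
factors = {
    "promoter": ["CaMV 35S", "nos"],
    "plant_age_days": [35, 42, 49],
    "incubation_temp_C": [22, 25],
}

# Full factorial design: every combination of factor levels becomes one run.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(design, start=1):
    print(f"run {i:02d}: {run}")
print(f"{len(design)} runs in total")  # 2 x 3 x 2 = 12
```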
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. FMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
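The core computation of IC can be sketched as follows, using synthetic data and a simplified correlation-to-prototype score in place of the published classifier-based discriminability measure: for each region, each pattern is scored by how much more it correlates with its own condition's mean pattern than with the other condition's, and the resulting discriminability time courses are then correlated between regions.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminability_timecourse(patterns, labels):
    """
    patterns : (n_timepoints, n_voxels) multi-voxel patterns for one ROI
    labels   : (n_timepoints,) condition label (0 or 1) for each pattern
    Returns a per-time-point MVP discriminability score: correlation with the
    correct condition prototype minus correlation with the incorrect one
    (prototypes are computed leaving the current time point out).
    """
    scores = np.empty(len(labels), dtype=float)
    for t, (p, lab) in enumerate(zip(patterns, labels)):
        keep = np.arange(len(labels)) != t
        proto_own = patterns[keep & (labels == lab)].mean(axis=0)
        proto_other = patterns[keep & (labels != lab)].mean(axis=0)
        scores[t] = np.corrcoef(p, proto_own)[0, 1] - np.corrcoef(p, proto_other)[0, 1]
    return scores

# Synthetic example: two ROIs, 100 time points, 50 voxels each.
labels = rng.integers(0, 2, size=100)
roi_a = rng.standard_normal((100, 50)) + labels[:, None] * 0.5
roi_b = rng.standard_normal((100, 50)) + labels[:, None] * 0.5

ic = np.corrcoef(discriminability_timecourse(roi_a, labels),
                 discriminability_timecourse(roi_b, labels))[0, 1]
print(f"Informational connectivity (simplified): {ic:.2f}")
```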
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
Laser-induced Breakdown Spectroscopy: A New Approach for Nanoparticle's Mapping and Quantification in Organ Tissue
Authors: Lucie Sancey, Vincent Motto-Ros, Shady Kotb, Xiaochun Wang, François Lux, Gérard Panczer, Jin Yu, Olivier Tillement.
Institutions: CNRS - Université Lyon 1, CNRS - Université Lyon 1, CNRS - Université Lyon 1.
Emission spectroscopy of laser-induced plasma was applied to elemental analysis of biological samples. Laser-induced breakdown spectroscopy (LIBS) performed on thin sections of rodent tissues (kidneys and tumor) allows the detection of inorganic elements such as (i) Na, Ca, Cu, Mg, P, and Fe, naturally present in the body and (ii) Si and Gd, detected after the injection of gadolinium-based nanoparticles. The animals were euthanized 1 to 24 hr after intravenous injection of particles. A two-dimensional scan of the sample, performed using a motorized micrometric 3D-stage, allowed the infrared laser beam to explore the surface with a lateral resolution of less than 100 μm. Quantitative chemical images of the Gd element inside the organ were obtained with sub-mM sensitivity. LIBS offers a simple and robust method to study the distribution of inorganic materials without any specific labeling. Moreover, the compatibility of the setup with standard optical microscopy emphasizes its potential to provide multiple images of the same biological tissue with different types of response: elemental, molecular, or cellular.
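Turning a raster of LIBS spectra into a quantitative image amounts to integrating the element's emission line at each shot, arranging the values on the scan grid, and applying a calibration from standards. The sketch below illustrates this with purely synthetic data; the line position, spectral range, and calibration slope are placeholders, not the values used in the study.

```python
import numpy as np

# Synthetic raster scan: ny x nx shots, each with a spectrum of `nchan` channels.
nx, ny, nchan = 40, 30, 1024
rng = np.random.default_rng(1)
wavelengths = np.linspace(300.0, 500.0, nchan)            # nm (placeholder range)
spectra = rng.normal(100.0, 5.0, size=(ny, nx, nchan))    # background + noise only
gd_line = 342.2                                           # assumed Gd emission line (nm)

def line_intensity(spec, wl, center, half_width=0.5):
    """Background-subtracted, integrated intensity of one emission line."""
    in_line = np.abs(wl - center) <= half_width
    near = (np.abs(wl - center) > half_width) & (np.abs(wl - center) <= 3 * half_width)
    return spec[..., in_line].sum(axis=-1) - spec[..., near].mean(axis=-1) * in_line.sum()

intensity_map = line_intensity(spectra, wavelengths, gd_line)   # (ny, nx) image

# Linear calibration from standards: concentration = slope * intensity + intercept.
slope, intercept = 2.0e-3, 0.0                                  # placeholder values (mM per count)
gd_map_mM = slope * intensity_map + intercept
print(gd_map_mM.shape, f"max Gd ~ {gd_map_mM.max():.2f} mM (synthetic data)")
```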
Physics, Issue 88, Microtechnology, Nanotechnology, Tissues, Diagnosis, Inorganic Chemistry, Organic Chemistry, Physical Chemistry, Plasma Physics, laser-induced breakdown spectroscopy, nanoparticles, elemental mapping, chemical images of organ tissue, quantification, biomedical measurement, laser-induced plasma, spectrochemical analysis, tissue mapping
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m2 facility was constructed that consists of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volume over time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
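For reference, the coefficient of variability quoted above is simply the standard deviation expressed as a percentage of the mean across the 24 plots, as in this short sketch (plot values invented for illustration):

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variability: sample standard deviation as % of the mean."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Hypothetical runoff volumes (L) from the 24 plots during one simulated runoff event.
runoff_L = np.random.default_rng(7).normal(loc=120.0, scale=40.0, size=24).clip(min=10)
print(f"Runoff volume CV: {cv_percent(runoff_L):.1f}%")
```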
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
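For data sets where the feature of interest has sufficient contrast, the semi-automated route can begin with something as simple as global thresholding followed by connected-component labeling, as in this minimal scikit-image sketch on a synthetic volume. The real choice of filtering, threshold, and clean-up steps depends on the data set characteristics listed above, and this is a generic illustration rather than any of the custom workflows described in the article.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label

# Synthetic 3D volume standing in for a tomogram: a bright blob in noise.
rng = np.random.default_rng(2)
volume = rng.normal(0.0, 1.0, size=(64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
volume += 4.0 * np.exp(-(((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) / (2 * 6.0 ** 2)))

smoothed = gaussian(volume, sigma=2)                # denoise before thresholding
mask = smoothed > threshold_otsu(smoothed)          # global Otsu threshold
mask = ndimage.binary_opening(mask, iterations=1)   # remove small speckle
labels = label(mask)                                # connected components = candidate features

sizes = np.bincount(labels.ravel())[1:]             # voxel counts per component
print(f"{labels.max()} segmented component(s); largest = {sizes.max()} voxels")
```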
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Characterizing the Composition of Molecular Motors on Moving Axonal Cargo Using "Cargo Mapping" Analysis
Authors: Sylvia Neumann, George E. Campbell, Lukasz Szpankowski, Lawrence S.B. Goldstein, Sandra E. Encalada.
Institutions: The Scripps Research Institute, University of California San Diego, University of California San Diego, University of California San Diego School of Medicine.
Understanding the mechanisms by which molecular motors coordinate their activities to transport vesicular cargoes within neurons requires the quantitative analysis of motor/cargo associations at the single vesicle level. The goal of this protocol is to use quantitative fluorescence microscopy to correlate (“map”) the position and directionality of movement of live cargo to the composition and relative amounts of motors associated with the same cargo. “Cargo mapping” consists of live imaging of fluorescently labeled cargoes moving in axons cultured on microfluidic devices, followed by chemical fixation during recording of live movement, and subsequent immunofluorescence (IF) staining of the exact same axonal regions with antibodies against motors. Colocalization between cargoes and their associated motors is assessed by assigning sub-pixel position coordinates to motor and cargo channels, by fitting Gaussian functions to the diffraction-limited point spread functions representing individual fluorescent point sources. Fixed cargo and motor images are subsequently superimposed onto plots of cargo movement, to “map” them to their tracked trajectories. The strength of this protocol is the combination of live and IF data both to record the transport of vesicular cargoes in live cells and to determine the motors associated with these exact same vesicles. This technique overcomes the limitations of previous approaches that used biochemical methods to determine the average motor composition of purified heterogeneous bulk vesicle populations, as these methods do not reveal compositions on single moving cargoes. Furthermore, this protocol can be adapted for the analysis of other transport and/or trafficking pathways in other cell types to correlate the movement of individual intracellular structures with their protein composition. Limitations of this protocol are the relatively low throughput due to low transfection efficiencies of cultured primary neurons and a limited field of view available for high-resolution imaging. Future applications could include methods to increase the number of neurons expressing fluorescently labeled cargoes.
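Sub-pixel localization of a diffraction-limited spot, the step used here to assign position coordinates to cargo and motor puncta, can be illustrated with a 2D Gaussian fit via scipy. This is a generic sketch on a synthetic spot, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amplitude, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian used to model a diffraction-limited point source."""
    x, y = xy
    return (amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

# Synthetic 15x15 pixel region of interest containing one fluorescent spot.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:15, 0:15]
true_x0, true_y0 = 7.3, 6.8
image = gauss2d((x, y), 800.0, true_x0, true_y0, 1.5, 100.0).reshape(15, 15)
image += rng.normal(0.0, 15.0, image.shape)

p0 = (image.max() - image.min(), 7.0, 7.0, 1.5, image.min())
popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
print(f"Fitted centre: x = {popt[1]:.2f}, y = {popt[2]:.2f} px "
      f"(true {true_x0}, {true_y0})")
```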
Neuroscience, Issue 92, kinesin, dynein, single vesicle, axonal transport, microfluidic devices, primary hippocampal neurons, quantitative fluorescence microscopy
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Authors: Andreas Florian Haas, Ben Knowles, Yan Wei Lim, Tracey McDole Somera, Linda Wegley Kelly, Mark Hatay, Forest Rohwer.
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a range of scientific disciplines, comprising already established protocols and some of the most recently developed techniques. In particular, the metagenomic sequencing techniques used for viral and bacterial community characterization have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collect and process environmental samples. Parameters addressed with these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. It gives easy-to-follow guidelines to conduct comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
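The ~10-30 nm localization precision quoted above can be estimated per molecule from the photon count, PSF width, pixel size, and background, using the widely cited expression of Thompson, Larson, and Webb (2002). The sketch below simply evaluates that formula for illustrative numbers; it is not part of the FPALM acquisition or reconstruction software.

```python
import math

def localization_precision_nm(n_photons, psf_sigma_nm, pixel_nm, bg_noise_photons):
    """
    Lateral localization precision (nm) from Thompson, Larson & Webb (2002):
    var = (s^2 + a^2/12)/N + 8*pi*s^4*b^2 / (a^2 * N^2)
    s = PSF standard deviation, a = pixel size, N = detected photons,
    b = background noise (photons per pixel).
    """
    s, a, N, b = psf_sigma_nm, pixel_nm, n_photons, bg_noise_photons
    variance = (s ** 2 + a ** 2 / 12.0) / N + 8.0 * math.pi * s ** 4 * b ** 2 / (a ** 2 * N ** 2)
    return math.sqrt(variance)

# Illustrative values for a single photoactivated fluorescent protein.
print(f"{localization_precision_nm(n_photons=300, psf_sigma_nm=120.0, pixel_nm=100.0, bg_noise_photons=5.0):.1f} nm")
```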
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is for example the case of future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited in the most sensitive frequency band, by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
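The more homogeneous intensity distribution of higher-order LG modes that motivates this work can be visualized directly from the analytic mode profile. The sketch below evaluates the standard LG(p, l) intensity at the beam waist using scipy's generalized Laguerre polynomials; it is a textbook formula for illustration, not part of the experimental control system described here.

```python
import numpy as np
from scipy.special import genlaguerre
from math import factorial, pi

def lg_intensity(r, phi, p, l, w):
    """
    Intensity of a Laguerre-Gauss LG(p, l) mode at its waist (radius w),
    from the standard analytic expression for the normalized mode amplitude.
    """
    al = abs(l)
    norm = np.sqrt(2.0 * factorial(p) / (pi * factorial(p + al))) / w
    rho = 2.0 * r ** 2 / w ** 2
    amplitude = (norm * (np.sqrt(2.0) * r / w) ** al
                 * genlaguerre(p, al)(rho) * np.exp(-r ** 2 / w ** 2)
                 * np.exp(1j * l * phi))
    return np.abs(amplitude) ** 2

# Compare the fundamental mode with LG33 on a radial cut (waist w = 1, arbitrary units).
r = np.linspace(0.0, 4.0, 9)
print("LG00:", np.round(lg_intensity(r, 0.0, 0, 0, 1.0), 4))
print("LG33:", np.round(lg_intensity(r, 0.0, 3, 3, 1.0), 4))
```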
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
High-Throughput Measurement and Classification of Organic P in Environmental Samples
Authors: Nicholas R. Johnson, Jane E. Hill.
Institutions: University of Vermont.
Many types of organic phosphorus (P) molecules exist in environmental samples1. Traditional P measurements do not detect these organic P compounds since they do not react with colorimetric reagents2,3. Enzymatic hydrolysis (EH) is an emerging method for accurately characterizing organic P forms in environmental samples4,5. This method is only trumped in accuracy by Phosphorus-31 Nuclear Magnetic Resonance Spectroscopy (31P-NMR) -a method that is expensive and requires specialized technical training6. We have adapted an enzymatic hydrolysis method capable of measuring three classes of phosphorus (monoester P, diester P and inorganic P) to a microplate reader system7. This method provides researchers with a fast, accurate, affordable and user-friendly means to measure P species in soils, sediments, manures and, if concentrated, aquatic samples. This is the only high-throughput method for measuring the forms and enzyme-lability of organic P that can be performed in a standard laboratory. The resulting data provides insight to scientists studying system nutrient content and eutrophication potential.
Microbiology, Issue 52, phosphorus, enzymatic-hydrolysis, soil, manure, phosphatase, phytic acid, NaOH-EDTA, organophosphates, molybdate, organic P
Constructing a Low-budget Laser Axotomy System to Study Axon Regeneration in C. elegans
Authors: Wes Williams, Paola Nix, Michael Bastiani.
Institutions: University of Utah.
Laser axotomy followed by time-lapse microscopy is a sensitive assay for axon regeneration phenotypes in C. elegans1. The main difficulties of this assay are the perceived cost ($25-100K) and the technical expertise required for implementing a laser ablation system2,3. However, solid-state pulse lasers of modest cost (<$10K) can provide robust performance for laser ablation in transparent preparations where target axons are "close" to the tissue surface. Construction and alignment of a system can be accomplished in a day. The optical path provided by light from the focused condenser to the ablation laser provides a convenient alignment guide. An intermediate module with all optics removed can be dedicated to the ablation laser and assures that no optical elements need be moved during a laser ablation session. A dichroic in the intermediate module allows simultaneous imaging and laser ablation. Centering the laser beam on the outgoing beam from the focused microscope condenser lens guides the initial alignment of the system. A variety of lenses are used to condition and expand the laser beam to fill the back aperture of the chosen objective lens. Final alignment and testing are performed with a front-surface mirrored glass slide target. Laser power is adjusted to give a minimum-size ablation spot (<1 μm). The ablation spot is centered with fine adjustments of the last kinematically mounted mirror to cross hairs fixed in the imaging window. Laser power for axotomy will be approximately 10X higher than needed for the minimum ablation spot on the target slide (this may vary with the target you use). Worms can be immobilized for laser axotomy and time-lapse imaging by mounting on agarose pads (or in microfluidic chambers4). Agarose pads are easily made with 10% agarose in balanced saline melted in a microwave. A drop of molten agarose is placed on a glass slide and flattened with another glass slide into a pad approximately 200 μm thick (a single layer of time tape on adjacent slides is used as a spacer). A "Sharpie" cap is used to cut out a uniform-diameter circular pad of 13 mm. Anesthetic (1 μl of 20 mM muscimol) and microspheres (Chris Fang-Yen, personal communication) (1 μl of 2.65% polystyrene, 0.1 μm, in water) are added to the center of the pad, followed by 3-5 worms oriented so they are lying on their left sides. A glass coverslip is applied and then Vaseline is used to seal the coverslip and prevent evaporation of the sample.
Neuroscience, Issue 57, laser axotomy, regeneration, growth cone, time lapse, C. elegans, neuroscience, Nd:Yag laser
Stretching Short Sequences of DNA with Constant Force Axial Optical Tweezers
Authors: Krishnan Raghunathan, Joshua N. Milstein, Jens -Christian Meiners.
Institutions: University of Michigan , University of Michigan .
Single-molecule techniques for stretching DNA of contour lengths less than a kilobase are fraught with experimental difficulties. However, many interesting biological events such as histone binding and protein-mediated looping of DNA1,2, occur on this length scale. In recent years, the mechanical properties of DNA have been shown to play a significant role in fundamental cellular processes like the packaging of DNA into compact nucleosomes and chromatin fibers3,4. Clearly, it is then important to understand the mechanical properties of short stretches of DNA. In this paper, we provide a practical guide to a single-molecule optical tweezing technique that we have developed to study the mechanical behavior of DNA with contour lengths as short as a few hundred basepairs. The major hurdle in stretching short segments of DNA is that conventional optical tweezers are generally designed to apply force in a direction lateral to the stage5,6, (see Fig. 1). In this geometry, the angle between the bead and the coverslip, to which the DNA is tethered, becomes very steep for submicron length DNA. The axial position must now be accounted for, which can be a challenge, and, since the extension drags the microsphere closer to the coverslip, steric effects are enhanced. Furthermore, as a result of the asymmetry of the microspheres, lateral extensions will generate varying levels of torque due to rotation of the microsphere within the optical trap since the direction of the reactive force changes during the extension. Alternate methods for stretching submicron DNA run up against their own unique hurdles. For instance, a dual-beam optical trap is limited to stretching DNA of around a wavelength, at which point interference effects between the two traps and from light scattering between the microspheres begin to pose a significant problem. Replacing one of the traps with a micropipette would most likely suffer from similar challenges. While one could directly use the axial potential to stretch the DNA, an active feedback scheme would be needed to apply a constant force and the bandwidth of this will be quite limited, especially at low forces. We circumvent these fundamental problems by directly pulling the DNA away from the coverslip by using a constant force axial optical tweezers7,8. This is achieved by trapping the bead in a linear region of the optical potential, where the optical force is constant-the strength of which can be tuned by adjusting the laser power. Trapping within the linear region also serves as an all optical force-clamp on the DNA that extends for nearly 350 nm in the axial direction. We simultaneously compensate for thermal and mechanical drift by finely adjusting the position of the stage so that a reference microsphere stuck to the coverslip remains at the same position and focus, allowing for a virtually limitless observation period.
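For context, force-extension data from such constant-force experiments on short DNA are usually compared against the worm-like chain model. The Marko-Siggia (1995) interpolation formula is sketched below with typical parameter values (persistence length ~50 nm for dsDNA), purely as an illustration rather than the analysis used in this paper.

```python
import numpy as np

KB_T_PN_NM = 4.11  # thermal energy at ~298 K, in pN*nm

def wlc_force_pN(extension_nm, contour_nm, persistence_nm=50.0):
    """
    Marko-Siggia worm-like chain interpolation formula:
    F = (kB*T / Lp) * [ 1/(4*(1 - x/Lc)^2) - 1/4 + x/Lc ]
    Valid for 0 <= x < Lc.
    """
    x = np.asarray(extension_nm, dtype=float) / contour_nm
    return (KB_T_PN_NM / persistence_nm) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

# A few hundred basepairs of dsDNA: ~300 bp * 0.34 nm/bp ~ 100 nm contour length.
contour = 300 * 0.34
for frac in (0.3, 0.6, 0.9):
    print(f"x/Lc = {frac:.1f}: F ~ {float(wlc_force_pN(frac * contour, contour)):.2f} pN")
```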
Bioengineering, Issue 56, Genetics, DNA stretching, DNA, Axial Optical Tweezers, Single-Molecule Biophysics, Biophysics
Regular Care and Maintenance of a Zebrafish (Danio rerio) Laboratory: An Introduction
Authors: Avdesh Avdesh, Mengqi Chen, Mathew T. Martin-Iverson, Alinda Mondal, Daniel Ong, Stephanie Rainey-Smith, Kevin Taddei, Michael Lardelli, David M. Groth, Giuseppe Verdile, Ralph N. Martins.
Institutions: Edith Cowan University, Graylands Hospital, University of Western Australia, McCusker Alzheimer's Research foundation, University of Western Australia , University of Adelaide, Curtin University of Technology, University of Western Australia .
This protocol describes regular care and maintenance of a zebrafish laboratory. Zebrafish are now gaining popularity in genetics, pharmacological and behavioural research. As a vertebrate, zebrafish share considerable genetic sequence similarity with humans and are being used as an animal model for various human disease conditions. The advantages of zebrafish in comparison to other common vertebrate models include high fecundity, low maintenance cost, transparent embryos, and rapid development. Due to the spur of interest in zebrafish research, the need to establish and maintain a productive zebrafish housing facility is also increasing. Although literature is available for the maintenance of a zebrafish laboratory, a concise video protocol is lacking. This video illustrates the protocol for regular housing, feeding, breeding and raising of zebrafish larvae. This process will help researchers to understand the natural behaviour and optimal conditions of zebrafish husbandry and hence troubleshoot experimental issues that originate from the fish husbandry conditions. This protocol will be of immense help to researchers planning to establish a zebrafish laboratory, and also to graduate students who are intending to use zebrafish as an animal model.
Basic Protocols, Issue 69, Biology, Marine Biology, Zebrafish, Danio rerio, maintenance, breeding, feeding, raising, larvae, animal model, aquarium
Collection, Isolation and Enrichment of Naturally Occurring Magnetotactic Bacteria from the Environment
Authors: Zachery Oestreicher, Steven K. Lower, Wei Lin, Brian H. Lower.
Institutions: The Ohio State University, The Ohio State University, Chinese Academy of Sciences .
Magnetotactic bacteria (MTB) are aquatic microorganisms that were first notably described in 19751 from sediment samples collected in salt marshes of Massachusetts (USA). Since then MTB have been discovered in stratified water- and sediment-columns from all over the world2. One feature common to all MTB is that they contain magnetosomes, which are intracellular, membrane-bound magnetic nanocrystals of magnetite (Fe3O4) and/or greigite (Fe3S4) or both3, 4. In the Northern hemisphere, MTB are typically attracted to the south end of a bar magnet, while in the Southern hemisphere they are usually attracted to the north end of a magnet3,5. This property can be exploited when trying to isolate MTB from environmental samples. One of the most common ways to enrich MTB is to use a clear plastic container to collect sediment and water from a natural source, such as a freshwater pond. In the Northern hemisphere, the south end of a bar magnet is placed against the outside of the container just above the sediment at the sediment-water interface. After some time, the bacteria can be removed from the inside of the container near the magnet with a pipette and then enriched further by using a capillary racetrack6 and a magnet. Once enriched, the bacteria can be placed on a microscope slide using a hanging drop method and observed in a light microscope or deposited onto a copper grid and observed using transmission electron microscopy (TEM). Using this method, isolated MTB may be studied microscopically to determine characteristics such as swimming behavior, type and number of flagella, cell morphology of the cells, shape of the magnetic crystals, number of magnetosomes, number of magnetosome chains in each cell, composition of the nanomineral crystals, and presence of intracellular vacuoles.
Microbiology, Issue 69, Cellular Biology, Earth Sciences, Environmental Sciences, Geology, Magnetotactic bacteria, MTB, bacteria enrichment, racetrack, bacteria isolation, magnetosome, magnetite, hanging drop, magnetism, magnetospirillum, transmission electron microscopy, TEM, light microscopy, pond water, sediment
Characterization of Surface Modifications by White Light Interferometry: Applications in Ion Sputtering, Laser Ablation, and Tribology Experiments
Authors: Sergey V. Baryshev, Robert A. Erck, Jerry F. Moore, Alexander V. Zinovev, C. Emil Tripa, Igor V. Veryovkin.
Institutions: Argonne National Laboratory, Argonne National Laboratory, MassThink LLC.
In materials science and engineering it is often necessary to obtain quantitative measurements of surface topography with micrometer lateral resolution. From the measured surface, 3D topographic maps can be subsequently analyzed using a variety of software packages to extract the information that is needed. In this article we describe how white light interferometry, and optical profilometry (OP) in general, combined with generic surface analysis software, can be used for materials science and engineering tasks. In this article, a number of applications of white light interferometry for investigation of surface modifications in mass spectrometry, and wear phenomena in tribology and lubrication are demonstrated. We characterize the products of the interaction of semiconductors and metals with energetic ions (sputtering), and laser irradiation (ablation), as well as ex situ measurements of wear of tribological test specimens. Specifically, we will discuss: Aspects of traditional ion sputtering-based mass spectrometry such as sputtering rates/yields measurements on Si and Cu and subsequent time-to-depth conversion. Results of quantitative characterization of the interaction of femtosecond laser irradiation with a semiconductor surface. These results are important for applications such as ablation mass spectrometry, where the quantities of evaporated material can be studied and controlled via pulse duration and energy per pulse. Thus, by determining the crater geometry one can define depth and lateral resolution versus experimental setup conditions. Measurements of surface roughness parameters in two dimensions, and quantitative measurements of the surface wear that occur as a result of friction and wear tests. Some inherent drawbacks, possible artifacts, and uncertainty assessments of the white light interferometry approach will be discussed and explained.
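The time-to-depth conversion mentioned above reduces to dividing the crater depth measured by optical profilometry by the accumulated sputtering time, assuming a constant erosion rate. A minimal sketch with invented numbers:

```python
def sputter_rate_nm_per_s(crater_depth_nm, sputter_time_s):
    """Average erosion rate from a profilometry-measured crater depth."""
    return crater_depth_nm / sputter_time_s

def time_to_depth(time_axis_s, rate_nm_per_s):
    """Convert a depth-profile time axis to depth, assuming a constant sputter rate."""
    return [t * rate_nm_per_s for t in time_axis_s]

# Example: a 480 nm deep crater produced by 1200 s of ion sputtering (invented values).
rate = sputter_rate_nm_per_s(480.0, 1200.0)             # 0.4 nm/s
print(time_to_depth([0, 300, 600, 900, 1200], rate))    # [0.0, 120.0, 240.0, 360.0, 480.0]
```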
Materials Science, Issue 72, Physics, Ion Beams (nuclear interactions), Light Reflection, Optical Properties, Semiconductor Materials, White Light Interferometry, Ion Sputtering, Laser Ablation, Femtosecond Lasers, Depth Profiling, Time-of-flight Mass Spectrometry, Tribology, Wear Analysis, Optical Profilometry, wear, friction, atomic force microscopy, AFM, scanning electron microscopy, SEM, imaging, visualization
Concurrent Quantitative Conductivity and Mechanical Properties Measurements of Organic Photovoltaic Materials using AFM
Authors: Maxim P. Nikiforov, Seth B. Darling.
Institutions: Argonne National Laboratory, University of Chicago.
Organic photovoltaic (OPV) materials are inherently inhomogeneous at the nanometer scale. Nanoscale inhomogeneity of OPV materials affects performance of photovoltaic devices. Thus, understanding of spatial variations in composition as well as electrical properties of OPV materials is of paramount importance for moving PV technology forward.1,2 In this paper, we describe a protocol for quantitative measurements of electrical and mechanical properties of OPV materials with sub-100 nm resolution. Currently, materials properties measurements performed using commercially available AFM-based techniques (PeakForce, conductive AFM) generally provide only qualitative information. The values for resistance as well as Young's modulus measured using our method on the prototypical ITO/PEDOT:PSS/P3HT:PC61BM system correspond well with literature data. The P3HT:PC61BM blend separates onto PC61BM-rich and P3HT-rich domains. Mechanical properties of PC61BM-rich and P3HT-rich domains are different, which allows for domain attribution on the surface of the film. Importantly, combining mechanical and electrical data allows for correlation of the domain structure on the surface of the film with electrical properties variation measured through the thickness of the film.
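For reference, quantitative moduli are typically extracted from force-indentation data with a contact-mechanics model. The sketch below fits the Hertz expression for a spherical indenter, F = (4/3) E_r sqrt(R) d^(3/2), to synthetic data and converts the reduced modulus to a sample modulus under a rigid-tip approximation. It is a generic illustration with assumed tip radius and Poisson ratio, not the calibration-dependent procedure described in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

TIP_RADIUS_NM = 10.0       # assumed AFM tip radius
POISSON_RATIO = 0.35       # assumed Poisson ratio of the polymer blend

def hertz_force_nN(indentation_nm, reduced_modulus_GPa):
    """Hertz model for a spherical indenter: F = (4/3) * E_r * sqrt(R) * d^(3/2)."""
    # With nm and GPa, the result comes out directly in nN (1 GPa = 1 nN/nm^2).
    return (4.0 / 3.0) * reduced_modulus_GPa * np.sqrt(TIP_RADIUS_NM) * indentation_nm ** 1.5

# Synthetic force-indentation curve for a ~2 GPa material plus noise.
rng = np.random.default_rng(4)
indentation = np.linspace(0.0, 5.0, 50)
force = hertz_force_nN(indentation, 2.2) + rng.normal(0.0, 0.5, indentation.size)

(e_reduced,), _ = curve_fit(hertz_force_nN, indentation, force, p0=(1.0,))
e_sample = e_reduced * (1.0 - POISSON_RATIO ** 2)   # rigid-tip approximation
print(f"Reduced modulus ~ {e_reduced:.2f} GPa, sample modulus ~ {e_sample:.2f} GPa")
```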
Materials Science, Issue 71, Nanotechnology, Mechanical Engineering, Electrical Engineering, Computer Science, Physics, electrical transport properties in solids, condensed matter physics, thin films (theory, deposition and growth), conductivity (solid state), AFM, atomic force microscopy, electrical properties, mechanical properties, organic photovoltaics, microengineering, photovoltaics
In vivo Neuronal Calcium Imaging in C. elegans
Authors: Samuel H. Chung, Lin Sun, Christopher V. Gabel.
Institutions: Boston University School of Medicine, Boston University Photonics Center.
The nematode worm C. elegans is an ideal model organism for relatively simple, low cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allows identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically-encoded calcium sensitive fluorophores such as cameleon 1 and GCaMP 2 allow in vivo imaging of neuronal calcium relating both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss advantages and disadvantages of both as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) Using GCaMP to measure the sensory response of a specific neuron to an external electrical field and 2) Using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, multiple neurons simultaneously and comparison across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.
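Both indicators are typically reported as a normalized change relative to a pre-stimulus baseline: ΔF/F0 for the single-fluorophore GCaMP, and the change in YFP/CFP emission ratio for the FRET-based cameleon. A minimal sketch of both calculations on synthetic traces follows; the baseline window and numbers are illustrative.

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=20):
    """GCaMP response: (F - F0) / F0, with F0 taken as the pre-stimulus mean."""
    f0 = np.mean(trace[:baseline_frames])
    return (np.asarray(trace) - f0) / f0

def cameleon_ratio_change(yfp, cfp, baseline_frames=20):
    """Cameleon response: change in the YFP/CFP ratio relative to the baseline ratio."""
    ratio = np.asarray(yfp) / np.asarray(cfp)
    r0 = np.mean(ratio[:baseline_frames])
    return (ratio - r0) / r0

# Synthetic traces: 20 baseline frames followed by a calcium transient.
gcamp = np.concatenate([np.full(20, 100.0), 100.0 + 80.0 * np.exp(-np.arange(40) / 10.0)])
yfp = np.concatenate([np.full(20, 200.0), 200.0 + 60.0 * np.exp(-np.arange(40) / 10.0)])
cfp = np.concatenate([np.full(20, 150.0), 150.0 - 30.0 * np.exp(-np.arange(40) / 10.0)])

print(f"Peak GCaMP dF/F0: {delta_f_over_f(gcamp).max():.2f}")
print(f"Peak cameleon dR/R0: {cameleon_ratio_change(yfp, cfp).max():.2f}")
```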
Developmental Biology, Issue 74, Physiology, Biophysics, Neurobiology, Cellular Biology, Molecular Biology, Anatomy, Developmental Biology, Biomedical Engineering, Medicine, Caenorhabditis elegans, C. elegans, Microscopy, Fluorescence, Neurosciences, calcium imaging, genetically encoded calcium indicators, cameleon, GCaMP, neuronal activity, time-lapse imaging, laser ablation, optical neurophysiology, neurophysiology, neurons, animal model
Determination of Microbial Extracellular Enzyme Activity in Waters, Soils, and Sediments using High Throughput Microplate Assays
Authors: Colin R. Jackson, Heather L. Tyler, Justin J. Millar.
Institutions: The University of Mississippi.
Much of the nutrient cycling and carbon processing in natural environments occurs through the activity of extracellular enzymes released by microorganisms. Thus, measurement of the activity of these extracellular enzymes can give insights into the rates of ecosystem level processes, such as organic matter decomposition or nitrogen and phosphorus mineralization. Assays of extracellular enzyme activity in environmental samples typically involve exposing the samples to artificial colorimetric or fluorometric substrates and tracking the rate of substrate hydrolysis. Here we describe microplate based methods for these procedures that allow the analysis of large numbers of samples within a short time frame. Samples are allowed to react with artificial substrates within 96-well microplates or deep well microplate blocks, and enzyme activity is subsequently determined by absorption or fluorescence of the resulting end product using a typical microplate reader or fluorometer. Such high throughput procedures not only facilitate comparisons between spatially separate sites or ecosystems, but also substantially reduce the cost of such assays by reducing overall reagent volumes needed per sample.
Environmental Sciences, Issue 80, Environmental Monitoring, Ecological and Environmental Processes, Environmental Microbiology, Ecology, extracellular enzymes, freshwater microbiology, soil microbiology, microbial activity, enzyme activity
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires a development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed, simultaneously minimizing the cost of conservation practices. A recent and expanding set of research is attempting to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with a multiobjective evolutionary algorithm SPEA226, and user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for a selection of watershed configurations achieving specified water quality improvement goals and a production of maps of optimized placement of conservation practices.
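The heart of the multiobjective search is keeping the set of nondominated solutions, i.e., allocations of conservation practices for which no other allocation is both cheaper and less polluting. The sketch below shows that bookkeeping on randomly generated candidate allocations with stand-in cost and pollution functions; in the actual program these evaluations come from the practice cost data and the SWAT model, and selection is handled by SPEA2 rather than this simple filter.

```python
import numpy as np

rng = np.random.default_rng(5)
N_FIELDS = 30                       # decision variables: practice on/off per field (toy example)

def cost(allocation):
    """Stand-in for the conservation-practice cost of an allocation."""
    return allocation.sum() * 10.0  # e.g., $10 per field with a practice installed

def pollution(allocation):
    """Stand-in for the SWAT-simulated nonpoint-source load of an allocation."""
    return 100.0 / (1.0 + allocation.sum()) + rng.normal(0.0, 0.5)

def nondominated(points):
    """Indices of (cost, pollution) points not dominated by any other point (both minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any((q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1]))
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

population = [rng.integers(0, 2, size=N_FIELDS) for _ in range(200)]
objectives = [(cost(a), pollution(a)) for a in population]
front = nondominated(objectives)
for i in sorted(front, key=lambda k: objectives[k][0]):
    print(f"cost = {objectives[i][0]:6.1f}, pollution = {objectives[i][1]:6.2f}")
```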
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run. Scope has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
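Over-representation, one of the two signals SCOPE scores, can be illustrated with a simple binomial test: count the motif's occurrences in the promoter set, estimate its expected per-position probability from a uniform background, and ask how surprising the observed count is. This sketch is a toy stand-in for SCOPE's scoring, not the BEAM, PRISM, or SPACER algorithms themselves, and the promoter sequences are invented.

```python
from scipy.stats import binom

def motif_overrepresentation_p(promoters, motif):
    """
    One-sided binomial p-value that `motif` occurs more often in `promoters`
    than expected by chance, assuming independent, equiprobable nucleotides.
    """
    k = len(motif)
    positions = sum(len(seq) - k + 1 for seq in promoters)   # number of trials
    observed = sum(seq.count(motif) for seq in promoters)    # successes (non-overlapping counts)
    p_single = 0.25 ** k                                     # expected per-position probability
    return binom.sf(observed - 1, positions, p_single)       # P(X >= observed)

# Toy promoter set with the hypothetical motif ACCGGT planted in most sequences.
promoters = [
    "TTACCGGTAGCTAGCTAGGACCGGTTT",
    "GGGACCGGTCCATGCATCGATCG",
    "ATATATACCGGTGCGCGCGC",
    "CGCGATCGATCGTAGCTAGC",
]
print(f"p = {motif_overrepresentation_p(promoters, 'ACCGGT'):.2e}")
```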
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Quantitatively Measuring In situ Flows using a Self-Contained Underwater Velocimetry Apparatus (SCUVA)
Authors: Kakani Katija, Sean P. Colin, John H. Costello, John O. Dabiri.
Institutions: Woods Hole Oceanographic Institution, Roger Williams University, Whitman Center, Providence College, California Institute of Technology.
The ability to directly measure velocity fields in a fluid environment is necessary to provide empirical data for studies in fields as diverse as oceanography, ecology, biology, and fluid mechanics. Field measurements introduce practical challenges such as environmental conditions, animal availability, and the need for field-compatible measurement techniques. To avoid these challenges, scientists typically use controlled laboratory environments to study animal-fluid interactions. However, it is reasonable to question whether one can extrapolate natural behavior (i.e., that which occurs in the field) from laboratory measurements. Therefore, in situ quantitative flow measurements are needed to accurately describe animal swimming in their natural environment. We designed a self-contained, portable device that operates independent of any connection to the surface, and can provide quantitative measurements of the flow field surrounding an animal. This apparatus, a self-contained underwater velocimetry apparatus (SCUVA), can be operated by a single scuba diver in depths up to 40 m. Due to the added complexity inherent of field conditions, additional considerations and preparation are required when compared to laboratory measurements. These considerations include, but are not limited to, operator motion, predicting position of swimming targets, available natural suspended particulate, and orientation of SCUVA relative to the flow of interest. The following protocol is intended to address these common field challenges and to maximize measurement success.
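DPIV itself reduces to cross-correlating small interrogation windows between successive particle images and taking the correlation peak as the local displacement; SCUVA supplies the field-worthy laser sheet, camera, and suspended particles, while the velocity extraction looks roughly like the sketch below (synthetic images, a single window, no sub-pixel refinement).

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(6)

# Synthetic particle image and a copy shifted by a known displacement (dy, dx).
frame_a = (rng.random((64, 64)) < 0.02).astype(float)   # sparse "particles"
true_dy, true_dx = 3, -2
frame_b = np.roll(frame_a, shift=(true_dy, true_dx), axis=(0, 1))

# Cross-correlate one interrogation window (here the whole frame, zero-mean).
a = frame_a - frame_a.mean()
b = frame_b - frame_b.mean()
corr = correlate(b, a, mode="full", method="fft")

# The correlation peak offset from the centre gives the displacement in pixels.
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy = peak[0] - (frame_a.shape[0] - 1)
dx = peak[1] - (frame_a.shape[1] - 1)
print(f"Estimated displacement: ({dy}, {dx}) px, true ({true_dy}, {true_dx}) px")
```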
Bioengineering, Issue 56, In situ DPIV, SCUVA, animal flow measurements, zooplankton, propulsion
What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.