Inhalation is the most likely exposure route for individuals working with aerosolizable engineered nanomaterials (ENM). To properly perform nanoparticle inhalation toxicology studies, the aerosols in a chamber housing the experimental animals must have: 1) a steady concentration maintained at a desired level for the entire exposure period; 2) a homogeneous composition free of contaminants; and 3) a stable size distribution with a geometric mean diameter < 200 nm and a geometric standard deviation (σg) < 2.5 [5]. The generation of aerosols containing nanoparticles is quite challenging because nanoparticles agglomerate easily, owing to very strong inter-particle forces and the formation of large fractal structures tens to hundreds of microns in size [6], which are difficult to break up. Several common aerosol generators, including nebulizers, fluidized beds, Venturi aspirators and the Wright dust feed, were tested; however, none was able to produce nanoparticle aerosols satisfying all three criteria [5].
A whole-body nanoparticle aerosol inhalation exposure system was fabricated, validated and used for nano-TiO2 inhalation toxicology studies. Its critical components are: 1) a novel nano-TiO2 aerosol generator; 2) a 0.5 m3 whole-body inhalation exposure chamber; and 3) a monitoring and control system. Nano-TiO2 aerosols generated from bulk dry nano-TiO2 powders (primary diameter of 21 nm, bulk density of 3.8 g/cm3) were delivered into the exposure chamber at a flow rate of 90 LPM (10.8 air changes/hr). Particle size distribution and mass concentration profiles were measured continuously with a scanning mobility particle sizer (SMPS) and an electrical low pressure impactor (ELPI). The aerosol mass concentration (C, mg/m3) was verified gravimetrically. The mass (M) of the collected particles was determined as M = Mpost - Mpre, where Mpre and Mpost are the masses of the filter before and after sampling (mg). The mass concentration was calculated as C = M/(Q*t), where Q is the sampling flow rate (m3/min) and t is the sampling time (min). The chamber pressure, temperature, relative humidity (RH), and O2 and CO2 concentrations were monitored and controlled continuously. Nano-TiO2 aerosols collected on Nuclepore filters were analyzed by scanning electron microscopy (SEM) and energy dispersive X-ray (EDX) analysis.
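The gravimetric check described above reduces to two arithmetic steps. As a minimal sketch (the function name, variable names, and example numbers are ours, not values from the study):

```python
def mass_concentration(m_pre_mg, m_post_mg, q_m3_per_min, t_min):
    """Gravimetric aerosol mass concentration, C = M / (Q * t), in mg/m^3.

    M = M_post - M_pre is the particle mass collected on the filter (mg),
    Q the sampling flow rate (m^3/min), t the sampling time (min)."""
    m_mg = m_post_mg - m_pre_mg
    return m_mg / (q_m3_per_min * t_min)

# Illustrative numbers only: a filter gaining 0.45 mg over 60 min of
# sampling at 0.002 m^3/min gives C = 0.45 / 0.12 = 3.75 mg/m^3.
print(mass_concentration(10.00, 10.45, 0.002, 60))
```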
In summary, we report that the nanoparticle aerosols generated and delivered to our exposure chamber have: 1) a steady mass concentration; 2) a homogeneous composition free of contaminants; and 3) a stable particle size distribution with a count-median aerodynamic diameter of 157 nm throughout aerosol generation. This system reliably and repeatably creates test atmospheres that simulate occupational, environmental or domestic ENM aerosol exposures.
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary with feedstock source and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvement, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available its Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for the physical and chemical characteristics of biochar. Six biochars, made from three different feedstocks at two temperatures, were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury, as well as nutrients (phosphorus, and nitrite, nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined to be adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there was little difference among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvement or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial to understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically 50 mM sodium acetate or 50 mM Tris) is chosen for its acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay minimizes limitations on enzyme and substrate diffusion. The assay therefore controls for differences in substrate limitation, diffusion rates, and soil pH, detecting potential enzyme activity rates as a function of the difference in enzyme concentrations among samples.
Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e.
colorimetric) assays, but can suffer from interference caused by impurities and from the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions in which substrates are not limiting. Caution should be used when interpreting data from cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
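The conversion from measured fluorescence to a potential activity rate typically follows the general form below. The function and variable names, units, and the standard-curve correction are illustrative assumptions, not specifics taken from this protocol:

```python
def potential_activity(net_fluorescence, emission_coef, buffer_vol_ml,
                       assay_vol_ml, incubation_h, soil_dry_g):
    """Potential enzyme activity in nmol (g dry soil)^-1 h^-1.

    net_fluorescence: sample fluorescence after blank corrections.
    emission_coef: fluorescence units per nmol of free dye (e.g. MUB
    or MUC), from a standard curve run in the same soil slurry so that
    soil quenching of the fluorophore is accounted for."""
    return (net_fluorescence * buffer_vol_ml) / (
        emission_coef * assay_vol_ml * incubation_h * soil_dry_g)
```

Scaling by the full slurry volume and down to the dry mass of soil in the assay well is what makes rates comparable across samples with different moisture contents.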
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro
replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: the first is preparation of the samples for testing; the second is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix; and the third quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high-throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation, with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes, and include a discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Activating Molecules, Ions, and Solid Particles with Acoustic Cavitation
Institutions: UMR 5257 CEA-CNRS-UM2-ENSCM.
The chemical and physical effects of ultrasound arise not from a direct interaction of molecules with sound waves, but rather from acoustic cavitation: the nucleation, growth, and implosive collapse of microbubbles in liquids subjected to power ultrasound. The violent implosion of bubbles leads to the formation of chemically reactive species and to the emission of light, known as sonoluminescence. In this manuscript, we describe the techniques that allow study of the extreme intrabubble conditions and of the chemical reactivity of acoustic cavitation in solutions. The analysis of sonoluminescence spectra of water sparged with noble gases provides evidence for nonequilibrium plasma formation. The photons and the "hot" particles generated by cavitation bubbles make it possible to excite nonvolatile species in solution, increasing their chemical reactivity. For example, the mechanism of ultrabright sonoluminescence of uranyl ions in acidic solutions varies with uranium concentration: sonophotoluminescence dominates in dilute solutions, and collisional excitation contributes at higher uranium concentrations. Secondary sonochemical products may arise from chemically active species that are formed inside the bubble but then diffuse into the liquid phase and react with solution precursors to form a variety of products. For instance, the sonochemical reduction of Pt(IV) in pure water provides an innovative synthetic route to monodispersed nanoparticles of metallic platinum without any templates or capping agents. Many studies reveal the advantages of ultrasound for activating divided solids. In general, the mechanical effects of ultrasound contribute strongly in heterogeneous systems, in addition to the chemical effects. In particular, the sonolysis of PuO2 powder in pure water yields stable colloids of plutonium due to both effects.
Chemistry, Issue 86, Sonochemistry, sonoluminescence, ultrasound, cavitation, nanoparticles, actinides, colloids, nanocolloids
Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity
Institutions: Case Western Reserve University.
Coexistence theory has often treated environmental heterogeneity as independent of community composition; however, biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly requires understanding the role of heterogeneity in PSF, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of conspecific and heterospecific plant species in the field. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in outcomes different from those predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community, together with additional empirical work, is needed to determine when heterogeneity intrinsic to the assembling community will produce assembly outcomes different from those under heterogeneity extrinsic to the community composition.
Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ
measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration.
Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads and can be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release.
The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
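The release rate from an incubated core is typically computed from the change in water-column P over the incubation, normalized to the core's sediment surface area. A sketch with hypothetical variable names (the article does not give its exact calculation):

```python
def p_release_rate(c_start_mg_L, c_end_mg_L, water_vol_L,
                   core_area_m2, days):
    """Areal P release rate (mg P m^-2 d^-1) from a core incubation.

    Positive values indicate net release from sediment to the overlying
    water; negative values indicate net uptake by the sediment."""
    delta_mass_mg = (c_end_mg_L - c_start_mg_L) * water_vol_L
    return delta_mass_mg / (core_area_m2 * days)
```

Scaling such areal rates to a whole-lake internal load then requires the extrapolation assumptions discussed above (sediment area, redox status, and duration of release).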
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded, stained samples, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
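The idea of such a triage can be sketched as a small decision function. This is purely illustrative: the inputs, thresholds, and ordering below are hypothetical and are not the article's published scheme, only an example of mapping coarse data-set traits to one of the four categorical approaches:

```python
def suggest_segmentation_approach(snr_high, characteristic_shapes,
                                  small_volume_fraction):
    """Hypothetical triage sketch: route a data set, described by three
    coarse boolean traits, to one of the four categorical approaches."""
    if not snr_high and not characteristic_shapes:
        # Noisy data without recognizable shapes resists (semi-)automation.
        return "(1) fully manual model building"
    if not snr_high:
        return "(2) manual tracing + surface rendering"
    if characteristic_shapes and not small_volume_fraction:
        return "(3) semi-automated + surface rendering"
    # Clean data, possibly sparse features: worth a custom algorithm.
    return "(4) automated custom-designed algorithm"
```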
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Expression of Recombinant Cellulase Cel5A from Trichoderma reesei in Tobacco Plants
Institutions: RWTH Aachen University, Fraunhofer Institute for Molecular Biology and Applied Ecology.
Cellulose-degrading enzymes, cellulases, are targets of both research and industrial interest. The preponderance of these enzymes in difficult-to-culture organisms, such as hyphae-building fungi and anaerobic bacteria, has hastened the use of recombinant technologies in this field. Plant expression systems are desirable for large-scale production of enzymes and other industrially useful proteins. Herein, methods for the transient expression of a fungal endoglucanase, Trichoderma reesei
Cel5A, in Nicotiana tabacum
are demonstrated. Successful protein expression is shown, monitored by fluorescence using an mCherry-enzyme fusion protein. Additionally, a set of basic tests are used to examine the activity of transiently expressed T. reesei
Cel5A, including SDS-PAGE, Western blotting, zymography, as well as fluorescence and dye-based substrate degradation assays. The system described here can be used to produce an active cellulase in a short time period, so as to assess the potential for further production in plants through constitutive or inducible expression systems.
Environmental Sciences, Issue 88, heterologous expression, endoplasmic reticulum, endoglucanase, cellulose, glycosyl-hydrolase, fluorescence, cellulase, Trichoderma reesei, tobacco plants
Coherent anti-Stokes Raman Scattering (CARS) Microscopy Visualizes Pharmaceutical Tablets During Dissolution
Institutions: University of Twente, Heinrich-Heine University, University of Helsinki.
Traditional pharmaceutical dissolution tests determine the amount of drug dissolved over time by measuring drug content in the dissolution medium. This method provides little direct information about what is happening on the surface of the dissolving tablet. As the tablet surface composition and structure can change during dissolution, it is essential to monitor it during dissolution testing. In this work coherent anti-Stokes Raman scattering microscopy is used to image the surface of tablets during dissolution while UV absorption spectroscopy is simultaneously providing inline analysis of dissolved drug concentration for tablets containing a 50% mixture of theophylline anhydrate and ethyl cellulose. The measurements showed that in situ
CARS microscopy is capable of selectively imaging theophylline in the presence of ethyl cellulose. Additionally, the theophylline anhydrate converted to theophylline monohydrate during dissolution, with needle-shaped crystals growing on the tablet surface. The conversion of theophylline anhydrate to monohydrate, combined with reduced exposure of the drug to the flowing dissolution medium, resulted in decreased dissolution rates. Our results show that in situ
CARS microscopy combined with inline UV absorption spectroscopy is capable of monitoring pharmaceutical tablet dissolution and correlating surface changes with changes in dissolution rate.
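The inline UV absorption measurement yields dissolved drug concentration via the Beer-Lambert law. A minimal sketch (the function name and the example parameters are placeholders, not values from this study):

```python
def dissolved_concentration(absorbance, molar_absorptivity, path_cm):
    """Beer-Lambert law: c = A / (epsilon * l).

    absorbance: dimensionless A at the drug's analytical wavelength;
    molar_absorptivity: epsilon in L mol^-1 cm^-1;
    path_cm: flow-cell path length in cm. Returns concentration in mol/L."""
    return absorbance / (molar_absorptivity * path_cm)
```

Logging this concentration over time alongside the CARS image stream is what allows surface changes (e.g. monohydrate needle growth) to be correlated with changes in dissolution rate.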
Physics, Issue 89, Coherent anti-Stokes Raman scattering, microscopy, pharmaceutics, dissolution, in situ analysis, theophylline, tablet
High Efficiency Differentiation of Human Pluripotent Stem Cells to Cardiomyocytes and Characterization by Flow Cytometry
Institutions: Medical College of Wisconsin, Stanford University School of Medicine, Medical College of Wisconsin, Hong Kong University, Johns Hopkins University School of Medicine, Medical College of Wisconsin.
There is an urgent need to develop approaches for repairing the damaged heart, discovering new therapeutic drugs that do not have toxic effects on the heart, and improving strategies to accurately model heart disease. The potential of exploiting human induced pluripotent stem cell (hiPSC) technology to generate cardiac muscle “in a dish” for these applications continues to generate high enthusiasm. In recent years, the ability to efficiently generate cardiomyogenic cells from human pluripotent stem cells (hPSCs) has greatly improved, offering us new opportunities to model very early stages of human cardiac development not otherwise accessible. In contrast to many previous methods, the cardiomyocyte differentiation protocol described here does not require cell aggregation or the addition of Activin A or BMP4 and robustly generates cultures of cells that are highly positive for cardiac troponin I and T (TNNI3, TNNT2), iroquois-class homeodomain protein IRX-4 (IRX4), myosin regulatory light chain 2, ventricular/cardiac muscle isoform (MLC2v) and myosin regulatory light chain 2, atrial isoform (MLC2a) by day 10 across all human embryonic stem cell (hESC) and hiPSC lines tested to date. Cells can be passaged and maintained for more than 90 days in culture. The strategy is technically simple to implement and cost-effective. Characterization of cardiomyocytes derived from pluripotent cells often includes the analysis of reference markers, both at the mRNA and protein level. For protein analysis, flow cytometry is a powerful analytical tool for assessing quality of cells in culture and determining subpopulation homogeneity. However, technical variation in sample preparation can significantly affect quality of flow cytometry data. Thus, standardization of staining protocols should facilitate comparisons among various differentiation strategies. Accordingly, optimized staining protocols for the analysis of IRX4, MLC2v, MLC2a, TNNI3, and TNNT2 by flow cytometry are described.
Cellular Biology, Issue 91, human induced pluripotent stem cell, flow cytometry, directed differentiation, cardiomyocyte, IRX4, TNNI3, TNNT2, MLC2v, MLC2a
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients) and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising both well-established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed with these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The guide gives easy-to-follow instructions for conducting comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample, with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With this approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging systems. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe the use of PAFP and PSFP expression to image two protein species in fixed cells.
Extension of the technique to living cells is also described.
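The ~10-30 nm localization precision quoted above is commonly estimated with the Thompson-Larson-Webb expression. This sketch assumes that standard formula; it is not an equation stated in this article, and the example numbers are illustrative:

```python
import math

def localization_precision_nm(psf_sd_nm, n_photons, pixel_nm, bg_sd_photons):
    """Approximate 2D localization precision (Thompson et al., 2002):

        sigma^2 = s^2/N + a^2/(12 N) + 8*pi*s^4*b^2 / (a^2 * N^2)

    where s is the PSF standard deviation, N the detected photons,
    a the effective pixel size, and b the background noise per pixel."""
    s2 = psf_sd_nm ** 2
    var = (s2 / n_photons
           + pixel_nm ** 2 / (12 * n_photons)
           + 8 * math.pi * s2 ** 2 * bg_sd_photons ** 2
           / (pixel_nm ** 2 * n_photons ** 2))
    return math.sqrt(var)
```

For instance, with s = 125 nm, N = 500 photons, a = 100 nm and b = 1, the formula gives roughly 6 nm; dimmer emitters and higher background push the precision toward the 10-30 nm range cited above.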
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Models and Methods to Evaluate Transport of Drug Delivery Systems Across Cellular Barriers
Institutions: University of Maryland, University of Maryland.
Sub-micrometer carriers (nanocarriers; NCs) enhance efficacy of drugs by improving solubility, stability, circulation time, targeting, and release. Additionally, traversing cellular barriers in the body is crucial both for oral delivery of therapeutic NCs into the circulation and for transport from the blood into tissues, where intervention is needed. NC transport across cellular barriers is achieved by: (i) the paracellular route, via transient disruption of the junctions that interlock adjacent cells, or (ii) the transcellular route, where materials are internalized by endocytosis, transported across the cell body, and secreted at the opposite cell surface (transcytosis). Delivery across cellular barriers can be facilitated by coupling therapeutics or their carriers with targeting agents that bind specifically to cell-surface markers involved in transport. Here, we provide methods to measure the extent and mechanism of NC transport across a model cell barrier, which consists of a monolayer of gastrointestinal (GI) epithelial cells grown on a porous membrane located in a transwell insert. Formation of a permeability barrier is confirmed by measuring transepithelial electrical resistance (TEER), transepithelial transport of a control substance, and immunostaining of tight junctions. As an example, ~200 nm polymer NCs are used, which carry a therapeutic cargo and are coated with an antibody that targets a cell-surface determinant. The antibody or therapeutic cargo is labeled with 125
I for radioisotope tracing, and labeled NCs are added to the upper chamber over the cell monolayer for varying periods of time. NCs associated with the cells and/or transported to the underlying chamber can be detected. Measurement of free 125
I allows subtraction of the degraded fraction. The paracellular route is assessed by determining potential changes caused by NC transport to the barrier parameters described above. Transcellular transport is determined by addressing the effect of modulating endocytosis and transcytosis pathways.
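The transported fraction can then be computed from the radioisotope counts. A sketch (the variable names are ours, and how free 125I is separated from intact label depends on the experimental setup):

```python
def transported_fraction(counts_lower, counts_free_iodine, counts_dose):
    """Fraction of the added 125I-NC dose crossing the monolayer,
    after subtracting free 125I (degraded label) detected in the
    lower chamber from the total lower-chamber counts."""
    return (counts_lower - counts_free_iodine) / counts_dose
```

Comparing this fraction with and without endocytosis/transcytosis inhibitors, while monitoring TEER for paracellular leakage, is what distinguishes the transcellular from the paracellular route.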
Bioengineering, Issue 80, Antigens, Enzymes, Biological Therapy, bioengineering (general), Pharmaceutical Preparations, Macromolecular Substances, Therapeutics, Digestive System and Oral Physiological Phenomena, Biological Phenomena, Cell Physiological Phenomena, drug delivery systems, targeted nanocarriers, transcellular transport, epithelial cells, tight junctions, transepithelial electrical resistance, endocytosis, transcytosis, radioisotope tracing, immunostaining
Laser Microdissection Applied to Gene Expression Profiling of Subset of Cells from the Drosophila Wing Disc
Institutions: University of Naples.
The heterogeneous nature of tissues has proven to be a limiting factor in the amount of information that can be generated from biological samples, compromising downstream analyses. Considering the complex and dynamic cellular associations existing within many tissues, in order to recapitulate the in vivo
interactions through molecular analysis, one must be able to analyze specific cell populations within their native context. Laser-mediated microdissection can achieve this goal, allowing unambiguous identification and successful harvest of cells of interest under direct microscopic visualization while maintaining molecular integrity. We have applied this technology to analyse gene expression within defined areas of the developing Drosophila
wing disc, which represents an advantageous model system to study growth control, cell differentiation and organogenesis. Larval imaginal discs are precociously subdivided into anterior and posterior, dorsal and ventral compartments by lineage restriction boundaries. Making use of the inducible GAL4-UAS binary expression system, each of these compartments can be specifically labelled in transgenic flies expressing a UAS-GFP transgene under the control of the appropriate GAL4-driver construct. In the transgenic discs, gene expression profiling of discrete subsets of cells can be precisely determined after laser-mediated microdissection, using the fluorescent GFP signal to guide the laser cut.
Among the variety of downstream applications, we focused on RNA transcript profiling after localised RNA interference (RNAi). With the advent of RNAi technology, GFP labelling can be coupled with localised knockdown of a given gene, making it possible to determine the transcriptional response of a discrete cell population to the specific gene silencing. To validate this approach, we dissected equivalent areas of the disc from the posterior (labelled by GFP expression) and the anterior (unlabelled) compartment upon regional silencing in the P compartment of an otherwise ubiquitously expressed gene. RNA was extracted from microdissected silenced and unsilenced areas, and comparative gene expression profiling was determined by quantitative real-time RT-PCR. We show that this method can effectively be applied for accurate transcriptomics of subsets of cells within the Drosophila imaginal discs. Indeed, while massive disc preparation as a source of RNA generally assumes cell homogeneity, it is well known that transcriptional expression can vary greatly within these structures as a consequence of positional information. Using the localized fluorescent GFP signal to guide the laser cut, more accurate transcriptional analyses can be performed and profitably applied to disparate applications, including transcript profiling of distinct cell lineages within their native context.
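Comparative expression between the silenced (GFP-labelled) and unsilenced areas by quantitative real-time RT-PCR is conventionally summarized with the 2^-ΔΔCt relative-quantification calculation. A hedged sketch (the Ct values in the example are invented for illustration; the abstract does not specify the quantification method used):

```python
def fold_change_ddct(ct_target_silenced, ct_ref_silenced,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target
    gene's Ct to a reference gene within each sample, then compare the
    silenced area against the control (unsilenced) area."""
    d_silenced = ct_target_silenced - ct_ref_silenced
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_silenced - d_control)
```

A target Ct of 25 vs. reference 20 in the silenced area, against 22 vs. 20 in the control area, yields a fold change of 0.125, i.e. an 8-fold knockdown.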
Developmental Biology, Issue 38, Drosophila, Imaginal discs, Laser microdissection, Gene expression, Transcription profiling, Regulatory pathways , in vivo RNAi, GAL4-UAS, GFP labelling, Positional information
Differential Imaging of Biological Structures with Doubly-resonant Coherent Anti-stokes Raman Scattering (CARS)
Institutions: University of California, Davis, University of California, Davis.
Coherent Raman imaging techniques have seen a dramatic increase in activity over the past decade due to their promise to enable label-free optical imaging with high molecular specificity1. The sensitivity of these techniques, however, is many orders of magnitude weaker than fluorescence, requiring millimolar molecular concentrations1,2. Here, we describe a technique that can enable the detection of weak or low concentrations of Raman-active molecules by amplifying their signal with that obtained from strong or abundant Raman scatterers. The interaction of short pulsed lasers in a biological sample generates a variety of coherent Raman scattering signals, each of which carries unique chemical information about the sample. Typically, only one of these signals, e.g. coherent anti-Stokes Raman scattering (CARS), is used to generate an image while the others are discarded. However, when these other signals, including 3-color CARS and four-wave mixing (FWM), are collected and compared to the CARS signal, otherwise difficult-to-detect information can be extracted3. For example, doubly-resonant CARS (DR-CARS) is the result of the constructive interference between two resonant signals4. We demonstrate how tuning of the three lasers required to produce DR-CARS signals to the 2845 cm-1 CH stretch vibration in lipids and the 2120 cm-1 CD stretching vibration of a deuterated molecule (e.g. deuterated sugars, fatty acids, etc.) can be utilized to probe both Raman resonances simultaneously. Under these conditions, in addition to CARS signals from each resonance, a combined DR-CARS signal probing both is also generated. We demonstrate how detecting the difference between the DR-CARS signal and the amplifying signal from an abundant molecule's vibration can be used to enhance the sensitivity for the weaker signal. We further demonstrate that this approach even extends to applications where both signals are generated from different molecules, such that, e.g., the strong Raman signal of a solvent can enhance the weak Raman signal of a dilute solute.
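The amplification mechanism can be illustrated with a deliberately simplified scalar model. This is only a sketch of the constructive-interference idea: it treats the two resonant susceptibilities as real scalars and ignores phase and the nonresonant background, none of which the abstract specifies:

```python
def dr_cars_gain(chi_strong, chi_weak):
    """Toy scalar model of DR-CARS amplification. CARS intensity scales
    as |chi|^2; the doubly resonant field adds the two resonant responses
    coherently, so subtracting the strong-resonance-only intensity leaves
    a cross term (2*chi_strong*chi_weak) that dwarfs the weak resonance's
    own |chi_weak|^2 contribution."""
    i_dr = (chi_strong + chi_weak) ** 2   # both resonances driven together
    i_strong = chi_strong ** 2            # abundant species alone
    return i_dr - i_strong                # amplified weak-signal readout
```

With chi_strong = 10 and chi_weak = 0.5 (arbitrary units), the difference signal is 10.25, versus 0.25 for the weak resonance imaged on its own — a 41-fold enhancement in this toy example.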
Cellular Biology, Issue 44, Raman scattering, Four-wave mixing, Coherent anti-Stokes Raman scattering, Microscopy, Coherent Raman Scattering
Aseptic Laboratory Techniques: Plating Methods
Institutions: University of California, Los Angeles .
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
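The pour-plate and spread-plate enumeration objective above rests on one standard calculation. A minimal sketch (the example counts and dilutions are hypothetical):

```python
def cfu_per_ml(colonies, dilution, volume_plated_ml):
    """Viable count from a countable pour or spread plate:
    CFU/mL of the original culture = colonies / (dilution x volume plated).
    'dilution' is the total dilution factor of the plated sample,
    e.g. 1e-6 for a six-fold serial 1:10 dilution."""
    return colonies / (dilution * volume_plated_ml)
```

For example, 150 colonies on a plate spread with 0.1 mL of a 10^-6 dilution corresponds to 1.5 × 10^9 CFU/mL in the original culture.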
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Video-rate Scanning Confocal Microscopy and Microendoscopy
Institutions: Harvard University , Harvard-MIT, Harvard Medical School.
Confocal microscopy has become an invaluable tool in biology and the biomedical sciences, enabling rapid, high-sensitivity, and high-resolution optical sectioning of complex systems. Confocal microscopy is routinely used, for example, to study specific cellular targets1, monitor dynamics in living cells2-4, and visualize the three-dimensional evolution of entire organisms5,6. Extensions of confocal imaging systems, such as confocal microendoscopes, allow for high-resolution imaging in vivo7 and are currently being applied to disease imaging and diagnosis in clinical settings8,9.
Confocal microscopy provides three-dimensional resolution by creating so-called "optical sections" using straightforward geometrical optics. In a standard wide-field microscope, fluorescence generated from a sample is collected by an objective lens and relayed directly to a detector. While acceptable for imaging thin samples, thick samples become blurred by fluorescence generated above and below the objective focal plane. In contrast, confocal microscopy enables virtual, optical sectioning of samples, rejecting out-of-focus light to build high resolution three-dimensional representations of samples.
Confocal microscopes achieve this feat by using a confocal aperture in the detection beam path. The fluorescence collected from a sample by the objective is relayed back through the scanning mirrors and through the primary dichroic mirror, a mirror carefully selected to reflect shorter wavelengths such as the laser excitation beam while passing the longer, Stokes-shifted fluorescence emission. This long-wavelength fluorescence signal is then passed to a pair of lenses on either side of a pinhole that is positioned at a plane exactly conjugate with the focal plane of the objective lens. Photons collected from the focal volume of the object are collimated by the objective lens and are focused by the confocal lenses through the pinhole. Fluorescence generated above or below the focal plane will therefore not be collimated properly, and will not pass through the confocal pinhole1, creating an optical section in which only light from the microscope focus is visible (Fig. 1). Thus the pinhole effectively acts as a virtual aperture in the focal plane, confining the detected emission to only one limited spatial location.
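A practical consequence of the conjugate-plane geometry is the choice of pinhole size, usually expressed in Airy units at the pinhole plane. A hedged sketch of the standard diffraction formula (the example wavelength, NA, and magnification are illustrative, not values from this protocol):

```python
def airy_diameter_um(wavelength_nm, numerical_aperture, total_magnification):
    """Diameter of the Airy disk projected onto the pinhole plane, in
    micrometers: 1.22 * lambda / NA at the sample, scaled by the total
    magnification between sample and pinhole. A pinhole of roughly one
    Airy unit is a common compromise between signal collection and
    optical-sectioning strength."""
    d_sample_um = 1.22 * wavelength_nm / numerical_aperture / 1000.0
    return d_sample_um * total_magnification
```

For 520 nm emission, a 1.0 NA objective, and 60x total magnification to the pinhole, one Airy unit is about 38 µm — a convenient physical pinhole size.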
Modern commercial confocal microscopes offer users fully automated operation, making formerly complex imaging procedures relatively straightforward and accessible. Despite the flexibility and power of these systems, commercial confocal microscopes are not well suited for all confocal imaging tasks, such as many in vivo imaging applications. Without the ability to create customized imaging systems to meet their needs, important experiments can remain out of reach to many scientists.
In this article, we provide a step-by-step method for the complete construction of a custom, video-rate confocal imaging system from basic components. The upright microscope will be constructed using a resonant galvanometric mirror to provide the fast scanning axis, while a standard-speed galvanometric mirror will scan the slow axis. To create a precise scanned beam in the objective lens focus, these mirrors will be positioned at the so-called telecentric planes using four relay lenses. Confocal detection will be accomplished using a standard, off-the-shelf photomultiplier tube (PMT), and the images will be captured and displayed using a Matrox framegrabber card and the included software.
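The "video-rate" claim follows directly from the resonant mirror's line rate. A minimal sketch of the arithmetic (the 7.92 kHz scanner frequency and 512-line frame below are typical illustrative values, not specified in the abstract):

```python
def frame_rate_hz(resonant_freq_hz, lines_per_frame, bidirectional=True):
    """Frame rate of a resonant-scanned confocal: each mirror oscillation
    period sweeps the beam across the field twice, so acquiring on both
    sweep directions yields two image lines per cycle."""
    lines_per_second = resonant_freq_hz * (2 if bidirectional else 1)
    return lines_per_second / lines_per_frame
```

A 7.92 kHz resonant scanner acquiring bidirectionally into 512-line frames gives ~31 frames per second, i.e. video rate.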
Bioengineering, Issue 56, Microscopy, confocal microscopy, microendoscopy, video-rate, fluorescence, scanning, in vivo imaging
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
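The rank-ordered output of the pipeline's stages can be pictured with a trivial sketch. This is not Protein WISDOM's actual scoring code — just an illustration, under the stated assumption that lower potential energy ranks a candidate sequence higher:

```python
def rank_designs(candidates):
    """Rank candidate sequences by predicted potential energy (lower is
    assumed more stable), mimicking the rank-ordered list the sequence-
    selection stage produces before the fold-specificity and
    binding-affinity stages re-evaluate the survivors."""
    return sorted(candidates, key=lambda c: c["energy"])
```

For example, ranking hypothetical candidates {"seq": "AVLK", "energy": -512.3} and {"seq": "AVIK", "energy": -540.1} places AVIK first.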
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Localization and Relative Quantification of Carbon Nanotubes in Cells with Multispectral Imaging Flow Cytometry
Institutions: CNRS/Université Paris Diderot, CNRS/Université Paris Diderot, CNRS/Institut de Biologie Moléculaire et Cellulaire.
Carbon-based nanomaterials, like carbon nanotubes (CNTs), belong to a class of nanoparticles that are very difficult to discriminate from carbon-rich cell structures, and de facto there is still no quantitative method to assess their distribution at cell and tissue levels. What we propose here is an innovative method allowing the detection and quantification of CNTs in cells using a multispectral imaging flow cytometer (ImageStream, Amnis). This newly developed device combines high cell throughput with high-resolution imaging, providing images for each cell directly in flow and therefore statistically relevant image analysis. Each cell image is acquired on bright-field (BF), dark-field (DF), and fluorescent channels, giving access respectively to the level and distribution of light absorption, light scattering, and fluorescence for each cell. The analysis then consists of a pixel-by-pixel comparison of each image for the 7,000-10,000 cells acquired per experimental condition. Localization and quantification of CNTs are made possible by particular intrinsic properties of CNTs, namely strong light absorbance and scattering: CNTs appear as strongly absorbing dark spots on BF and bright spots on DF with precise colocalization.
This methodology could have a considerable impact on studies of interactions between nanomaterials and cells, given that this protocol is applicable to a large range of nanomaterials, provided they absorb and/or scatter light strongly enough.
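The pixel-by-pixel BF/DF colocalization criterion can be sketched as a simple mask. This is an illustration of the logic only — the thresholds and 2x2 toy images are hypothetical, and the real analysis runs on full ImageStream channel images:

```python
def cnt_mask(bf, df, bf_dark_thresh, df_bright_thresh):
    """Pixel-by-pixel colocalization mask over two equally sized images
    (nested lists of intensities): True where a pixel is dark in
    bright-field (strong absorption) AND bright in dark-field (strong
    scattering) -- the joint signature used to flag CNT aggregates."""
    return [[(b < bf_dark_thresh) and (d > df_bright_thresh)
             for b, d in zip(b_row, d_row)]
            for b_row, d_row in zip(bf, df)]
```

Summing the mask per cell then gives a relative per-cell CNT load that can be compared across experimental conditions.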
Bioengineering, Issue 82, bioengineering, imaging flow cytometry, Carbon Nanotubes, bio-nano-interactions, cellular uptake, cell trafficking
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization.
Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods and integrates water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226, and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
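The "tradeoff frontier" the program reports is the set of non-dominated solutions under the two minimization objectives. A minimal sketch of Pareto dominance and frontier extraction (the (cost, pollutant-load) pairs in the example are invented; the real program evaluates each candidate with SWAT):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (both cost and pollutant load are minimized)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def tradeoff_frontier(solutions):
    """Non-dominated (cost, pollutant-load) pairs: the cost/water-quality
    tradeoff frontier presented to the watershed manager."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

Given candidates [(10, 5), (8, 7), (12, 4), (9, 9)], the pair (9, 9) is dominated by (8, 7) and drops out; the remaining three form the frontier.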
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Minimal Erythema Dose (MED) Testing
Institutions: Fox Chase Cancer Center , University of Pennsylvania , Drexel University , Fox Chase Cancer Center , The Cancer Institute of New Jersey.
Ultraviolet radiation (UV) therapy is sometimes used as a treatment for various common skin conditions, including psoriasis, acne, and eczema. The dosage of UV light is prescribed according to an individual's skin sensitivity. Thus, to establish the proper dosage of UV light to administer to a patient, the patient is sometimes screened to determine a minimal erythema dose (MED), which is the amount of UV radiation that will produce minimal erythema (sunburn or redness caused by engorgement of capillaries) of an individual's skin within a few hours following exposure. This article describes how to conduct minimal erythema dose (MED) testing. There is currently no easy way to determine an appropriate UV dose for clinical or research purposes without conducting formal MED testing, which requires observation hours after exposure, or resorting to informal trial-and-error testing with the risks of under- or over-dosing. However, some alternative methods are discussed.
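MED testing is typically administered as a graded series of doses to adjacent skin sites, with exposure time set by the lamp's measured irradiance. A hedged sketch of that dosing arithmetic (the starting dose, step ratio, and irradiance values are hypothetical examples, not this protocol's prescription):

```python
def med_dose_ladder(start_dose_mj_cm2, step_ratio, n_sites):
    """Geometric series of UV doses applied to adjacent test sites; the
    MED is read out as the lowest dose producing just-perceptible
    erythema at the follow-up observation."""
    return [start_dose_mj_cm2 * step_ratio ** i for i in range(n_sites)]

def exposure_time_s(dose_mj_cm2, irradiance_mw_cm2):
    """Shutter time for a target dose: mJ/cm^2 divided by mW/cm^2 gives
    seconds, since 1 mW = 1 mJ/s."""
    return dose_mj_cm2 / irradiance_mw_cm2
```

For instance, a 3-site ladder starting at 20 mJ/cm² with 25% steps gives doses of 20, 25, and 31.25 mJ/cm²; delivering 50 mJ/cm² with a 2 mW/cm² source takes 25 s.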
Medicine, Issue 75, Anatomy, Physiology, Dermatology, Analytical, Diagnostic, Therapeutic Techniques, Equipment, Health Care, Minimal erythema dose (MED) testing, skin sensitivity, ultraviolet radiation, spectrophotometry, UV exposure, psoriasis, acne, eczema, clinical techniques
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention, as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
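The interregional covariance that multivariate approaches build on — and that voxel-wise univariate analyses never compute — is just the sample covariance between two regions' activation time courses. A minimal sketch (the three-point series in the example are invented):

```python
def sample_covariance(x, y):
    """Sample covariance between the activation time courses of two brain
    regions: the elementary interregional quantity whose full matrix
    (across all regions) multivariate decompositions operate on."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    return sum((a - mean_x) * (b - mean_y)
               for a, b in zip(x, y)) / (n - 1)
```

In practice the covariance matrix over all region pairs is then decomposed (e.g. into principal components) to yield network-level patterns rather than voxel-level maps.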
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
Growth Factor-Coated Bead Placement on Dorsal Forebrain Explants
Institutions: University of California, Irvine (UCI), University of California, Irvine (UCI), University of California, Irvine (UCI).
Developmental Biology, Issue 2, Growth Factor, Neuroscience, mouse, Affi-Gel Beads