Pubmed Article
Four decades of forest persistence, clearance and logging on Borneo.
PUBLISHED: 01-01-2014
The native forests of Borneo have been impacted by selective logging, fire, and conversion to plantations at unprecedented scales since industrial-scale extractive industries began in the early 1970s. There is no island-wide documentation of forest clearance or logging since the 1970s. This creates an information gap for conservation planning, especially with regard to selectively logged forests that maintain high conservation potential. Analyzing LANDSAT images, we estimate that 75.7% (558,060 km²) of Borneo's area (737,188 km²) was forested around 1973. Based upon a forest cover map for 2010 derived using ALOS-PALSAR and visually reviewing LANDSAT images, we estimate that the 1973 forest area had declined by 168,493 km² (30.2%) by 2010. The highest losses were recorded in Sabah and Kalimantan, with 39.5% and 30.7% of their 1973 forest area becoming non-forest by 2010, and the lowest in Brunei and Sarawak (8.4% and 23.1%, respectively). We estimate that the combined area planted in industrial oil palm and timber plantations in 2010 was 75,480 km², representing 10% of Borneo. We mapped 271,819 km of primary logging roads that were created between 1973 and 2010. The greatest density of logging roads was found in Sarawak, at 0.89 km km⁻², and the lowest density in Brunei, at 0.18 km km⁻². Analyzing MODIS-based tree cover maps, we estimate that logging operated within 700 m of primary logging roads. Using this distance, we estimate that 266,257 km² of the 1973 forest cover has been logged. With 389,566 km² (52.8%) of the island remaining forested, of which 209,649 km² remains intact, there is still hope for biodiversity conservation in Borneo. Protecting logged forests from fire and conversion to plantations is an urgent priority for reducing rates of deforestation in Borneo.
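The headline percentages above follow from simple area arithmetic. As an illustrative cross-check only (a minimal Python sketch using the figures quoted in the abstract, not part of the original analysis):

# Cross-check of the Borneo forest-change figures quoted in the abstract.
island_area_km2 = 737188.0
forest_1973_km2 = 558060.0
forest_loss_km2 = 168493.0
forest_2010_km2 = 389566.0

pct_forested_1973 = 100.0 * forest_1973_km2 / island_area_km2          # ~75.7%
pct_1973_forest_lost = 100.0 * forest_loss_km2 / forest_1973_km2       # ~30.2%
pct_island_forested_2010 = 100.0 * forest_2010_km2 / island_area_km2   # ~52.8%

print(f"Forested in 1973: {pct_forested_1973:.1f}% of the island")
print(f"1973 forest lost by 2010: {pct_1973_forest_lost:.1f}%")
print(f"Forested in 2010: {pct_island_forested_2010:.1f}% of the island")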
Authors: Jennifer L Soong, Dan Reuss, Colin Pinney, Ty Boyack, Michelle L Haddix, Catherine E Stewart, M. Francesca Cotrufo.
Published: 01-16-2014
Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes, from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N, 18O or 2H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope-labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e. leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage5-7. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C and 15N labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components. We present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. Through this method we have produced plant material with a 4.4 atom% 13C and 6.7 atom% 15N uniform plant label, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13C-CO2 atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.
Related JoVE Articles
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Authors: Jennifer L. Koch, David W. Carey.
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which create entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex1. Here we present a protocol that we have developed, based on the artificial infestation technique by Houston2, which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of management of BBD through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
Linking Predation Risk, Herbivore Physiological Stress and Microbial Decomposition of Plant Litter
Authors: Oswald J. Schmitz, Mark A. Bradford, Michael S. Strickland, Dror Hawlena.
Institutions: Yale University, Virginia Tech, The Hebrew University of Jerusalem.
The quantity and quality of detritus entering the soil determine the rate of decomposition by microbial communities, as well as the recycling rate of nitrogen (N) and the sequestration of carbon (C)1,2. Plant litter comprises the majority of detritus3, and so it is assumed that decomposition is only marginally influenced by biomass inputs from animals such as herbivores and carnivores4,5. However, carnivores may influence microbial decomposition of plant litter via a chain of interactions in which predation risk alters the physiology of their herbivore prey, which in turn alters soil microbial functioning when the herbivore carcasses are decomposed6. A physiological stress response by herbivores to the risk of predation can change the C:N elemental composition of herbivore biomass7,8,9, because stress from predation risk increases herbivore basal energy demands, which in nutrient-limited systems forces herbivores to shift their consumption from N-rich resources that support growth and reproduction to C-rich carbohydrate resources that support heightened metabolism6. Herbivores have limited ability to store excess nutrients, so stressed herbivores excrete N as they increase carbohydrate-C consumption7. Ultimately, prey stressed by predation risk increase their body C:N ratio7,10, making them poorer-quality resources for the soil microbial pool, likely due to lower availability of labile N for microbial enzyme production6. Thus, decomposition of carcasses of stressed herbivores has a priming effect on the functioning of microbial communities that decreases the subsequent ability of microbes to decompose plant litter6,10,11. We present the methodology to evaluate linkages between predation risk and litter decomposition by soil microbes. We describe how to induce stress in herbivores from predation risk, measure those stress responses, and measure the consequences for microbial decomposition. We use insights from a model grassland ecosystem comprising a hunting spider predator (Pisaurina mira), a dominant grasshopper herbivore (Melanoplus femurrubrum), and a variety of grass and forb plants9.
Environmental Sciences, Issue 73, Microbiology, Plant Biology, Entomology, Organisms, Investigative Techniques, Biological Phenomena, Chemical Phenomena, Metabolic Phenomena, Microbiological Phenomena, Earth Resources and Remote Sensing, Life Sciences (General), Litter Decomposition, Ecological Stoichiometry, Physiological Stress and Ecosystem Function, Predation Risk, Soil Respiration, Carbon Sequestration, Soil Science, respiration, spider, grasshopper, model system
Reduced Itraconazole Concentration and Durations Are Successful in Treating Batrachochytrium dendrobatidis Infection in Amphibians
Authors: Laura A. Brannelly.
Institutions: James Cook University.
Amphibians are experiencing the greatest decline of any vertebrate class, and a leading cause of these declines is a fungal pathogen, Batrachochytrium dendrobatidis (Bd), which causes the disease chytridiomycosis. Captive assurance colonies are important worldwide for threatened amphibian species and may be the only lifeline for those in critical threat of extinction. Maintaining disease-free colonies is a priority of captive managers, yet safe and effective treatments for all species and across life stages have not been identified. The most widely used chemotherapeutic treatment is itraconazole, although the dosage commonly used can be harmful to some individuals and species. We performed a clinical treatment trial to assess whether a lower, safer, but still effective dose of itraconazole could be found to cure Bd infections. We found that by reducing the treatment concentration from 0.01% to 0.0025% and the treatment duration from 11 to 6 days of 5 min baths, frogs could be cured of Bd infection with fewer side effects and less treatment-associated mortality.
Immunology, Issue 85, Batrachochytrium dendrobatidis, itraconazole, chytridiomycosis, captive assurance colonies, amphibian conservation
Fabrication, Densification, and Replica Molding of 3D Carbon Nanotube Microstructures
Authors: Davor Copic, Sei Jin Park, Sameh Tawfick, Michael De Volder, A. John Hart.
Institutions: University of Michigan, IMEC, Belgium.
The introduction of new materials and processes to microfabrication has, in large part, enabled many important advances in microsystems, lab-on-a-chip devices, and their applications. In particular, capabilities for cost-effective fabrication of polymer microstructures were transformed by the advent of soft lithography and other micromolding techniques1,2, and this led to a revolution in applications of microfabrication to biomedical engineering and biology. Nevertheless, it remains challenging to fabricate microstructures with well-defined nanoscale surface textures, and to fabricate arbitrary 3D shapes at the micro-scale. Robustness of master molds and maintenance of shape integrity are especially important for achieving high-fidelity replication of complex structures and preserving their nanoscale surface texture. The combination of hierarchical textures and heterogeneous shapes is a profound challenge to existing microfabrication methods, which largely rely upon top-down etching using fixed mask templates. On the other hand, the bottom-up synthesis of nanostructures such as nanotubes and nanowires can offer new capabilities to microfabrication, in particular by taking advantage of the collective self-organization of nanostructures, and local control of their growth behavior with respect to microfabricated patterns. Our goal is to introduce vertically aligned carbon nanotubes (CNTs), which we refer to as CNT "forests", as a new microfabrication material. We present details of a suite of related methods recently developed by our group: fabrication of CNT forest microstructures by thermal CVD from lithographically patterned catalyst thin films; self-directed elastocapillary densification of CNT microstructures; and replica molding of polymer microstructures using CNT composite master molds. In particular, our work shows that self-directed capillary densification ("capillary forming"), which is performed by condensation of a solvent onto the substrate with CNT microstructures, significantly increases the packing density of CNTs. This process enables directed transformation of vertical CNT microstructures into straight, inclined, and twisted shapes, which have robust mechanical properties exceeding those of typical microfabrication polymers. This in turn enables formation of nanocomposite CNT master molds by capillary-driven infiltration of polymers. The replica structures exhibit the anisotropic nanoscale texture of the aligned CNTs, and can have walls with sub-micron thickness and aspect ratios exceeding 50:1. Integration of CNT microstructures in fabrication offers further opportunity to exploit the electrical and thermal properties of CNTs, and diverse capabilities for chemical and biochemical functionalization3.
Mechanical Engineering, Issue 65, Physics, Carbon nanotube, microstructure, fabrication, molding, transfer, polymer
Flexural Rigidity Measurements of Biopolymers Using Gliding Assays
Authors: Douglas S. Martin, Lu Yu, Brian L. Van Hoozen.
Institutions: Lawrence University.
Microtubules are cytoskeletal polymers which play a role in cell division, cell mechanics, and intracellular transport. Each of these functions requires microtubules that are stiff and straight enough to span a significant fraction of the cell diameter. As a result, the microtubule persistence length, a measure of stiffness, has been actively studied for the past two decades1. Nonetheless, open questions remain: short microtubules are 10-50 times less stiff than long microtubules2-4, and even long microtubules have measured persistence lengths which vary by an order of magnitude5-9. Here, we present a method to measure microtubule persistence length. The method is based on a kinesin-driven microtubule gliding assay10. By combining sparse fluorescent labeling of individual microtubules with single-particle tracking of individual fluorophores attached to the microtubule, the gliding trajectories of single microtubules are tracked with nanometer-level precision. The persistence length of the trajectories is the same as the persistence length of the microtubule under the conditions used11. An automated tracking routine is used to create microtubule trajectories from fluorophores attached to individual microtubules, and the persistence length of this trajectory is calculated using routines written in IDL. This technique is rapidly implementable and capable of measuring the persistence length of 100 microtubules in one day of experimentation. The method can be extended to measure persistence length under a variety of conditions, including persistence length as a function of length along microtubules. Moreover, the analysis routines used can be extended to myosin-based actin gliding assays, to measure the persistence length of actin filaments as well.
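The trajectory persistence length can be estimated from the decay of correlations between tangent angles along the gliding path, using the 2D relation <cos Δθ(s)> = exp(-s / (2 Lp)). The published analysis uses routines written in IDL; the self-contained Python sketch below only illustrates the idea, with a synthetic trajectory, step size, and persistence length standing in for real tracking data.

import numpy as np

def wormlike_trajectory(n_steps, step_nm, lp_nm, rng):
    # Synthetic 2D worm-like-chain-style path: the heading angle diffuses with
    # variance step/Lp per step, giving <cos(dtheta)> = exp(-s / (2 Lp)).
    angles = np.cumsum(rng.normal(0.0, np.sqrt(step_nm / lp_nm), n_steps))
    steps = step_nm * np.column_stack((np.cos(angles), np.sin(angles)))
    return np.vstack(([0.0, 0.0], np.cumsum(steps, axis=0)))

def estimate_lp(xy, step_nm, max_lag=50):
    # Tangent angles between successive trajectory points.
    d = np.diff(xy, axis=0)
    theta = np.arctan2(d[:, 1], d[:, 0])
    lags = np.arange(1, max_lag + 1)
    corr = np.array([np.mean(np.cos(theta[k:] - theta[:-k])) for k in lags])
    s = lags * step_nm
    # Log-linear fit of the correlation decay; the slope equals -1 / (2 Lp) in 2D.
    slope = np.polyfit(s, np.log(corr), 1)[0]
    return -1.0 / (2.0 * slope)

rng = np.random.default_rng(0)
xy = wormlike_trajectory(n_steps=5000, step_nm=50.0, lp_nm=2.0e6, rng=rng)  # Lp = 2 mm
print(f"Estimated persistence length: {estimate_lp(xy, 50.0) / 1e6:.2f} mm")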
Biophysics, Issue 69, Bioengineering, Physics, Molecular Biology, Cellular Biology, microtubule, persistence length, flexural rigidity, gliding assay, mechanics, cytoskeleton, actin
Simultaneous Scalp Electroencephalography (EEG), Electromyography (EMG), and Whole-body Segmental Inertial Recording for Multi-modal Neural Decoding
Authors: Thomas C. Bulea, Atilla Kilicarslan, Recep Ozdemir, William H. Paloski, Jose L. Contreras-Vidal.
Institutions: National Institutes of Health, University of Houston.
Recent studies support the involvement of supraspinal networks in the control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for the development of non-invasive brain-machine-interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable, a BMI system must be applicable to everyday locomotor tasks such as over-ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over-ground walking tasks. By collecting these data in an uncontrolled environment, insight can be gained regarding the feasibility of decoding unconstrained gait and surface EMG from scalp EEG.
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Biomedical Engineering, Molecular Biology, Electroencephalography, EEG, Electromyography, EMG, electroencephalograph, gait, brain-computer interface, brain machine interface, neural decoding, over-ground walking, robotic gait, brain, imaging, clinical techniques
Deriving the Time Course of Glutamate Clearance with a Deconvolution Analysis of Astrocytic Transporter Currents
Authors: Annalisa Scimemi, Jeffrey S. Diamond.
Institutions: National Institutes of Health.
The highest density of glutamate transporters in the brain is found in astrocytes. Glutamate transporters couple the movement of glutamate across the membrane with the co-transport of 3 Na+ and 1 H+ and the counter-transport of 1 K+. The stoichiometric current generated by the transport process can be monitored with whole-cell patch-clamp recordings from astrocytes. The time course of the recorded current is shaped by the time course of the glutamate concentration profile to which astrocytes are exposed, the kinetics of glutamate transporters, and the passive electrotonic properties of astrocytic membranes. Here we describe the experimental and analytical methods that can be used to record glutamate transporter currents in astrocytes and isolate the time course of glutamate clearance from all other factors that shape the waveform of astrocytic transporter currents. The methods described here can be used to estimate the lifetime of flash-uncaged and synaptically-released glutamate at astrocytic membranes in any region of the central nervous system during health and disease.
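As a conceptual illustration of the deconvolution step (the actual analysis also accounts for transporter kinetics and electrotonic filtering as described in the article), the hedged Python sketch below treats the recorded current as the clearance time course convolved with a filter and recovers the clearance waveform by regularized frequency-domain division; all waveforms and time constants are synthetic.

import numpy as np

dt = 0.1e-3                                   # 0.1 ms sampling interval
t = np.arange(0, 0.2, dt)                     # 200 ms trace

# Synthetic "true" glutamate clearance time course: fast rise, ~5 ms decay.
clearance = (1 - np.exp(-t / 0.5e-3)) * np.exp(-t / 5e-3)

# Synthetic filter standing in for transporter kinetics plus membrane filtering.
h = np.exp(-t / 3e-3)
h /= h.sum()

# The "recorded" transporter current is the convolution of the two.
current = np.convolve(clearance, h)[:t.size]

# Frequency-domain deconvolution with a small regularization term (Wiener-like).
H = np.fft.rfft(h)
C = np.fft.rfft(current)
eps = 1e-3 * np.max(np.abs(H)) ** 2
recovered = np.fft.irfft(C * np.conj(H) / (np.abs(H) ** 2 + eps), n=t.size)

peak_err = np.max(np.abs(recovered - clearance)) / np.max(clearance)
print(f"Peak relative error of the recovered clearance waveform: {peak_err:.2%}")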
Neurobiology, Issue 78, Neuroscience, Biochemistry, Molecular Biology, Cellular Biology, Anatomy, Physiology, Biophysics, Astrocytes, Synapses, Glutamic Acid, Membrane Transport Proteins, glutamate transporters, uptake, clearance, hippocampus, stratum radiatum, CA1, gene, brain, slice, animal model
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Authors: Colin W. Bell, Barbara E. Fricks, Jennifer D. Rocca, Jessica M. Steinweg, Shawna K. McMahon, Matthew D. Wallenstein.
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial in understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically a 50 mM sodium acetate or 50 mM Tris buffer) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e. C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thus detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample). Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e. colorimetric) assays, but can suffer from interference caused by impurities and the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data from cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
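To illustrate how raw microplate fluorescence is typically converted into a potential activity rate, here is a minimal, hedged Python sketch; the standard-curve slope, quench correction, soil mass, and incubation time are hypothetical placeholders rather than values prescribed by this protocol.

def potential_activity(net_fluorescence, standard_slope, quench_factor,
                       soil_g_per_well, hours):
    # net_fluorescence: sample fluorescence minus substrate and soil-slurry controls
    # standard_slope: fluorescence units per nmol of free fluorophore (standard curve)
    # quench_factor: correction for quenching of fluorescence by the soil slurry
    nmol_released = net_fluorescence / (standard_slope * quench_factor)
    return nmol_released / (soil_g_per_well * hours)

# Example with made-up readings (~296 nmol g^-1 soil hr^-1):
rate = potential_activity(net_fluorescence=2000.0, standard_slope=500.0,
                          quench_factor=0.9, soil_g_per_well=0.005, hours=3.0)
print(f"Potential activity: {rate:.1f} nmol g^-1 soil hr^-1")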
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Optimized Staining and Proliferation Modeling Methods for Cell Division Monitoring using Cell Tracking Dyes
Authors: Joseph D. Tario Jr., Kristen Humphrey, Andrew D. Bantly, Katharine A. Muirhead, Jonni S. Moore, Paul K. Wallace.
Institutions: Roswell Park Cancer Institute, University of Pennsylvania, SciGro, Inc.
Fluorescent cell tracking dyes, in combination with flow and image cytometry, are powerful tools with which to study the interactions and fates of different cell types in vitro and in vivo1-5. Although there are literally thousands of publications using such dyes, some of the most commonly encountered cell tracking applications include monitoring of: stem and progenitor cell quiescence, proliferation and/or differentiation6-8; antigen-driven membrane transfer9 and/or precursor cell proliferation3,4,10-18; and immune regulatory and effector cell function1,18-21. Commercially available cell tracking dyes vary widely in their chemistries and fluorescence properties, but the great majority fall into one of two classes based on their mechanism of cell labeling. "Membrane dyes", typified by PKH26, are highly lipophilic dyes that partition stably but non-covalently into cell membranes1,2,11. "Protein dyes", typified by CFSE, are amino-reactive dyes that form stable covalent bonds with cell proteins4,16,18. Each class has its own advantages and limitations. The key to their successful use, particularly in multicolor studies where multiple dyes are used to track different cell types, is therefore to understand the critical issues enabling optimal use of each class2-4,16,18,24. The protocols included here highlight three common causes of poor or variable results when using cell-tracking dyes: (1) failure to achieve bright, uniform, reproducible labeling, which is a necessary starting point for any cell tracking study but requires attention to different variables when using membrane dyes than when using protein dyes or equilibrium binding reagents such as antibodies; (2) suboptimal fluorochrome combinations and/or failure to include critical compensation controls, since tracking dye fluorescence is typically 10²-10³ times brighter than antibody fluorescence, making it essential to verify that the presence of tracking dye does not compromise the ability to detect other probes being used; and (3) failure to obtain a good fit with peak modeling software, which allows quantitative comparison of proliferative responses across different populations or stimuli based on precursor frequency or other metrics, but requires exclusion of dead/dying cells that can distort dye dilution profiles and matching of the assumptions underlying the model with characteristics of the observed dye dilution profile. Examples given here illustrate how these variables can affect results when using membrane and/or protein dyes to monitor cell proliferation.
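As a hedged illustration of the precursor-frequency metric mentioned in point (3) (not of any particular peak-modeling software), the short Python sketch below converts a hypothetical table of event counts per dye-dilution generation into a precursor frequency and a percent-divided estimate.

# Hypothetical dye-dilution peak counts; generation 0 = undivided cells.
events_per_generation = [1200, 800, 900, 1400, 1100, 600]    # generations 0..5

# A cell in generation i represents 1 / 2**i of an original precursor cell.
precursors = [n / 2 ** i for i, n in enumerate(events_per_generation)]

total_precursors = sum(precursors)
divided_precursors = sum(precursors[1:])

precursor_frequency = divided_precursors / total_precursors
percent_divided_cells = sum(events_per_generation[1:]) / sum(events_per_generation)

print(f"Precursor frequency (fraction of starting cells that divided): {precursor_frequency:.2%}")
print(f"Fraction of recovered cells that divided at least once: {percent_divided_cells:.2%}")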
Cellular Biology, Issue 70, Molecular Biology, Cell tracking, PKH26, CFSE, membrane dyes, dye dilution, proliferation modeling, lymphocytes
In vitro Mesothelial Clearance Assay that Models the Early Steps of Ovarian Cancer Metastasis
Authors: Rachel A. Davidowitz, Marcin P. Iwanicki, Joan S. Brugge.
Institutions: Harvard Medical School.
Ovarian cancer is the fifth leading cause of cancer-related deaths in the United States1. Despite a positive initial response to therapies, 70 to 90 percent of women with ovarian cancer develop new metastases, and the recurrence is often fatal2. It is, therefore, necessary to understand how secondary metastases arise in order to develop better treatments for intermediate and late stage ovarian cancer. Ovarian cancer metastasis occurs when malignant cells detach from the primary tumor site and disseminate throughout the peritoneal cavity. The disseminated cells can form multicellular clusters, or spheroids, that will either remain unattached, or implant onto organs within the peritoneal cavity3 (Figure 1, Movie 1). All of the organs within the peritoneal cavity are lined with a single, continuous layer of mesothelial cells4-6 (Figure 2). However, mesothelial cells are absent from underneath peritoneal tumor masses, as revealed by electron micrograph studies of excised human tumor tissue sections3,5-7 (Figure 2). This suggests that mesothelial cells are excluded from underneath the tumor mass by an unknown process. Previous in vitro experiments demonstrated that primary ovarian cancer cells attach more efficiently to extracellular matrix than to mesothelial cells8, and more recent studies showed that primary peritoneal mesothelial cells actually provide a barrier to ovarian cancer cell adhesion and invasion (as compared to adhesion and invasion on substrates that were not covered with mesothelial cells)9,10. This would suggest that mesothelial cells act as a barrier against ovarian cancer metastasis. The cellular and molecular mechanisms by which ovarian cancer cells breach this barrier and exclude the mesothelium have, until recently, remained unknown. Here we describe the methodology for an in vitro assay that models the interaction between ovarian cancer cell spheroids and mesothelial cells in vivo (Figure 3, Movie 2). Our protocol was adapted from previously described methods for analyzing ovarian tumor cell interactions with mesothelial monolayers8-16, and was first described in a report showing that ovarian tumor cells utilize an integrin-dependent activation of myosin and traction force to promote the exclusion of the mesothelial cells from under a tumor spheroid17. This model takes advantage of time-lapse fluorescence microscopy to monitor the two cell populations in real time, providing spatial and temporal information on the interaction. The ovarian cancer cells express red fluorescent protein (RFP) while the mesothelial cells express green fluorescent protein (GFP). RFP-expressing ovarian cancer cell spheroids attach to the GFP-expressing mesothelial monolayer. The spheroids spread, invade, and force the mesothelial cells aside, creating a hole in the monolayer. This hole is visualized as the negative space (black) in the GFP image. The area of the hole can then be measured to quantitatively analyze differences in clearance activity between control and experimental populations of ovarian cancer and/or mesothelial cells. This assay requires only a small number of ovarian cancer cells (100 cells per spheroid × 20-30 spheroids per condition), so it is feasible to perform this assay using precious primary tumor cell samples. Furthermore, this assay can be easily adapted for high throughput screening.
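The clearance measurement reduces to quantifying the GFP-negative hole under each spheroid. A minimal, hedged sketch of that image-analysis step in Python using scikit-image is shown below; the file name, pixel size, Otsu thresholding, and speckle-size cutoff are assumptions for illustration, not part of the published protocol.

from skimage import io, filters, morphology

def clearance_area(gfp_image_path, pixel_size_um=0.65, min_hole_px=500):
    # Return the cleared (GFP-negative) area in square micrometers.
    img = io.imread(gfp_image_path, as_gray=True)
    # Pixels below the Otsu threshold are treated as mesothelium-free (cleared) area.
    hole_mask = img < filters.threshold_otsu(img)
    # Remove small dark speckles so only the contiguous hole under the spheroid remains.
    hole_mask = morphology.remove_small_objects(hole_mask, min_size=min_hole_px)
    return hole_mask.sum() * pixel_size_um ** 2

# Example call (hypothetical file name and pixel size):
# print(clearance_area("spheroid_8h_gfp.tif", pixel_size_um=0.65))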
Medicine, Issue 60, Ovarian Cancer, Metastasis, In vitro Model, Mesothelial, Spheroid
Extraction of High Molecular Weight Genomic DNA from Soils and Sediments
Authors: Sangwon Lee, Steven J. Hallam.
Institutions: University of British Columbia - UBC.
The soil microbiome is a vast and relatively unexplored reservoir of genomic diversity and metabolic innovation that is intimately associated with nutrient and energy flow within terrestrial ecosystems. Cultivation-independent environmental genomic, also known as metagenomic, approaches promise unprecedented access to this genetic information with respect to pathway reconstruction and functional screening for high-value therapeutic and biomass conversion processes. However, the soil microbiome still remains a challenge, largely due to the difficulty in obtaining high molecular weight DNA of sufficient quality for large insert library production. Here we introduce a protocol for extracting high molecular weight, microbial community genomic DNA from soils and sediments. The quality of isolated genomic DNA is ideal for constructing large insert environmental genomic libraries for downstream sequencing and screening applications. The procedure starts with cell lysis. Cell walls and membranes of microbes are lysed by both mechanical (grinding) and chemical (β-mercaptoethanol) forces. Genomic DNA is then isolated using extraction buffer, chloroform-isoamyl alcohol, and isopropyl alcohol. The buffers employed for the lysis and extraction steps include guanidine isothiocyanate and hexadecyltrimethylammonium bromide (CTAB) to preserve the integrity of the high molecular weight genomic DNA. Depending on the downstream application, the isolated genomic DNA can be further purified using cesium chloride (CsCl) gradient ultracentrifugation, which reduces impurities including humic acids. The first procedure, extraction, takes approximately 8 hours, excluding the DNA quantification step. The CsCl gradient ultracentrifugation is a two-day process. During the entire procedure, genomic DNA should be treated gently to prevent shearing: avoid severe vortexing and repetitive harsh pipetting.
Microbiology, Issue 33, Environmental DNA, high molecular weight genomic DNA, DNA extraction, soil, sediments
Fabrication of Nano-engineered Transparent Conducting Oxides by Pulsed Laser Deposition
Authors: Paolo Gondoni, Matteo Ghidelli, Fabio Di Fonzo, Andrea Li Bassi, Carlo S. Casari.
Institutions: Politecnico di Milano, Istituto Italiano di Tecnologia.
Nanosecond Pulsed Laser Deposition (PLD) in the presence of a background gas allows the deposition of metal oxides with tunable morphology, structure, density and stoichiometry through proper control of the plasma plume expansion dynamics. Such versatility can be exploited to produce nanostructured films ranging from compact and dense to nanoporous, characterized by a hierarchical assembly of nano-sized clusters. In particular, we describe the detailed methodology to fabricate two types of Al-doped ZnO (AZO) films as transparent electrodes in photovoltaic devices: 1) at low O2 pressure, compact films with electrical conductivity and optical transparency close to the state of the art for transparent conducting oxides (TCO) can be deposited at room temperature, making them compatible with thermally sensitive materials such as the polymers used in organic photovoltaics (OPVs); 2) highly light-scattering hierarchical structures resembling a forest of nano-trees are produced at higher pressures. Such structures show a high haze factor (>80%) and may be exploited to enhance the light trapping capability. The method described here for AZO films can be applied to other metal oxides relevant for technological applications such as TiO2, Al2O3, WO3 and Ag4O4.
Materials Science, Issue 72, Physics, Nanotechnology, Nanoengineering, Oxides, thin films, thin film theory, deposition and growth, Pulsed Laser Deposition (PLD), Transparent conducting oxides (TCO), Hierarchically organized nanostructured oxides, Al doped ZnO (AZO) films, enhanced light scattering capability, gases, deposition, nanoporous, nanoparticles, Van der Pauw, scanning electron microscopy, SEM
Label-free in situ Imaging of Lignification in Plant Cell Walls
Authors: Martin Schmidt, Pradeep Perera, Adam M. Schwartzberg, Paul D. Adams, P. James Schuck.
Institutions: University of California, Berkeley, Lawrence Berkeley National Laboratory.
Meeting growing energy demands safely and efficiently is a pressing global challenge. Therefore, research into biofuels production that seeks to find cost-effective and sustainable solutions has become a topical and critical task. Lignocellulosic biomass is poised to become the primary source of biomass for the conversion to liquid biofuels1-6. However, the recalcitrance of these plant cell wall materials to cost-effective and efficient degradation presents a major impediment for their use in the production of biofuels and chemicals4. In particular, lignin, a complex and irregular poly-phenylpropanoid heteropolymer, is problematic for the postharvest deconstruction of lignocellulosic biomass. For example, in biomass conversion for biofuels, it inhibits saccharification in processes aimed at producing simple sugars for fermentation7. The effective use of plant biomass for industrial purposes is in fact largely dependent on the extent to which the plant cell wall is lignified. The removal of lignin is a costly and limiting factor8, and lignin has therefore become a key plant breeding and genetic engineering target for improving cell wall conversion. Analytical tools that permit the accurate and rapid characterization of lignification of plant cell walls are becoming increasingly important for evaluating large numbers of breeding populations. Extractive procedures for the isolation of native components such as lignin are inevitably destructive, bringing about significant chemical and structural modifications9-11. Analytical chemical in situ methods are thus invaluable tools for the compositional and structural characterization of lignocellulosic materials. Raman microscopy is a technique that relies on inelastic or Raman scattering of monochromatic light, like that from a laser, where the shift in energy of the laser photons is related to molecular vibrations and presents an intrinsic, label-free molecular "fingerprint" of the sample. Raman microscopy can afford non-destructive and comparatively inexpensive measurements with minimal sample preparation, giving insights into chemical composition and molecular structure in a close-to-native state. Chemical imaging by confocal Raman microscopy has been previously used for the visualization of the spatial distribution of cellulose and lignin in wood cell walls12-14. Based on these earlier results, we have recently adopted this method to compare lignification in wild type and lignin-deficient transgenic Populus trichocarpa (black cottonwood) stem wood15. Analyzing the lignin Raman bands16,17 in the spectral region between 1,600 and 1,700 cm⁻¹, lignin signal intensity and localization were mapped in situ. Our approach visualized differences in lignin content, localization, and chemical composition. Most recently, we demonstrated Raman imaging of cell wall polymers in Arabidopsis thaliana with lateral resolution that is sub-μm18. Here, this method is presented, affording visualization of lignin in plant cell walls and comparison of lignification in different tissues, samples, or species without staining or labeling of the tissues.
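As a hedged sketch of the mapping step described above (integrating the lignin Raman band between 1,600 and 1,700 cm⁻¹ at each pixel of a hyperspectral map), the Python snippet below operates on a synthetic data cube; it is not the acquisition or preprocessing pipeline used in the article.

import numpy as np

def lignin_band_map(cube, wavenumbers, band=(1600.0, 1700.0)):
    # cube: array of shape (rows, cols, n_wavenumbers) of baseline-corrected spectra
    # wavenumbers: 1D array of Raman shifts (cm^-1) matching the last cube axis
    in_band = (wavenumbers >= band[0]) & (wavenumbers <= band[1])
    # Summed intensity over the selected spectral window, pixel by pixel.
    return cube[:, :, in_band].sum(axis=2)

# Tiny synthetic example: a 4 x 4 map with spectra spanning 800-1,800 cm^-1.
wn = np.linspace(800, 1800, 501)
cube = np.random.default_rng(1).random((4, 4, wn.size))
print(lignin_band_map(cube, wn).shape)    # -> (4, 4) lignin intensity map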
Plant Biology, Issue 45, Raman microscopy, lignin, poplar wood, Arabidopsis thaliana
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Authors: Lisa M. Weatherly, Rachel H. Kennedy, Juyoung Shim, Julie A. Gosse.
Institutions: University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1. We have adapted this degranulation assay, originally published by Naal et al.1, for the screening of drugs and toxicants and demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2. In an effort to avoid the use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm)12. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation, and more broadly, their allergic potential.
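Degranulation in assays of this kind is often reported as released β-hexosaminidase fluorescence relative to the total cellular content. The minimal Python calculation below is a generic, hypothetical illustration of that normalization, not the exact readout prescribed by the cited protocol.

def percent_degranulation(sample_fluor, spontaneous_fluor, total_lysate_fluor):
    # sample_fluor: supernatant fluorescence from stimulated wells
    # spontaneous_fluor: supernatant fluorescence from unstimulated wells
    # total_lysate_fluor: fluorescence from fully lysed cells (100% release)
    return 100.0 * (sample_fluor - spontaneous_fluor) / (total_lysate_fluor - spontaneous_fluor)

# Example with made-up microplate readings (~30.9% release):
print(f"{percent_degranulation(5200.0, 900.0, 14800.0):.1f}% of granule content released")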
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Authors: Andreas Florian Haas, Ben Knowles, Yan Wei Lim, Tracey McDole Somera, Linda Wegley Kelly, Mark Hatay, Forest Rohwer.
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising already established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques used for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed with these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The protocols give easy-to-follow guidelines for conducting comprehensive surveys and discuss critical steps and potential caveats pertinent to each technique.
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
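To make the design-of-experiments idea concrete, here is a small, hedged Python sketch that enumerates a full factorial design for three hypothetical factors of the kind discussed above (promoter, incubation temperature, plant age); the study itself used software-guided optimal designs with step-wise augmentation rather than this brute-force enumeration, and the factor levels shown are invented.

from itertools import product

# Hypothetical factors and levels for a transient-expression experiment.
factors = {
    "promoter": ["35S", "nos"],              # regulatory element in the construct
    "incubation_temp_C": [22, 25, 28],       # incubation temperature during expression
    "plant_age_days": [35, 42, 49],          # plant age at infiltration
}

# Full factorial design: every combination of factor levels is one experimental run.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(f"{len(runs)} runs in the full factorial design")    # 2 x 3 x 3 = 18
for run in runs[:3]:
    print(run)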
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building up an image molecule by molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded, stained samples, and focused ion beam and serial block face scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Genome-wide Gene Deletions in Streptococcus sanguinis by High Throughput PCR
Authors: Xiuchun Ge, Ping Xu.
Institutions: Virginia Commonwealth University.
Transposon mutagenesis and single-gene deletion are two methods applied in genome-wide gene knockout in bacteria1,2. Although transposon mutagenesis is less time consuming, less costly, and does not require completed genome information, there are two weaknesses in this method: (1) the possibility of disparate mutant representation in the mixed mutant library, which counter-selects mutants with decreased competitiveness; and (2) the possibility of partial gene inactivation, whereby genes do not entirely lose their function following the insertion of a transposon. Single-gene deletion analysis may compensate for the drawbacks associated with transposon mutagenesis. To improve the efficiency of genome-wide single-gene deletion, we attempt to establish a high-throughput technique for genome-wide single-gene deletion using Streptococcus sanguinis as a model organism. Each gene deletion construct in the S. sanguinis genome is designed to comprise 1 kb upstream of the targeted gene, the aphA-3 gene (encoding the kanamycin resistance protein), and 1 kb downstream of the targeted gene. Three sets of primers, F1/R1, F2/R2, and F3/R3, are designed and synthesized in a 96-well plate format for PCR amplification of those three components of each deletion construct. Primers R1 and F3 contain 25-bp sequences that are complementary to regions of the aphA-3 gene at their 5' ends. A large-scale PCR amplification of the aphA-3 gene is performed once to create all single-gene deletion constructs. The promoter of the aphA-3 gene is initially excluded to minimize the potential polar effect of the kanamycin cassette. To create the gene deletion constructs, high-throughput PCR amplification and purification are performed in a 96-well plate format. A linear recombinant PCR amplicon for each gene deletion is made up through four PCR reactions using high-fidelity DNA polymerase. The initial exponential growth phase of S. sanguinis cultured in Todd Hewitt broth supplemented with 2.5% inactivated horse serum is used to increase competence for the transformation of PCR-recombinant constructs. Under this condition, up to 20% of S. sanguinis cells can be transformed using ~50 ng of DNA. Based on this approach, 2,048 mutants with single-gene deletions were ultimately obtained from the 2,270 genes in S. sanguinis SK36, excluding four gene ORFs contained entirely within other ORFs and 218 potential essential genes. The technique for creating gene deletion constructs is high throughput and can readily be applied to genome-wide single-gene deletion in any transformable bacterium.
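The construct layout described above (1 kb upstream, the aphA-3 cassette, and 1 kb downstream, joined via 25-bp complementary tails on primers R1 and F3) can be sketched in Python as follows; the sequences are random placeholders, and real designs are taken from the S. sanguinis SK36 genome.

import random

random.seed(0)
rand_seq = lambda n: "".join(random.choice("ACGT") for _ in range(n))

# Placeholder sequences standing in for the real genomic regions and cassette.
upstream_1kb = rand_seq(1000)             # region amplified with primers F1/R1
apha3_cassette = rand_seq(800)            # promoterless kanamycin-resistance gene (F2/R2)
downstream_1kb = rand_seq(1000)           # region amplified with primers F3/R3
overlap = 25                              # length of the complementary primer tails

# Primer R1 carries a tail matching the start of aphA-3 and primer F3 a tail matching
# its end, so the three amplicons share 25-bp overlaps for the final fusion PCR.
upstream_amplicon = upstream_1kb + apha3_cassette[:overlap]
downstream_amplicon = apha3_cassette[-overlap:] + downstream_1kb

# Overlap-extension assembly of the linear recombinant deletion construct.
construct = upstream_amplicon[:-overlap] + apha3_cassette + downstream_amplicon[overlap:]
assert construct == upstream_1kb + apha3_cassette + downstream_1kb
print(f"Recombinant amplicon length: {len(construct)} bp")    # 2,800 bp with these placeholders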
Genetics, Issue 69, Microbiology, Molecular Biology, Biomedical Engineering, Genomics, Streptococcus sanguinis, Streptococcus, Genome-wide gene deletions, genes, High-throughput, PCR
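The three-fragment layout described above lends itself to scripted primer design. The sketch below is a hypothetical illustration, not the authors' actual design script: it assumes a genome string, 0-based gene coordinates, and placeholder variables for the first and last 25 bp of the aphA-3 cassette, and it uses one common orientation convention for the overlap tails.

```python
def revcomp(seq):
    """Reverse complement of a DNA string (A/C/G/T/N only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def design_deletion_primers(genome, gene_start, gene_end,
                            apha3_head25, apha3_tail25,
                            flank=1000, primer_len=25):
    """Return F1/R1, F2/R2, F3/R3 for one deletion construct:
    1-kb upstream flank, aphA-3 cassette, 1-kb downstream flank.
    apha3_head25 / apha3_tail25 are placeholders for the first and last
    25 bp of the real aphA-3 cassette (sense strand); coordinates are
    0-based and end-exclusive. Tail orientation conventions can differ."""
    up = genome[gene_start - flank:gene_start]        # 1-kb upstream flank
    down = genome[gene_end:gene_end + flank]          # 1-kb downstream flank

    f1 = up[:primer_len]                                    # start of upstream flank
    r1 = revcomp(apha3_head25) + revcomp(up)[:primer_len]   # 5' aphA-3 tail + end of flank
    f2 = apha3_head25                                       # amplify the cassette itself
    r2 = revcomp(apha3_tail25)
    f3 = apha3_tail25 + down[:primer_len]                   # 5' aphA-3 tail + start of flank
    r3 = revcomp(down)[:primer_len]                         # end of downstream flank
    return {"F1": f1, "R1": r1, "F2": f2, "R2": r2, "F3": f3, "R3": r3}
```

In a 96-well workflow, a loop over a table of gene coordinates would populate one plate per primer set, mirroring the plate format described in the abstract.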
In Vitro Assay to Evaluate the Impact of Immunoregulatory Pathways on HIV-specific CD4 T Cell Effector Function
Authors: Filippos Porichis, Meghan G. Hart, Jennifer Zupkosky, Lucie Barblu, Daniel E. Kaufmann.
Institutions: The Ragon Institute of MGH, MIT and Harvard, Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM).
T cell exhaustion is a major factor in failed pathogen clearance during chronic viral infections. Immunoregulatory pathways, such as PD-1 and IL-10, are upregulated upon ongoing antigen exposure and contribute to loss of proliferation, reduced cytolytic function, and impaired cytokine production by CD4 and CD8 T cells. In the murine model of LCMV infection, administration of blocking antibodies against these two pathways augmented T cell responses. However, there is currently no in vitro assay to measure the impact of such blockade on cytokine secretion in cells from human samples. Our protocol and experimental approach enable us to accurately and efficiently quantify the restoration of cytokine production by HIV-specific CD4 T cells from HIV-infected subjects. Here, we describe an in vitro experimental design that enables measurement of cytokine secretion by HIV-specific CD4 T cells and of their impact on other cell subsets. CD8 T cells were depleted from whole blood and the remaining PBMCs were isolated by Ficoll separation. CD8-depleted PBMCs were then incubated with blocking antibodies against PD-L1 and/or IL-10Rα and, after stimulation with an HIV-1 Gag peptide pool, cells were incubated at 37 °C, 5% CO2. After 48 hr, supernatant was collected for cytokine analysis by bead arrays and cell pellets were collected for either phenotypic analysis using flow cytometry or transcriptional analysis using qRT-PCR. For more detailed analysis, different cell populations were obtained by selective subset depletion from PBMCs or by sorting using flow cytometry before being assessed in the same assays. These methods provide a highly sensitive and specific approach to determine the modulation of cytokine production by antigen-specific T-helper cells and to determine functional interactions between different populations of immune cells.
Immunology, Issue 80, Virus Diseases, Immune System Diseases, HIV, CD4 T cell, CD8 T cell, antigen-presenting cell, Cytokines, immunoregulatory networks, PD-1, IL-10, exhaustion, monocytes
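Quantifying the restoration of cytokine production from the bead-array readout typically reduces to a fold-change calculation over the no-blockade control. The sketch below is only an illustrative assumption (the function name, detection-limit floor, and all numeric values are invented, not measured data from the protocol).

```python
def fold_change(blockade_pg_ml, control_pg_ml, floor=1.0):
    """Fold-change in cytokine secretion relative to the no-blockade control.
    `floor` avoids division by zero for values below the assay detection limit."""
    return max(blockade_pg_ml, floor) / max(control_pg_ml, floor)

# Arbitrary illustrative numbers (pg/ml of one cytokine after Gag stimulation).
conditions = {
    "no blockade": 120.0,
    "anti-PD-L1": 410.0,
    "anti-IL-10Ra": 350.0,
    "anti-PD-L1 + anti-IL-10Ra": 690.0,
}
baseline = conditions["no blockade"]
for name, value in conditions.items():
    print(f"{name}: {fold_change(value, baseline):.1f}x")
```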
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economic methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework in which the model becomes an integral part of the optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying the stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates the modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226 and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between the costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
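To make the simulation-optimization idea concrete, the toy sketch below (not the authors' SPEA2/SWAT implementation) evolves a binary vector indicating which fields receive a conservation practice, with total cost and a crude stand-in for the SWAT-simulated pollutant load as the two minimization objectives.

```python
import random

def evaluate(solution, field_costs, field_loads, reduction=0.4):
    """Two objectives for one candidate: (total cost, remaining pollutant load).
    In the real framework the load would come from a SWAT model run; here a
    fixed proportional reduction per treated field is used purely for illustration."""
    cost = sum(c for c, bit in zip(field_costs, solution) if bit)
    load = sum(l * (1 - reduction) if bit else l
               for l, bit in zip(field_loads, solution))
    return cost, load

def dominates(a, b):
    """Pareto dominance when both objectives are minimized."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(field_costs, field_loads, pop_size=40, generations=100, mut_rate=0.05):
    """Very small evolutionary loop approximating a multiobjective search."""
    n = len(field_costs)
    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(s, evaluate(s, field_costs, field_loads)) for s in population]
        # Keep the non-dominated front (a crude stand-in for SPEA2's archive).
        front = [s for s, f in scored
                 if not any(dominates(g, f) for _, g in scored)]
        # Refill the population by recombining and mutating front members.
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(front, 2) if len(front) > 1 else (front[0], front[0])
            cut = random.randrange(1, n)
            child = p1[:cut] + p2[cut:]
            child = [1 - b if random.random() < mut_rate else b for b in child]
            children.append(child)
        population = children
    scored = [(s, evaluate(s, field_costs, field_loads)) for s in population]
    return [(s, f) for s, f in scored
            if not any(dominates(g, f) for _, g in scored)]
```

In the real framework, evaluate() would write a SWAT scenario, run the model, and read back the simulated nutrient loads; the need to call the full water quality model for every candidate is exactly why an evolutionary wrapper is used.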

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms still attempt to display the most relevant videos available, which can sometimes result in matched videos that are only loosely related.
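The matching algorithm itself is not described here, so the snippet below is only a plausible sketch of how abstract-to-video matching is commonly done: TF-IDF vectors over the text and cosine similarity to rank candidate videos (the scikit-learn usage and function name are assumptions, not JoVE's actual system).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_videos(abstract_text, video_descriptions, top_n=30):
    """Rank video descriptions by TF-IDF cosine similarity to one abstract."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # First row is the abstract, remaining rows are the candidate videos.
    matrix = vectorizer.fit_transform([abstract_text] + video_descriptions)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    # Return (video index, score) pairs, best matches first.
    return sorted(enumerate(scores), key=lambda x: x[1], reverse=True)[:top_n]
```

A low best score under such a scheme would correspond to the situation described above, where no video in the library is genuinely related to the abstract.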