Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients) and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a diverse set of scientific disciplines, comprising both well-established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are still subject to constant improvement, which has led to a variety of sampling and sample-processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed by these protocols yield the minimum of information essential for characterizing and understanding the mechanisms underlying viral and microbial community dynamics. The protocols give easy-to-follow guidelines for conducting comprehensive surveys and discuss critical steps and potential caveats pertinent to each technique.
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Institutions: University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef-building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high-resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter-scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Isolation of Cellular Lipid Droplets: Two Purification Techniques Starting from Yeast Cells and Human Placentas
Institutions: University of Tennessee.
Lipid droplets are dynamic organelles that can be found in most eukaryotic and certain prokaryotic cells. Structurally, the droplets consist of a core of neutral lipids surrounded by a phospholipid monolayer. One of the most useful techniques in determining the cellular roles of droplets has been proteomic identification of bound proteins, which can be isolated along with the droplets. Here, two methods are described to isolate lipid droplets and their bound proteins from two wide-ranging eukaryotes: fission yeast and human placental villous cells. Although both techniques have differences, the main method - density gradient centrifugation - is shared by both preparations. This shows the wide applicability of the presented droplet isolation techniques.
In the first protocol, yeast cells are converted into spheroplasts by enzymatic digestion of their cell walls. The resulting spheroplasts are then gently lysed in a loose-fitting homogenizer. Ficoll is added to the lysate to provide a density gradient, and the mixture is centrifuged three times. After the first spin, the lipid droplets are localized to the white-colored floating layer of the centrifuge tubes along with the endoplasmic reticulum (ER), the plasma membrane, and vacuoles. Two subsequent spins are used to remove these other three organelles. The result is a layer that has only droplets and bound proteins.
In the second protocol, placental villous cells are isolated from human term placentas by enzymatic digestion with trypsin and DNase I. The cells are homogenized in a loose-fitting homogenizer. Low-speed and medium-speed centrifugation steps are used to remove unbroken cells, cellular debris, nuclei, and mitochondria. Sucrose is added to the homogenate to provide a density gradient and the mixture is centrifuged to separate the lipid droplets from the other cellular fractions.
The purity of the lipid droplets in both protocols is confirmed by western blot analysis. The droplet fractions from both preparations are suitable for subsequent proteomic and lipidomic analysis.
Bioengineering, Issue 86, Lipid droplet, lipid body, fat body, oil body, Yeast, placenta, placental villous cells, isolation, purification, density gradient centrifugation
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
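The analysis routines above operate on a single record of time-stamped events. As a language-neutral illustration (Python rather than the system's MATLAB-based language, with hypothetical event codes and timestamps), the core idea of querying such a record can be sketched:

```python
# Sketch of analyzing a time-stamped behavioral event record.
# Event codes and times are invented for illustration.

def inter_event_intervals(record, event_code):
    """Intervals (s) between successive events of one type."""
    times = [t for t, code in record if code == event_code]
    return [b - a for a, b in zip(times, times[1:])]

def event_counts(record):
    """Tally how often each event code occurs."""
    counts = {}
    for _, code in record:
        counts[code] = counts.get(code, 0) + 1
    return counts

# Example record: (time in seconds, event code)
record = [
    (0.0, "hopper1_entry"),
    (2.5, "pellet_delivered"),
    (3.0, "hopper1_entry"),
    (9.0, "hopper2_entry"),
    (10.0, "hopper1_entry"),
]

print(event_counts(record)["hopper1_entry"])           # 3
print(inter_event_intervals(record, "hopper1_entry"))  # [3.0, 7.0]
```

Keeping the raw event record as the single source of truth, as the abstract describes, means every derived statistic can be recomputed from scratch, preserving the full data trail.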
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
The Fastest Western in Town: A Contemporary Twist on the Classic Western Blot Analysis
Institutions: University of California, San Francisco.
The Western blot techniques that were originally established in the late 1970s are still actively utilized today. However, this traditional method of Western blotting has several drawbacks that include low-quality resolution, spurious bands, decreased sensitivity, and poor protein integrity. Recent advances have drastically improved numerous aspects of the standard Western blot protocol to produce higher-quality qualitative and quantitative data. The Bis-Tris gel system, an alternative to the conventional Laemmli system, generates better protein separation and resolution, maintains protein integrity, and reduces electrophoresis to a 35 min run time. Moreover, the iBlot dry blotting system dramatically improves the efficacy and speed of protein transfer to the membrane, completing it in 7 min, in contrast to the traditional protein transfer methods that are often more inefficient with lengthy transfer times. In combination with these highly innovative modifications, protein detection using infrared fluorescent imaging results in higher-quality, more accurate and consistent data compared to the standard Western blotting technique of chemiluminescence. This technology can simultaneously detect two different antigens on the same membrane by utilizing two-color near-infrared dyes that are visualized in different fluorescent channels. Furthermore, the linearity and broad dynamic range of fluorescent imaging allows for the precise quantification of both strong and weak protein bands. Thus, this protocol describes the key improvements to the classic Western blotting method, in which these advancements significantly increase the quality of data while greatly reducing the performance time of this experiment.
Basic Protocol, Issue 84, Western blot, Bis-Tris, electrophoresis, dry blotting, protein transfer, infrared, Fluorescence, quantification, Antibody, Protein
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1 with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of brain endothelial monolayers, indicating the tightness of TJs, reached 300 ohm·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 × 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
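The Pe value quoted above is typically derived from tracer clearance across the filter. As a hedged illustration (hypothetical numbers and a generic clearance formulation; the protocol's exact calculation may differ in detail), the arithmetic can be sketched in Python: the slope of cleared volume vs. time gives the permeability-surface product (PS), the empty-filter contribution is removed in series, and division by the filter area yields Pe.

```python
# Illustrative Pe calculation from lucifer yellow clearance data.
# All numbers are hypothetical.

def clearance_slope(times_min, cleared_volumes_ul):
    """Least-squares slope of cleared volume (ul) vs. time (min) = PS (ul/min)."""
    n = len(times_min)
    mt = sum(times_min) / n
    mv = sum(cleared_volumes_ul) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times_min, cleared_volumes_ul))
    den = sum((t - mt) ** 2 for t in times_min)
    return num / den

def permeability_coefficient(ps_total, ps_filter, surface_cm2):
    """Pe (cm/min): correct filter+cells PS for the empty filter, divide by area."""
    ps_endothelium = 1.0 / (1.0 / ps_total - 1.0 / ps_filter)  # ul/min
    return ps_endothelium * 1e-3 / surface_cm2                 # ul -> cm^3

# Hypothetical clearance series for co-culture and empty filter:
times = [20, 40, 60]            # min
cl_total = [4.0, 8.0, 12.0]     # ul cleared, filter + cells
cl_filter = [20.0, 40.0, 60.0]  # ul cleared, empty filter

ps_t = clearance_slope(times, cl_total)   # 0.2 ul/min
ps_f = clearance_slope(times, cl_filter)  # 1.0 ul/min
pe = permeability_coefficient(ps_t, ps_f, surface_cm2=1.12)
print(round(pe * 1e3, 3))  # 0.223, i.e. ~0.22 x 10^-3 cm/min
```

The series correction matters: without subtracting the empty-filter PS, the filter itself would dominate the apparent permeability of a tight monolayer.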
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration.
Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads and can be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release.
The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
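The release-rate calculation underlying these incubations is simple mass-balance arithmetic: the change in P mass in the overlying water, normalized by sediment area and elapsed time. A minimal sketch, with hypothetical concentrations and core geometry:

```python
# Sketch of the standard internal-load arithmetic for a core incubation:
# P release rate = change in water-column P mass / (sediment area x time).
# All numbers are hypothetical.

def p_release_rate(tp_start_ug_l, tp_end_ug_l, water_volume_l,
                   core_area_m2, days):
    """Areal P flux in mg P m^-2 d^-1 from start/end total-P concentrations."""
    delta_mass_mg = (tp_end_ug_l - tp_start_ug_l) * water_volume_l / 1000.0
    return delta_mass_mg / (core_area_m2 * days)

# Example: TP rises from 20 to 100 ug/L over 5 days in 1 L of overlying
# water above a 0.005 m^2 core.
rate = p_release_rate(20.0, 100.0, 1.0, 0.005, 5.0)
print(rate)  # mg P m^-2 d^-1
```

Extrapolating such a rate to a whole-lake internal load then requires the assumptions discussed above about sediment area, redox conditions, and the time window of release.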
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
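As a toy illustration of the simplest end of this spectrum, a semi-automated pipeline often starts with global thresholding followed by connected-component labeling. The sketch below uses an invented 4x4 "image" and cutoff; real EM volumes are 3D, far noisier, and require the triage described above.

```python
# Minimal sketch of automated segmentation: global thresholding
# followed by 4-connected component labeling on a toy 2D image.

def threshold(image, cutoff):
    """Binarize: 1 where intensity exceeds the cutoff, else 0."""
    return [[1 if px > cutoff else 0 for px in row] for row in image]

def count_components(mask):
    """Count 4-connected foreground components with a flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

image = [
    [10, 80, 80,  5],
    [10, 80,  5,  5],
    [ 5,  5,  5, 90],
    [ 5,  5, 90, 90],
]
mask = threshold(image, 50)
print(count_components(mask))  # 2 bright features
```

Whether such a simple scheme suffices, or manual tracing or custom algorithms are needed, is exactly what the data-set characteristics in the triage scheme decide.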
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint (1). This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited (2), because the composition and spatial configuration of head tissues changes dramatically over development (3).

In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
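One widely used source-reconstruction method in this setting is minimum-norm estimation: with a leadfield matrix L mapping source amplitudes to sensor readings, the (regularized) estimate is s = Lᵀ(LLᵀ + λI)⁻¹y. A toy sketch of that linear algebra (hypothetical 2-sensor, 3-source leadfield, pure Python, λ = 0 for this noiseless example):

```python
# Toy minimum-norm estimation (MNE): s = L^T (L L^T + lam*I)^-1 y,
# worked out explicitly for a 2-sensor leadfield.

def mne_estimate(L, y, lam):
    """Minimum-norm source estimate for a 2-sensor leadfield."""
    n_src = len(L[0])
    # G = L L^T + lam*I  (2x2 Gram matrix)
    g = [[sum(L[i][k] * L[j][k] for k in range(n_src)) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    ginv = [[ g[1][1] / det, -g[0][1] / det],
            [-g[1][0] / det,  g[0][0] / det]]
    w = [sum(ginv[i][j] * y[j] for j in range(2)) for i in range(2)]  # G^-1 y
    return [sum(L[i][k] * w[i] for i in range(2)) for k in range(n_src)]  # L^T w

# Hypothetical leadfield (2 sensors x 3 sources) and measurement:
L = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
y = [2.0, 1.0]
s = mne_estimate(L, y, lam=0.0)
print([round(v, 3) for v in s])  # [1.0, 0.0, 1.0]
```

The head models discussed above enter precisely through L: an age-appropriate model changes the leadfield, and with it the reconstructed cortical generators.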
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Generation and Purification of Human INO80 Chromatin Remodeling Complexes and Subcomplexes
Institutions: Stowers Institute for Medical Research, Kansas University Medical Center.
INO80 chromatin remodeling complexes regulate nucleosome dynamics and DNA accessibility by catalyzing ATP-dependent nucleosome remodeling. Human INO80 complexes consist of 14 protein subunits including Ino80, a SNF2-like ATPase, which serves both as the catalytic subunit and as the scaffold for assembly of the complexes. Functions of the other subunits and the mechanisms by which they contribute to the INO80 complex's chromatin remodeling activity remain poorly understood, in part due to the challenge of generating INO80 subassemblies in human cells or heterologous expression systems. This JoVE protocol describes a procedure that allows purification of human INO80 chromatin remodeling subcomplexes that lack a subunit or a subset of subunits. N-terminally FLAG epitope-tagged Ino80 cDNAs are stably introduced into human embryonic kidney (HEK) 293 cell lines using Flp-mediated recombination. In the event that a subset of subunits of the INO80 complex is to be deleted, one expresses instead mutant Ino80 proteins that lack the platform needed for assembly of those subunits. In the event an individual subunit is to be depleted, one transfects siRNAs targeting this subunit into an HEK 293 cell line stably expressing FLAG-tagged Ino80 ATPase. Nuclear extracts are prepared, and FLAG immunoprecipitation is performed to enrich protein fractions containing Ino80 derivatives. The compositions of purified INO80 subcomplexes can then be analyzed using methods such as immunoblotting, silver staining, and mass spectrometry. The INO80 complexes and subcomplexes generated according to this protocol can be further analyzed using various biochemical assays, which are described in the accompanying JoVE protocol. The methods described here can be adapted for studies of the structural and functional properties of any mammalian multi-subunit chromatin remodeling and modifying complexes.
Biochemistry, Issue 92, chromatin remodeling, INO80, SNF2 family ATPase, structure-function, enzyme purification
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca²⁺ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Long-term Behavioral Tracking of Freely Swimming Weakly Electric Fish
Institutions: University of Ottawa.
Long-term behavioral tracking can capture and quantify natural animal behaviors, including those occurring infrequently. Behaviors such as exploration and social interactions can be best studied by observing unrestrained, freely behaving animals. Weakly electric fish (WEF) display readily observable exploratory and social behaviors by emitting electric organ discharge (EOD). Here, we describe three effective techniques to synchronously measure the EOD, body position, and posture of a free-swimming WEF for an extended period of time. First, we describe the construction of an experimental tank inside of an isolation chamber designed to block external sources of sensory stimuli such as light, sound, and vibration. The aquarium was partitioned to accommodate four test specimens, and automated gates remotely control the animals' access to the central arena. Second, we describe a precise and reliable real-time EOD timing measurement method from freely swimming WEF. Signal distortions caused by the animal's body movements are corrected by spatial averaging and temporal processing stages. Third, we describe an underwater near-infrared imaging setup to observe unperturbed nocturnal animal behaviors. Infrared light pulses were used to synchronize the timing between the video and the physiological signal over a long recording duration. Our automated tracking software measures the animal's body position and posture reliably in an aquatic scene. In combination, these techniques enable long term observation of spontaneous behavior of freely swimming weakly electric fish in a reliable and precise manner. We believe our method can be similarly applied to the study of other aquatic animals by relating their physiological signals with exploratory or social behaviors.
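At its core, the real-time EOD timing measurement reduces to detecting zero crossings of the (spatially averaged) electrode signal and refining each crossing time by interpolation. A minimal sketch with a synthetic 500 Hz signal (the real system adds the spatial-averaging and distortion-correction stages described above):

```python
# Sketch of EOD timing measurement: detect upward zero crossings of a
# sampled signal and refine each crossing time by linear interpolation.
# The "EOD" here is a synthetic sine wave.

import math

def zero_crossing_times(samples, dt):
    """Times (s) of upward zero crossings, interpolated between samples."""
    times = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0.0 <= b:
            frac = -a / (b - a)  # fraction of the interval past sample i-1
            times.append((i - 1 + frac) * dt)
    return times

# Synthetic 500 Hz "EOD" sampled at 20 kHz for 10 ms:
fs, f_eod = 20000.0, 500.0
dt = 1.0 / fs
samples = [math.sin(2.0 * math.pi * f_eod * i * dt) for i in range(200)]

crossings = zero_crossing_times(samples, dt)
periods = [b - a for a, b in zip(crossings, crossings[1:])]
print(round(1.0 / (sum(periods) / len(periods))))  # 500 (estimated EOD frequency, Hz)
```

Interpolating between samples is what lets the timing precision exceed the raw sampling interval, which matters when tracking the small EOD frequency modulations these fish produce.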
Neuroscience, Issue 85, animal tracking, weakly electric fish, electric organ discharge, underwater infrared imaging, automated image tracking, sensory isolation chamber, exploratory behavior
The Cell-based L-Glutathione Protection Assays to Study Endocytosis and Recycling of Plasma Membrane Proteins
Institutions: Children's Hospital of Pittsburgh of UPMC, University of Pittsburgh School of Medicine.
Membrane trafficking involves transport of proteins from the plasma membrane to the cell interior (i.e., endocytosis) followed by trafficking to lysosomes for degradation or to the plasma membrane for recycling. The cell-based L-glutathione protection assays can be used to study endocytosis and recycling of protein receptors, channels, transporters, and adhesion molecules localized at the cell surface. The endocytic assay requires labeling of cell surface proteins with a cell-membrane-impermeable biotin containing a disulfide bond and an N-hydroxysuccinimide (NHS) ester at 4 °C - a temperature at which membrane trafficking does not occur. Endocytosis of biotinylated plasma membrane proteins is induced by incubation at 37 °C. Next, the temperature is decreased again to 4 °C to stop endocytic trafficking, and the disulfide bond in biotin covalently attached to proteins that have remained at the plasma membrane is reduced with L-glutathione. At this point, only proteins that were endocytosed remain protected from L-glutathione and thus remain biotinylated. After cell lysis, biotinylated proteins are isolated with streptavidin agarose, eluted from agarose, and the biotinylated protein of interest is detected by western blotting. During the recycling assay, after biotinylation cells are incubated at 37 °C to load endocytic vesicles with biotinylated proteins, and the disulfide bond in biotin covalently attached to proteins remaining at the plasma membrane is reduced with L-glutathione at 4 °C as in the endocytic assay. Next, cells are incubated again at 37 °C to allow biotinylated proteins from endocytic vesicles to recycle to the plasma membrane. Cells are then incubated at 4 °C, and the disulfide bond in biotin attached to proteins that recycled to the plasma membrane is reduced with L-glutathione. The biotinylated proteins protected from L-glutathione are those that did not recycle to the plasma membrane.
Basic Protocol, Issue 82, Endocytosis, recycling, plasma membrane, cell surface, EZLink, Sulfo-NHS-SS-Biotin, L-Glutathione, GSH, thiol group, disulfide bond, epithelial cells, cell polarization
Pre-clinical Evaluation of Tyrosine Kinase Inhibitors for Treatment of Acute Leukemia
Institutions: University of Colorado Anschutz Medical Campus, University Hospital of Essen.
Receptor tyrosine kinases have been implicated in the development and progression of many cancers, including both leukemia and solid tumors, and are attractive druggable therapeutic targets. Here we describe an efficient four-step strategy for pre-clinical evaluation of tyrosine kinase inhibitors (TKIs) in the treatment of acute leukemia. Initially, western blot analysis is used to confirm target inhibition in cultured leukemia cells. Functional activity is then evaluated using clonogenic assays in methylcellulose or soft agar cultures. Experimental compounds that demonstrate activity in cell culture assays are evaluated in vivo using NOD-SCID-gamma (NSG) mice transplanted orthotopically with human leukemia cell lines. Initial in vivo pharmacodynamic studies evaluate target inhibition in leukemic blasts isolated from the bone marrow. This approach is used to determine the dose and schedule of administration required for effective target inhibition. Subsequent studies evaluate the efficacy of the TKIs in vivo using luciferase-expressing leukemia cells, thereby allowing for non-invasive bioluminescent monitoring of leukemia burden and assessment of therapeutic response using an in vivo bioluminescence imaging system. This strategy has been effective for evaluation of TKIs in vitro and in vivo and can be applied for identification of molecularly targeted agents with therapeutic potential or for direct comparison and prioritization of multiple compounds.
Medicine, Issue 79, Leukemia, Receptor Protein-Tyrosine Kinases, Molecular Targeted Therapy, Therapeutics, novel small molecule inhibitor, receptor tyrosine kinase, leukemia
Estimating Virus Production Rates in Aquatic Systems
Institutions: University of Tennessee.
Viruses are pervasive components of marine and freshwater systems and are known to be significant agents of microbial mortality. Developing quantitative estimates of this process is critical, as it allows us to build better models of microbial community structure and function and to advance our understanding of how viruses alter aquatic biogeochemical cycles. The virus reduction technique allows researchers to estimate the rate at which virus particles are released from the endemic microbial community. In brief, the abundance of free (extracellular) viruses is reduced in a sample while the microbial community is maintained at near-ambient concentration. The microbial community is then incubated in the absence of free viruses, and the rate at which viruses reoccur in the sample (through the lysis of already infected members of the community) can be quantified by epifluorescence microscopy or, in the case of specific viruses, quantitative PCR. These rates can then be used to estimate the rate of microbial mortality due to virus-mediated cell lysis.
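The rate estimate itself is straightforward arithmetic: regress viral abundance against incubation time to get a production rate, then convert to a cell-lysis (mortality) rate by dividing by an assumed burst size. A sketch with hypothetical counts:

```python
# Sketch of the arithmetic behind a virus-reduction assay: regress viral
# abundance vs. time, then convert to a lysis rate with an assumed burst
# size. All values are hypothetical.

def slope(xs, ys):
    """Ordinary least-squares slope of ys vs. xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Epifluorescence counts (viruses per ml) after free-virus reduction:
hours = [0, 2, 4, 6, 8]
viruses_per_ml = [1.0e6, 1.5e6, 2.1e6, 2.4e6, 3.0e6]

production = slope(hours, viruses_per_ml)  # viruses ml^-1 h^-1
burst_size = 25.0                          # assumed viruses per lysed cell
lysis_rate = production / burst_size       # cells ml^-1 h^-1
print(round(production), round(lysis_rate))  # 245000 9800
```

The burst size is the weakest link in this conversion; it varies widely among systems and should be measured or bracketed rather than assumed where possible.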
Infectious Diseases, Issue 43, Viruses, seawater, lakes, viral lysis, marine microbiology, freshwater microbiology, epifluorescence microscopy
Linking Predation Risk, Herbivore Physiological Stress and Microbial Decomposition of Plant Litter
Institutions: Yale University, Virginia Tech, The Hebrew University of Jerusalem.
The quantity and quality of detritus entering the soil determine the rate of decomposition by microbial communities, as well as rates of nitrogen (N) recycling and carbon (C) sequestration1,2
. Plant litter comprises the majority of detritus3
, and so it is assumed that decomposition is only marginally influenced by biomass inputs from animals such as herbivores and carnivores4,5
. However, carnivores may influence microbial decomposition of plant litter via a chain of interactions in which predation risk alters the physiology of their herbivore prey that in turn alters soil microbial functioning when the herbivore carcasses are decomposed6
. A physiological stress response by herbivores to the risk of predation can change the C:N elemental composition of herbivore biomass7,8,9
because stress from predation risk increases herbivore basal energy demands, which in nutrient-limited systems forces herbivores to shift their consumption from N-rich resources, needed to support growth and reproduction, to C-rich carbohydrate resources that support heightened metabolism6
. Herbivores have limited ability to store excess nutrients, so stressed herbivores excrete N as they increase carbohydrate-C consumption7
. Ultimately, prey stressed by predation risk increase their body C:N ratio7,10
, making them poorer quality resources for the soil microbial pool likely due to lower availability of labile N for microbial enzyme production6
Thus, decomposition of carcasses of stressed herbivores has a priming effect on the functioning of microbial communities that decreases the subsequent ability of microbes to decompose plant litter6,10,11.
We present the methodology to evaluate linkages between predation risk and litter decomposition by soil microbes. We describe how to: induce stress in herbivores from predation risk; measure those stress responses; and measure the consequences for microbial decomposition. We use insights from a model grassland ecosystem comprising the hunting spider predator (Pisaurina mira
), a dominant grasshopper herbivore (Melanoplus femurrubrum
), and a variety of grass and forb plants9.
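The C:N shift described above is ultimately a stoichiometric quantity. A minimal sketch of the standard conversion from elemental mass fractions to a molar C:N ratio (the percentages in the example are illustrative, not data from the study):

```python
# Molar C:N ratio from elemental mass fractions, the standard
# stoichiometric conversion used in ecological stoichiometry.
C_MOLAR_MASS = 12.011  # g/mol
N_MOLAR_MASS = 14.007  # g/mol

def molar_cn_ratio(pct_c, pct_n):
    """Molar C:N ratio from %C and %N of dry biomass."""
    return (pct_c / C_MOLAR_MASS) / (pct_n / N_MOLAR_MASS)
```

A stressed herbivore that excretes N while consuming more carbohydrate-C would show a higher value of this ratio than an unstressed one.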
Environmental Sciences, Issue 73, Microbiology, Plant Biology, Entomology, Organisms, Investigative Techniques, Biological Phenomena, Chemical Phenomena, Metabolic Phenomena, Microbiological Phenomena, Earth Resources and Remote Sensing, Life Sciences (General), Litter Decomposition, Ecological Stoichiometry, Physiological Stress and Ecosystem Function, Predation Risk, Soil Respiration, Carbon Sequestration, Soil Science, respiration, spider, grasshoper, model system
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has direct influence on disease transmission1-3
. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data4
Modern technologies such as GPS and mobile communications have made the automatic collection of trajectory data possible. The data collected, however, are not ideal for modeling human space-time activities, being limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5
involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius.
Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6
and density volume rendering7
. We also include a couple of other exploratory data analyses (EDA) and visualizations tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
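As one concrete example of the segmentation step, a simple speed threshold can split a pre-processed track into stop and move episodes. This is a simplified stand-in for the interactive and automatic segmentation modules described above; the track format, threshold value, and function names are illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def segment_by_speed(track, speed_threshold_ms=0.5):
    """Split a track [(t_s, lat, lon), ...] into ('stop'|'move', points)
    segments; consecutive segments share their boundary point."""
    segments = []
    current, label = [track[0]], None
    for prev, cur in zip(track, track[1:]):
        dt = cur[0] - prev[0]
        dist = haversine_m(prev[1], prev[2], cur[1], cur[2])
        speed = dist / dt if dt > 0 else 0.0
        state = 'move' if speed >= speed_threshold_ms else 'stop'
        if label is None or state == label:
            label = state
            current.append(cur)
        else:
            segments.append((label, current))
            current, label = [prev, cur], state
    segments.append((label, current))
    return segments
```

A kernel-density surface of the resulting stop points would then correspond to the hot-spot mapping step described above.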
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ transport by Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as the expression system, we determine the amount of Rb+ transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed, for example, quantitative determination of the 3Na+ transport stoichiometry of the Na+,K+-ATPase and enabled, for the first time, investigation of the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but may work equally well to address the activity of heavy or transition metal transporters, or the uptake of chemical elements by endocytotic processes.
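Combining the two measurements amounts to dividing the AAS-derived Rb+ uptake by the moles of elementary charge obtained from the time-integrated pump current. A minimal sketch (the function name and the example numbers are illustrative, not measured values):

```python
# Combine single-oocyte Rb+ uptake (from AAS) with the charge moved by
# the pump (time-integrated pump current from TEVC) to estimate how
# many Rb+ ions are transported per net elementary charge translocated.

F = 96485.332  # Faraday constant, C/mol

def ions_per_charge(rb_uptake_mol, pump_current_a, duration_s):
    """Rb+ ions transported per mole of net elementary charge."""
    charge_mol = pump_current_a * duration_s / F  # mol of charges moved
    return rb_uptake_mol / charge_mol
```

For an electrogenic pump translocating one net charge per cycle, a ratio of 2 Rb+ per charge is consistent with two K+-congener ions imported per cycle.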
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo
protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo
protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
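The rank-ordered output of the sequence-selection stage can be illustrated with a deliberately toy scoring function. The real workbench minimizes a physics-based potential energy over the structural template; the per-residue energy table below is entirely made up for illustration:

```python
# Toy stand-in for the sequence-selection stage: score candidate
# sequences with a fictitious per-residue energy table and rank them,
# mimicking the rank-ordered list the workbench returns (lower energy
# ranks higher). Real scores come from a pairwise potential over the
# structural template, not from independent residue contributions.
TOY_ENERGY = {'A': -1.0, 'L': -2.0, 'E': -1.5, 'K': -0.5, 'G': 0.0}

def sequence_energy(seq):
    """Sum of toy per-residue energies for a candidate sequence."""
    return sum(TOY_ENERGY[res] for res in seq)

def rank_sequences(candidates):
    """Return candidates sorted from lowest (best) to highest energy."""
    return sorted(candidates, key=sequence_energy)
```

Each ranked sequence would then proceed to the fold-specificity and binding-affinity stages described above.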
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Metabolic Labeling of Leucine Rich Repeat Kinases 1 and 2 with Radioactive Phosphate
Institutions: KU Leuven and Leuven Institute for Neuroscience and Disease (LIND).
Leucine rich repeat kinases 1 and 2 (LRRK1 and LRRK2) are paralogs which share a similar domain organization, including a serine-threonine kinase domain, a Ras of complex proteins domain (ROC), a C-terminal of ROC domain (COR), and leucine-rich and ankyrin-like repeats at the N-terminus. The precise cellular roles of LRRK1 and LRRK2 have yet to be elucidated, however LRRK1 has been implicated in tyrosine kinase receptor signaling1,2
, while LRRK2 is implicated in the pathogenesis of Parkinson's disease3,4
. In this report, we present a protocol to label the LRRK1 and LRRK2 proteins in cells with 32
P orthophosphate, thereby providing a means to measure the overall phosphorylation levels of these 2 proteins in cells. In brief, affinity tagged LRRK proteins are expressed in HEK293T cells which are exposed to medium containing 32
P-orthophosphate. The 32
P-orthophosphate is assimilated by the cells after only a few hours of incubation and all molecules in the cell containing phosphates are thereby radioactively labeled. Via the affinity tag (3xflag) the LRRK proteins are isolated from other cellular components by immunoprecipitation. Immunoprecipitates are then separated via SDS-PAGE, blotted to PVDF membranes and analysis of the incorporated phosphates is performed by autoradiography (32
P signal) and western detection (protein signal) of the proteins on the blots. The protocol can readily be adapted to monitor phosphorylation of any other protein that can be expressed in cells and isolated by immunoprecipitation.
Cellular Biology, Issue 79, biology (general), biochemistry, bioengineering (general), LRRK1, LRRK2, metabolic labeling, 32P orthophosphate, immunoprecipitation, autoradiography
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We demonstrate the use of PAFP and PSFP expression to image two protein species in fixed cells.
Extension of the technique to living cells is also described.
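The ~10-30 nm localization precision quoted above is commonly estimated with the Thompson-Larson-Webb formula from the localization-microscopy literature; this specific formula is a standard estimate, not something prescribed by this protocol, and the parameter values below are illustrative:

```python
import math

def localization_precision_nm(s_nm, n_photons, pixel_nm, bg_photons):
    """Thompson-Larson-Webb estimate of 2D localization precision.

    s_nm:       standard deviation of the microscope PSF (nm)
    n_photons:  photons collected from the molecule
    pixel_nm:   effective pixel size in the sample plane (nm)
    bg_photons: background noise per pixel (photons, std dev)
    """
    var = (s_nm ** 2 / n_photons                       # photon shot noise
           + pixel_nm ** 2 / (12 * n_photons)          # pixelation noise
           + 8 * math.pi * s_nm ** 4 * bg_photons ** 2
             / (pixel_nm ** 2 * n_photons ** 2))       # background noise
    return math.sqrt(var)
```

With a 125 nm PSF, 250 detected photons, 100 nm pixels, and a background of 2 photons per pixel, the estimate falls near 10 nm, consistent with the precision range stated above.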
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Chemotactic Response of Marine Micro-Organisms to Micro-Scale Nutrient Layers
Institutions: MIT - Massachusetts Institute of Technology.
The degree to which planktonic microbes can exploit microscale resource patches will have considerable implications for oceanic trophodynamics and biogeochemical flux. However, to take advantage of nutrient patches in the ocean, swimming microbes must overcome the influences of physical forces including molecular diffusion and turbulent shear, which will limit the availability of patches and the ability of bacteria to locate them. Until recently, methodological limitations have precluded direct examinations of microbial behaviour within patchy habitats and realistic small-scale flow conditions. Hence, much of our current knowledge regarding microbial behaviour in the ocean has been procured from theoretical predictions. To obtain new information on microbial foraging behaviour in the ocean we have applied soft lithographic fabrication techniques to develop 2 microfluidic devices, which we have used to create (i) microscale nutrient patches with dimensions and diffusive characteristics relevant to oceanic processes and (ii) microscale vortices, with shear rates corresponding to those expected in the ocean. These microfluidic devices have permitted a first direct examination of microbial swimming and chemotactic behaviour within a heterogeneous and dynamic seascape. The combined use of epifluorescence and phase contrast microscopy allow direct examinations of the physical dimensions and diffusive characteristics of nutrient patches, while observing the population-level aggregative response, in addition to the swimming behaviour of individual microbes. These experiments have revealed that some species of phytoplankton, heterotrophic bacteria and phagotrophic protists are adept at locating and exploiting diffusing microscale resource patches within very short time frames. We have also shown that, up to moderate shear rates, marine bacteria are able to fight the flow and swim through their environment of their own accord.
However, beyond a threshold high shear level, bacteria are aligned in the shear flow and are less capable of swimming without disturbance from the flow. Microfluidics represents a novel and inexpensive approach for studying aquatic microbial ecology, and due to its suitability for accurately creating realistic flow fields and substrate gradients at the microscale, is ideally applicable to examinations of microbial behaviour at the smallest scales of interaction. We therefore suggest that microfluidics represents a valuable tool for obtaining a better understanding of the ecology of microorganisms in the ocean.
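The "very short time frames" within which patches must be exploited can be bounded with a simple diffusive scaling argument, t ~ L²/D. A sketch (the default diffusivity is a typical small-molecule value in water, used here purely for illustration):

```python
# Order-of-magnitude lifetime of a microscale nutrient patch from its
# characteristic size L and the solute diffusivity D, via t ~ L^2 / D.
# The default D is a typical small-molecule diffusivity in water
# (~1e-9 m^2/s), chosen for illustration.

def patch_diffusion_time_s(length_m, diffusivity_m2_s=1e-9):
    """Diffusive smearing timescale (s) for a patch of size length_m."""
    return length_m ** 2 / diffusivity_m2_s
```

A 100 µm patch smears out on a timescale of order 10 s, which is why chemotactic response speed matters at these scales.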
Microbiology, Issue 4, microbial community, chemotaxis, microfluidics
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e.
, lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.
) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization.
Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25
. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7
with a multiobjective evolutionary algorithm SPEA226
, and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
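At the core of such a multiobjective search is Pareto dominance between candidate practice allocations scored on cost and pollutant load. A stripped-down sketch of the nondominated filter (SPEA2 itself adds strength-based fitness and archiving, and the objective values would come from SWAT runs, not the toy tuples used here):

```python
# Nondominated filtering for two minimization objectives: total cost of
# conservation practices and simulated pollutant load. Each candidate
# is an objective vector (cost, load); in the real workflow the load
# is produced by a SWAT simulation of the candidate allocation.

def dominates(a, b):
    """True if objective vector a dominates b (both minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(population):
    """Return the nondominated members of [(cost, load), ...]."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]
```

The frontier returned here is exactly the cost-versus-water-quality tradeoff curve that the program presents to watershed managers.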
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Quantitatively Measuring In situ Flows using a Self-Contained Underwater Velocimetry Apparatus (SCUVA)
Institutions: Woods Hole Oceanographic Institution, Roger Williams University, Whitman Center, Providence College, California Institute of Technology.
The ability to directly measure velocity fields in a fluid environment is necessary to provide empirical data for studies in fields as diverse as oceanography, ecology, biology, and fluid mechanics. Field measurements introduce practical challenges such as environmental conditions, animal availability, and the need for field-compatible measurement techniques. To avoid these challenges, scientists typically use controlled laboratory environments to study animal-fluid interactions. However, it is reasonable to question whether one can extrapolate natural behavior (i.e., that which occurs in the field) from laboratory measurements. Therefore, in situ
quantitative flow measurements are needed to accurately describe animal swimming in their natural environment.
We designed a self-contained, portable device that operates independently of any connection to the surface and can provide quantitative measurements of the flow field surrounding an animal. This apparatus, a self-contained underwater velocimetry apparatus (SCUVA), can be operated by a single scuba diver at depths of up to 40 m. Due to the added complexity inherent in field conditions, additional considerations and preparation are required compared to laboratory measurements. These considerations include, but are not limited to, operator motion, predicting the position of swimming targets, available natural suspended particulate, and the orientation of SCUVA relative to the flow of interest. The following protocol is intended to address these common field challenges and to maximize measurement success.
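The velocimetry step itself rests on cross-correlating interrogation windows between successive frames. A deliberately minimal integer-pixel sketch (real DPIV implementations use FFT-based correlation over many windows with sub-pixel peak fitting; the array format and function name here are illustrative):

```python
def best_shift(frame_a, frame_b, max_shift=2):
    """Integer-pixel displacement (dy, dx) maximizing the direct
    cross-correlation between two small grayscale frames given as
    lists of lists of intensities."""
    h, w = len(frame_a), len(frame_a[0])
    best, best_score = (0, 0), float('-inf')
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        score += frame_a[y][x] * frame_b[y2][x2]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Dividing the displacement by the inter-frame interval and the pixel-to-meter scale yields the local flow velocity; this is why natural suspended particulate and stable camera orientation matter so much in the field.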
Bioengineering, Issue 56, In situ DPIV, SCUVA, animal flow measurements, zooplankton, propulsion
Seawater Sampling and Collection
Institutions: University of British Columbia - UBC.
This video documents methods for collecting coastal marine water samples and processing them for various downstream applications including biomass concentration, nucleic acid purification, cell abundance, nutrient and trace gas analyses. For today's demonstration, samples were collected from the deck of the HMS John Strickland operating in Saanich Inlet. An A-frame derrick, with a multi-purpose winch and cable system, is used in combination with Niskin or Go-Flo water sampling bottles. Conductivity, Temperature, and Depth (CTD) sensors are also used to sample the underlying water mass. To minimize outgassing, trace gas samples are collected first. Then, nutrients, water chemistry, and cell counts are determined. Finally, waters are collected for biomass filtration. The set-up and collection time for a single cast is ~1.5 hours at a maximum depth of 215 meters. Therefore, a total of 6 hours is generally needed to complete the collection series described here.
Molecular Biology, Issue 28, microbial biomass, nucleic acids, nutrients, trace gas, ammonia, sulfide, seawater, fjord, hypoxic, Saanich Inlet