JoVE Visualize
Related JoVE Video
Pubmed Article
Quantifying uncertainties in N(2)O emission due to N fertilizer application in cultivated areas.
Nitrous oxide (N(2)O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO(2). In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N(2)O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimate by fitting 13 different models to a published dataset including 985 N(2)O measurements. These models were characterized by (i) the presence or absence of the explanatory variable "applied N", (ii) the function relating N(2)O emission to applied N (exponential or linear function), (iii) fixed or random background N(2)O emission (i.e. emission in the absence of N application), and (iv) fixed or random applied N effect. We calculated ranges of uncertainty in N(2)O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha(-1). Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced.
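To illustrate why the emission factor stops being constant under an exponential model, the sketch below computes the fertilizer-induced emission factor for both model forms. The parameter values are hypothetical placeholders chosen only to reproduce the qualitative behavior described above, not the values fitted in the study.

```python
import numpy as np

# Hypothetical parameters for illustration only (not the fitted values from the study).
B = 1.0            # background N2O emission at zero applied N, kg N2O-N/ha/yr
k = 0.0059         # curvature of the exponential response, per kg N/ha
ef_linear = 0.01   # IPCC Tier 1 default: 1% of applied N emitted as N2O-N

for n_applied in [50, 100, 160, 250]:   # applied N, kg N/ha
    # Exponential model: emission = B * exp(k * N); the fertilizer-induced part is the excess over background.
    induced = B * (np.exp(k * n_applied) - 1.0)
    ef_exp = induced / n_applied         # emission factor implied by the exponential model
    print(f"N = {n_applied:3d} kg/ha   EF_exponential = {ef_exp:.2%}   EF_linear = {ef_linear:.1%}")
```

With these placeholder parameters, the exponential emission factor stays below 1% for applications under roughly 160 kg N ha(-1) and rises above it for heavier applications, which is the qualitative pattern reported in the abstract.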
Authors: Sarah M. Collier, Matthew D. Ruark, Lawrence G. Oates, William E. Jokela, Curtis J. Dell.
Published: 08-03-2014
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. Change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based flux of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable to large-scale GHG emissions studies.
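A minimal sketch of the flux calculation described above: regress headspace concentration against time, then scale the slope by chamber geometry and the ideal gas law. The chamber dimensions, temperature, and concentrations below are hypothetical placeholders, not values from the protocol.

```python
import numpy as np

# Hypothetical chamber geometry and GC results (placeholders, not protocol values).
volume_m3 = 0.012     # chamber headspace volume, m^3
area_m2 = 0.053       # soil surface area enclosed by the chamber, m^2
t_min = np.array([0.0, 10.0, 20.0, 30.0])            # sampling times, min
n2o_ppb = np.array([330.0, 410.0, 485.0, 560.0])      # N2O mole fraction in headspace, ppb

# Rate of concentration change (ppb/min) from an ordinary least-squares fit.
slope_ppb_per_min, _intercept = np.polyfit(t_min, n2o_ppb, 1)

# Moles of air in the chamber from the ideal gas law (PV = nRT).
P, R, T = 101325.0, 8.314, 293.15        # Pa, J mol^-1 K^-1, K
mol_air = P * volume_m3 / (R * T)

# ppb -> mole fraction (1e-9), scale by moles of air and chamber footprint, then mol -> nmol (1e9).
flux_nmol_m2_min = slope_ppb_per_min * 1e-9 * mol_air / area_m2 * 1e9
print(f"N2O flux = {flux_nmol_m2_min * 60.0:.0f} nmol m^-2 hr^-1")
```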
25 Related JoVE Articles!
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Authors: Mackenzie J. Denyes, Michèle A. Parisien, Allison Rutter, Barbara A. Zeeb.
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available their Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for physical and chemical characteristics for biochar. Six biochars made from three different feedstocks at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury as well as nutrients (phosphorus, nitrite and nitrate and ammonium as nitrogen). The protocol also includes the biological testing procedures, earthworm avoidance and germination assays. Based on the quality assurance/quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined to be adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among the biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
Measurement and Analysis of Atomic Hydrogen and Diatomic Molecular AlO, C2, CN, and TiO Spectra Following Laser-induced Optical Breakdown
Authors: Christian G. Parigger, Alexander C. Woods, Michael J. Witte, Lauren D. Swafford, David M. Surmick.
Institutions: University of Tennessee Space Institute.
In this work, we present time-resolved measurements of atomic and diatomic spectra following laser-induced optical breakdown. A typical LIBS arrangement is used. Here we operate a Nd:YAG laser at a repetition rate of 10 Hz at the fundamental wavelength of 1,064 nm. The 14 ns pulses, with an energy of 190 mJ/pulse, are focused to a 50 µm spot size to generate a plasma from optical breakdown or laser ablation in air. The microplasma is imaged onto the entrance slit of a 0.6 m spectrometer, and spectra are recorded using a 1,800 grooves/mm grating and either an intensified linear diode array with an optical multichannel analyzer (OMA) or an ICCD. Of interest are Stark-broadened atomic lines of the hydrogen Balmer series to infer electron density. We also elaborate on temperature measurements from diatomic emission spectra of aluminum monoxide (AlO), carbon (C2), cyanogen (CN), and titanium monoxide (TiO). The experimental procedures include wavelength and sensitivity calibrations. Analysis of the recorded molecular spectra is accomplished by the fitting of data with tabulated line strengths. Furthermore, Monte Carlo-type simulations are performed to estimate the error margins. Time-resolved measurements are essential for the transient plasma commonly encountered in LIBS.
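The Monte Carlo error estimate mentioned above can be outlined as follows: refit the spectrum many times after perturbing it at the measured noise level, and take the spread of the fitted parameters as the uncertainty. This is only an illustrative sketch; a single synthetic Gaussian line stands in for the actual fits against tabulated diatomic line strengths.

```python
import numpy as np
from scipy.optimize import curve_fit

def line_profile(wl, amplitude, center, width, offset):
    """Stand-in model: one Gaussian line; the real analysis fits tabulated line strengths."""
    return amplitude * np.exp(-0.5 * ((wl - center) / width) ** 2) + offset

rng = np.random.default_rng(0)
wl = np.linspace(484.0, 488.0, 200)                    # wavelength axis, nm (synthetic)
measured = line_profile(wl, 1000.0, 486.1, 0.35, 50.0) + rng.normal(0.0, 20.0, wl.size)
noise_sigma = 20.0                                     # noise level estimated from a line-free region

popt, _ = curve_fit(line_profile, wl, measured, p0=(800.0, 486.0, 0.3, 40.0))

# Monte Carlo: add noise at the measured level to the best-fit model, refit, collect the spread.
samples = []
for _ in range(500):
    resampled = line_profile(wl, *popt) + rng.normal(0.0, noise_sigma, wl.size)
    p, _ = curve_fit(line_profile, wl, resampled, p0=popt)
    samples.append(p)
param_errors = np.std(samples, axis=0)
print(f"fitted line width = {popt[2]:.3f} +/- {param_errors[2]:.3f} nm")
```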
Physics, Issue 84, Laser Induced Breakdown Spectroscopy, Laser Ablation, Molecular Spectroscopy, Atomic Spectroscopy, Plasma Diagnostics
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky.
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells, such as pericytes, were further eliminated by plating the microvessel fragments in puromycin-containing medium. The endothelial cells were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of brain endothelial monolayers, indicating the tightness of the TJs, reached 300 ohm·cm2 on average. The endothelial permeability coefficient (Pe) for Lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 x 10-3 cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
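For context, the permeability coefficient quoted above is commonly obtained from the slopes of cleared volume versus time measured with and without cells on the filter, correcting for the empty filter in series. The numbers below are hypothetical, and the exact equations in the protocol may differ in detail.

```python
import numpy as np

# Hypothetical cleared-volume data (µl) versus time (min) for Lucifer yellow.
t = np.array([15.0, 30.0, 45.0, 60.0])
cleared_with_cells = np.array([3.0, 6.1, 9.0, 12.2])      # coated filter + RBEC monolayer
cleared_filter_only = np.array([20.0, 41.0, 60.0, 81.0])  # coated filter without cells

ps_total, _ = np.polyfit(t, cleared_with_cells, 1)    # µl/min, filter plus cells
ps_filter, _ = np.polyfit(t, cleared_filter_only, 1)  # µl/min, filter alone

# Treat filter and endothelium as resistances in series: 1/PSe = 1/PStotal - 1/PSfilter.
ps_endothelium = 1.0 / (1.0 / ps_total - 1.0 / ps_filter)   # µl/min
area_cm2 = 1.12                                             # filter growth area (e.g. a 12-well insert)
pe_cm_per_min = ps_endothelium * 1e-3 / area_cm2            # µl = 1e-3 cm^3, so the result is cm/min
print(f"Pe(LY) = {pe_cm_per_min * 1e3:.2f} x 10-3 cm/min")
```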
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER),
Characterization of Recombination Effects in a Liquid Ionization Chamber Used for the Dosimetry of a Radiosurgical Accelerator
Authors: Antoine Wagner, Frederik Crop, Thomas Lacornerie, Nick Reynaert.
Institutions: Centre Oscar Lambret.
Most modern radiation therapy devices allow the use of very small fields, either through beamlets in Intensity-Modulated Radiation Therapy (IMRT) or via stereotactic radiotherapy where positioning accuracy allows delivering very high doses per fraction in a small volume of the patient. Dosimetric measurements on medical accelerators are conventionally realized using air-filled ionization chambers. However, in small beams these are subject to nonnegligible perturbation effects. This study focuses on liquid ionization chambers, which offer advantages in terms of spatial resolution and low fluence perturbation. Ion recombination effects are investigated for the microLion detector (PTW) used with the Cyberknife system (Accuray). The method consists of performing a series of water tank measurements at different source-surface distances, and applying corrections to the liquid detector readings based on simultaneous gaseous detector measurements. This approach facilitates isolating the recombination effects arising from the high density of the liquid sensitive medium and obtaining correction factors to apply to the detector readings. The main difficulty resides in achieving a sufficient level of accuracy in the setup to be able to detect small changes in the chamber response.
Physics, Issue 87, Radiation therapy, dosimetry, small fields, Cyberknife, liquid ionization, recombination effects
A Simple and Rapid Protocol for Measuring Neutral Lipids in Algal Cells Using Fluorescence
Authors: Zachary J. Storms, Elliot Cameron, Hector de la Hoz Siegler, William C. McCaffrey.
Institutions: University of Alberta, University of Calgary.
Algae are considered excellent candidates for renewable fuel sources due to their natural lipid storage capabilities. Robust monitoring of algal fermentation processes and screening for new oil-rich strains requires a fast and reliable protocol for determination of intracellular lipid content. Current practices rely largely on gravimetric methods to determine oil content, techniques developed decades ago that are time-consuming and require large sample volumes. In this paper, Nile Red, a fluorescent dye that has been used to identify the presence of lipid bodies in numerous types of organisms, is incorporated into a simple, fast, and reliable protocol for measuring the neutral lipid content of Auxenochlorella protothecoides, a green alga. The method uses ethanol, a relatively mild solvent, to permeabilize the cell membrane before staining, and a 96-well microplate to increase sample capacity during fluorescence intensity measurements. It has been designed with the specific application of monitoring bioprocess performance. Previously dried samples or live samples from a growing culture can be used in the assay.
Chemistry, Issue 87, engineering (general), microbiology, bioengineering (general), Eukaryota Algae, Nile Red, Fluorescence, Oil Content, Oil Extraction, Oil Quantification, Neutral Lipids, Optical Microscope, biomass
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x the winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes, where they may contribute to eutrophication. A 1,000 m2 facility was constructed that consists of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volume over time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variation (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
A Protocol for Conducting Rainfall Simulation to Study Soil Runoff
Authors: Leonard C. Kibet, Louis S. Saporito, Arthur L. Allen, Eric B. May, Peter J. A. Kleinman, Fawzy M. Hashem, Ray B. Bryant.
Institutions: University of Maryland Eastern Shore, USDA - Agricultural Research Service.
Rainfall is a driving force for the transport of environmental contaminants from agricultural soils to surficial water bodies via surface runoff. The objective of this study was to characterize the effects of antecedent soil moisture content on the fate and transport of surface applied commercial urea, a common form of nitrogen (N) fertilizer, following a rainfall event that occurs within 24 hr after fertilizer application. Although urea is assumed to be readily hydrolyzed to ammonium and therefore not often available for transport, recent studies suggest that urea can be transported from agricultural soils to coastal waters where it is implicated in harmful algal blooms. A rainfall simulator was used to apply a consistent rate of uniform rainfall across packed soil boxes that had been prewetted to different soil moisture contents. By controlling rainfall and soil physical characteristics, the effects of antecedent soil moisture on urea loss were isolated. Wetter soils exhibited shorter time from rainfall initiation to runoff initiation, greater total volume of runoff, higher urea concentrations in runoff, and greater mass loadings of urea in runoff. These results also demonstrate the importance of controlling for antecedent soil moisture content in studies designed to isolate other variables, such as soil physical or chemical characteristics, slope, soil cover, management, or rainfall characteristics. Because rainfall simulators are designed to deliver raindrops of similar size and velocity as natural rainfall, studies conducted under a standardized protocol can yield valuable data that, in turn, can be used to develop models for predicting the fate and transport of pollutants in runoff.
Environmental Sciences, Issue 86, Agriculture, Water Pollution, Water Quality, Technology, Industry, and Agriculture, Rainfall Simulator, Artificial Rainfall, Runoff, Packed Soil Boxes, Nonpoint Source, Urea
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derivation of complex physiological properties of TATS membrane networks in living myocytes, with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
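As a rough, open-source illustration of the binarization and skeletonization step mentioned above (the image, smoothing, and threshold choices here are placeholders rather than the parameters of the published workflow):

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import skeletonize

def tats_skeleton(image):
    """Binarize a TATS fluorescence image and reduce it to a one-pixel-wide skeleton."""
    smoothed = gaussian(image, sigma=1.0)           # suppress shot noise before thresholding
    binary = smoothed > threshold_otsu(smoothed)    # global Otsu threshold (placeholder choice)
    return binary, skeletonize(binary)

# Synthetic stand-in image; a real confocal or STED frame would be loaded instead.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
img[::8, :] += 2.0                                  # fake "tubules" drawn as bright lines
binary, skeleton = tats_skeleton(img)
print("skeleton pixels (a simple proxy for total network length):", int(skeleton.sum()))
```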
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Luminescence Resonance Energy Transfer to Study Conformational Changes in Membrane Proteins Expressed in Mammalian Cells
Authors: Drew M. Dolino, Swarna S. Ramaswamy, Vasanthi Jayaraman.
Institutions: University of Texas Health Science Center at Houston.
Luminescence Resonance Energy Transfer, or LRET, is a powerful technique used to measure distances between two sites in proteins within the distance range of 10-100 Å. By measuring the distances under various ligated conditions, conformational changes of the protein can be easily assessed. With LRET, a lanthanide, most often chelated terbium, is used as the donor fluorophore, affording advantages such as a longer donor-only emission lifetime, the flexibility to use multiple acceptor fluorophores, and the opportunity to detect sensitized acceptor emission as an easy way to measure energy transfer without the risk of also detecting donor-only signal. Here, we describe a method to use LRET on membrane proteins expressed and assayed on the surface of intact mammalian cells. We introduce a protease cleavage site between the LRET fluorophore pair. After the original LRET signal has been obtained, cleavage at that site removes the specific LRET signal from the protein of interest, allowing us to quantitatively subtract the background signal that remains after cleavage. This method allows more physiologically relevant measurements to be made without the need for protein purification.
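For orientation, the distance information comes from the standard resonance energy transfer relations; a sketch with hypothetical lifetimes and a hypothetical Förster distance (R0) is:

```python
# Standard LRET/FRET relations; the lifetimes and R0 below are placeholders, not protocol values.
tau_donor_only = 1.5e-3    # s, terbium donor-only emission lifetime
tau_sensitized = 0.4e-3    # s, lifetime of the sensitized acceptor emission
r0_angstrom = 40.0         # Förster distance for this donor/acceptor pair, Å

# Transfer efficiency from the lifetime shortening (sensitized emission tracks the donor-with-acceptor decay).
efficiency = 1.0 - tau_sensitized / tau_donor_only

# Invert E = 1 / (1 + (r/R0)^6) to obtain the donor-acceptor distance.
distance = r0_angstrom * ((1.0 / efficiency) - 1.0) ** (1.0 / 6.0)
print(f"E = {efficiency:.2f}, r = {distance:.1f} Å")
```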
Bioengineering, Issue 91, LRET, FRET, Luminescence Resonance Energy Transfer, Fluorescence Resonance Energy Transfer, glutamate receptors, acid sensing ion channel, protein conformation, protein dynamics, fluorescence, protein-protein interactions
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Authors: Daniel E. Bradford, Katherine P. Magruder, Rachel A. Korhumel, John J. Curtin.
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) of the orbicularis oculi muscle and is elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues, whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of the distinct negative emotional states of fear and anxiety for use in research on psychopathology, substance use/abuse, and affective science more broadly. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion.
Behavior, Issue 91, Startle; electromyography; shock; addiction; uncertainty; fear; anxiety; humans; psychophysiology; translational
Use of Stopped-Flow Fluorescence and Labeled Nucleotides to Analyze the ATP Turnover Cycle of Kinesins
Authors: Jennifer T. Patel, Hannah R. Belsham, Alexandra J. Rathbone, Claire T. Friel.
Institutions: University of Nottingham.
The kinesin superfamily of microtubule-associated motor proteins shares a characteristic motor domain which both hydrolyses ATP and binds microtubules. Kinesins display differences across the superfamily both in ATP turnover and in microtubule interaction. These differences tailor specific kinesins to various functions such as cargo transport, microtubule sliding, microtubule depolymerization and microtubule stabilization. To understand the mechanism of action of a kinesin it is important to understand how the chemical cycle of ATP turnover is coupled to the mechanical cycle of microtubule interaction. To dissect the ATP turnover cycle, one approach is to utilize fluorescently labeled nucleotides to visualize individual steps in the cycle. Determining the kinetics of each nucleotide transition in the ATP turnover cycle allows the rate-limiting step or steps for the complete cycle to be identified. For a kinesin, it is important to know the rate-limiting step in the absence of microtubules, as this step is generally accelerated several thousand-fold when the kinesin interacts with microtubules. The cycle in the absence of microtubules is then compared to that in the presence of microtubules to fully understand a kinesin’s ATP turnover cycle. The kinetics of individual nucleotide transitions are generally too fast to observe by manually mixing reactants, particularly in the presence of microtubules. A rapid mixing device, such as a stopped-flow fluorimeter, which allows kinetics to be observed on timescales of as little as a few milliseconds, can be used to monitor such transitions. Here, we describe protocols in which rapid mixing of reagents by stopped-flow is used in conjunction with fluorescently labeled nucleotides to dissect the ATP turnover cycle of a kinesin.
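A hedged sketch of the analysis typically applied to each stopped-flow record: fit a single exponential to the fluorescence transient to obtain an observed rate constant, then relate k_obs to nucleotide concentration across experiments. All numbers here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, amplitude, k_obs, offset):
    """Fluorescence transient expected for a single-step nucleotide binding or release event."""
    return amplitude * np.exp(-k_obs * t) + offset

# Synthetic stopped-flow trace: a mant-nucleotide fluorescence decrease with added noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 0.5, 500)                                   # s
trace = single_exponential(t, 0.8, 12.0, 1.0) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(single_exponential, t, trace, p0=(0.5, 5.0, 1.0))
print(f"k_obs = {popt[1]:.1f} s^-1")

# For a bimolecular binding step, k_obs measured at several mant-nucleotide concentrations
# rises linearly: k_obs = k_on * [mantATP] + k_off, so a line fit gives the rate constants.
```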
Chemistry, Issue 92, Kinesin, ATP turnover, mantATP, mantADP, stopped-flow fluorescence, microtubules, enzyme kinetics, nucleotide
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Authors: Moneim Shamloul, Jason Trusa, Vadim Mett, Vidadi Yusibov.
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharides had no effect on the protein production level. Infiltrating plants at 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared to the laboratory strains LBA4404 and C58C1 and the wild-type Agrobacteria strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Fluorescence Lifetime Imaging of Molecular Rotors in Living Cells
Authors: Klaus Suhling, James A. Levitt, Pei-Hua Chung, Marina K. Kuimova, Gokhan Yahioglu.
Institutions: King's College London, Imperial College London, PhotoBiotics Ltd.
Diffusion is often an important rate-determining step in chemical reactions or biological processes and plays a role in a wide range of intracellular events. Viscosity is one of the key parameters affecting the diffusion of molecules and proteins, and changes in viscosity have been linked to disease and malfunction at the cellular level.1-3 While methods to measure the bulk viscosity are well developed, imaging microviscosity remains a challenge. Viscosity maps of microscopic objects, such as single cells, have until recently been hard to obtain. Mapping viscosity with fluorescence techniques is advantageous because, similar to other optical techniques, it is minimally invasive, non-destructive and can be applied to living cells and tissues. Fluorescent molecular rotors exhibit fluorescence lifetimes and quantum yields which are a function of the viscosity of their microenvironment.4,5 Intramolecular twisting or rotation leads to non-radiative decay from the excited state back to the ground state. A viscous environment slows this rotation or twisting, restricting access to this non-radiative decay pathway. This leads to an increase in the fluorescence quantum yield and the fluorescence lifetime. Fluorescence Lifetime Imaging (FLIM) of modified hydrophobic BODIPY dyes that act as fluorescent molecular rotors shows that the fluorescence lifetime of these probes is a function of the microviscosity of their environment.6-8 A logarithmic plot of the fluorescence lifetime versus the solvent viscosity yields a straight line that obeys the Förster-Hoffmann equation.9 This plot also serves as a calibration graph to convert fluorescence lifetime into viscosity. Following incubation of living cells with the modified BODIPY fluorescent molecular rotor, a punctate dye distribution is observed in the fluorescence images. The viscosity value obtained in the puncta in live cells is around 100 times higher than that of water and of cellular cytoplasm.6,7 Time-resolved fluorescence anisotropy measurements yield rotational correlation times in agreement with these large microviscosity values. Mapping the fluorescence lifetime is independent of the fluorescence intensity, and thus allows the separation of probe concentration and viscosity effects. In summary, we have developed a practical and versatile approach to map the microviscosity in cells based on FLIM of fluorescent molecular rotors.
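The calibration described above (fluorescence lifetime versus viscosity on log-log axes, per the Förster-Hoffmann relation) can be sketched as follows; the calibration points are invented, and real values depend on the dye and instrument.

```python
import numpy as np

# Hypothetical calibration: rotor lifetimes (ns) in solvent mixtures of known viscosity (cP).
viscosity_cp = np.array([28.0, 110.0, 350.0, 950.0])
lifetime_ns = np.array([0.70, 1.40, 2.60, 4.30])

# Förster-Hoffmann: tau = z * eta^x, i.e. log(tau) = log(z) + x * log(eta), a straight line in log-log.
x, log_z = np.polyfit(np.log10(viscosity_cp), np.log10(lifetime_ns), 1)

def lifetime_to_viscosity(tau_ns):
    """Invert the calibration line to convert a measured FLIM lifetime into a viscosity estimate."""
    return 10.0 ** ((np.log10(tau_ns) - log_z) / x)

print(f"exponent x = {x:.2f}")
print(f"a 3.5 ns lifetime maps to about {lifetime_to_viscosity(3.5):.0f} cP")
```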
Bioengineering, Issue 60, fluorescence, microscopy, FLIM, fluorescent molecular rotors
Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods
Authors: Ifat Levy, Lior Rosenberg Belmaker, Kirk Manson, Agnieszka Tymula, Paul W. Glimcher.
Institutions: Yale School of Medicine, New York University.
Most of the choices we make have uncertain consequences. In some cases, the probabilities for different possible outcomes are precisely known, a condition termed "risky". In other cases, the probabilities cannot be estimated, a condition described as "ambiguous". While most people are averse to both risk and ambiguity1,2, the degree of those aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method3 to assess the neural representation of the subjective values of risky and ambiguous options4. This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations. In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial, subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective values that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place, and thus the ambiguous options remain ambiguous and risk attitudes are stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it alone were the trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
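One common way to make the estimation step concrete is a parametric subjective-value model with a power utility and an ambiguity penalty, combined with a logistic (softmax) choice rule. The functional form and parameter values below are illustrative assumptions, not necessarily the exact model used in the article.

```python
import numpy as np

def subjective_value(amount, win_prob, ambiguity, alpha, beta):
    """SV of a lottery: power utility for money, winning probability discounted by the ambiguity level."""
    return (win_prob - beta * ambiguity / 2.0) * amount ** alpha

def p_choose_lottery(amount, win_prob, ambiguity, ref_amount, alpha, beta, temperature=1.0):
    """Softmax probability of choosing the lottery over a certain reference amount."""
    sv_lottery = subjective_value(amount, win_prob, ambiguity, alpha, beta)
    sv_reference = ref_amount ** alpha
    return 1.0 / (1.0 + np.exp(-(sv_lottery - sv_reference) / temperature))

# Hypothetical risk-averse, ambiguity-averse subject (alpha < 1, beta > 0).
alpha, beta = 0.7, 0.6
p_risky = p_choose_lottery(20.0, 0.5, 0.0, ref_amount=5.0, alpha=alpha, beta=beta)
p_ambiguous = p_choose_lottery(20.0, 0.5, 0.74, ref_amount=5.0, alpha=alpha, beta=beta)
print(f"P(choose risky lottery) = {p_risky:.2f}, P(choose ambiguous lottery) = {p_ambiguous:.2f}")
```

Fitting alpha and beta to a subject's observed choices (for example by maximum likelihood over all trials) would then yield the per-trial subjective values that are regressed against the fMRI signal.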
Neuroscience, Issue 67, Medicine, Molecular Biology, fMRI, magnetic resonance imaging, decision-making, value, uncertainty, risk, ambiguity
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Characterization of Surface Modifications by White Light Interferometry: Applications in Ion Sputtering, Laser Ablation, and Tribology Experiments
Authors: Sergey V. Baryshev, Robert A. Erck, Jerry F. Moore, Alexander V. Zinovev, C. Emil Tripa, Igor V. Veryovkin.
Institutions: Argonne National Laboratory, MassThink LLC.
In materials science and engineering it is often necessary to obtain quantitative measurements of surface topography with micrometer lateral resolution. From the measured surface, 3D topographic maps can be subsequently analyzed using a variety of software packages to extract the information that is needed. In this article we describe how white light interferometry, and optical profilometry (OP) in general, combined with generic surface analysis software, can be used for materials science and engineering tasks. Here, a number of applications of white light interferometry for investigation of surface modifications in mass spectrometry, and of wear phenomena in tribology and lubrication, are demonstrated. We characterize the products of the interaction of semiconductors and metals with energetic ions (sputtering), and laser irradiation (ablation), as well as ex situ measurements of wear of tribological test specimens. Specifically, we will discuss: (i) aspects of traditional ion sputtering-based mass spectrometry, such as measurements of sputtering rates/yields on Si and Cu and the subsequent time-to-depth conversion; (ii) results of quantitative characterization of the interaction of femtosecond laser irradiation with a semiconductor surface, which are important for applications such as ablation mass spectrometry, where the quantities of evaporated material can be studied and controlled via pulse duration and energy per pulse, and where determining the crater geometry defines the depth and lateral resolution achievable under given experimental setup conditions; and (iii) measurements of surface roughness parameters in two dimensions, and quantitative measurements of the surface wear that occurs as a result of friction and wear tests. Some inherent drawbacks, possible artifacts, and uncertainty assessments of the white light interferometry approach will be discussed and explained.
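A small illustration of the roughness-parameter part of the analysis, using the standard arithmetic (Ra) and root-mean-square (Rq) definitions on a synthetic height profile standing in for an interferometer line scan:

```python
import numpy as np

def roughness(profile_nm):
    """Arithmetic (Ra) and root-mean-square (Rq) roughness of a 1D height profile."""
    deviations = profile_nm - np.mean(profile_nm)    # heights relative to the mean line
    ra = np.mean(np.abs(deviations))
    rq = np.sqrt(np.mean(deviations ** 2))
    return ra, rq

# Synthetic profile: a gentle waviness plus random roughness, in nm.
rng = np.random.default_rng(3)
x_um = np.linspace(0.0, 500.0, 1000)                 # lateral position, µm
profile = 5.0 * np.sin(2.0 * np.pi * x_um / 80.0) + rng.normal(0.0, 1.0, x_um.size)
ra, rq = roughness(profile)
print(f"Ra = {ra:.2f} nm, Rq = {rq:.2f} nm")
```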
Materials Science, Issue 72, Physics, Ion Beams (nuclear interactions), Light Reflection, Optical Properties, Semiconductor Materials, White Light Interferometry, Ion Sputtering, Laser Ablation, Femtosecond Lasers, Depth Profiling, Time-of-flight Mass Spectrometry, Tribology, Wear Analysis, Optical Profilometry, wear, friction, atomic force microscopy, AFM, scanning electron microscopy, SEM, imaging, visualization
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratory.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
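For a single signal region, the recasting procedure sketched above boils down to comparing the expected signal yield (cross section x luminosity x acceptance x efficiency) with the observed model-independent upper limit on new-physics events. A toy version with placeholder numbers (not real ATLAS or CMS values):

```python
# Toy recast for one signal region; every number below is a placeholder.
cross_section_pb = 0.12              # predicted cross section of the model point, pb
luminosity_ifb = 20.3                # integrated luminosity, fb^-1
acceptance = 0.18                    # fraction of signal events inside the fiducial selection
efficiency = 0.72                    # detector/reconstruction efficiency for accepted events
observed_upper_limit_events = 15.0   # 95% CL model-independent limit on signal events in this region

# pb -> fb conversion factor of 1000 so the units match the fb^-1 luminosity.
expected_signal = cross_section_pb * 1000.0 * luminosity_ifb * acceptance * efficiency

excluded = expected_signal > observed_upper_limit_events
print(f"expected signal events = {expected_signal:.1f}; excluded at 95% CL: {excluded}")
```

The conservative versus aggressive limits mentioned above would then differ in how the acceptance times efficiency is chosen across the contributing simplified-model points.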
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling
Authors: Jennifer L Soong, Dan Reuss, Colin Pinney, Ty Boyack, Michelle L Haddix, Catherine E Stewart, M. Francesca Cotrufo.
Institutions: Colorado State University, USDA-ARS.
Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes, from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes such as 13C with 15N, 18O or 2H has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope-labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e. leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage5-7. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C- and 15N-labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components. We present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. Through this method we have produced plant material with a uniform label of 4.4 atom% 13C and 6.7 atom% 15N, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13C-CO2 atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.
Environmental Sciences, Issue 83, 13C, 15N, plant, stable isotope labeling, Andropogon gerardii, metabolic compounds, structural compounds, hot water extraction
Visualizing Protein-DNA Interactions in Live Bacterial Cells Using Photoactivated Single-molecule Tracking
Authors: Stephan Uphoff, David J. Sherratt, Achillefs N. Kapanidis.
Institutions: University of Oxford.
Protein-DNA interactions are at the heart of many fundamental cellular processes. For example, DNA replication, transcription, repair, and chromosome organization are governed by DNA-binding proteins that recognize specific DNA structures or sequences. In vitro experiments have helped to generate detailed models for the function of many types of DNA-binding proteins, yet the exact mechanisms of these processes and their organization in the complex environment of the living cell remain far less understood. We recently introduced a method for quantifying DNA-repair activities in live Escherichia coli cells using Photoactivated Localization Microscopy (PALM) combined with single-molecule tracking. Our general approach identifies individual DNA-binding events by the change in the mobility of a single protein upon association with the chromosome. The fraction of bound molecules provides a direct quantitative measure for the protein activity and abundance of substrates or binding sites at the single-cell level. Here, we describe the concept of the method and demonstrate sample preparation, data acquisition, and data analysis procedures.
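A minimal sketch of the classification step described above: compute an apparent diffusion coefficient for each single-molecule track from its mean squared frame-to-frame displacement, then count the fraction of tracks below a mobility threshold as "bound". The tracks and the threshold here are hypothetical.

```python
import numpy as np

def apparent_diffusion(track_xy_um, dt_s):
    """Apparent D* from the mean squared one-frame displacement of a 2D track (MSD = 4 * D * dt)."""
    steps = np.diff(track_xy_um, axis=0)
    msd = np.mean(np.sum(steps ** 2, axis=1))
    return msd / (4.0 * dt_s)

rng = np.random.default_rng(4)
dt = 0.015                                   # s between frames
# Two synthetic tracks: one nearly immobile ("bound"), one freely diffusing.
bound_track = np.cumsum(rng.normal(0.0, 0.02, (20, 2)), axis=0)   # µm
free_track = np.cumsum(rng.normal(0.0, 0.25, (20, 2)), axis=0)    # µm

d_threshold = 0.15                           # µm^2/s, placeholder cutoff between bound and mobile
d_values = [apparent_diffusion(trk, dt) for trk in (bound_track, free_track)]
bound_fraction = float(np.mean([d < d_threshold for d in d_values]))
print("D* values (µm^2/s):", [round(d, 3) for d in d_values], "  bound fraction:", bound_fraction)
```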
Immunology, Issue 85, Super-resolution microscopy, single-particle tracking, Live-cell imaging, DNA-binding proteins, DNA repair, molecular diffusion
Measurement of Leaf Hydraulic Conductance and Stomatal Conductance and Their Responses to Irradiance and Dehydration Using the Evaporative Flux Method (EFM)
Authors: Lawren Sack, Christine Scoffoni.
Institutions: University of California, Los Angeles .
Water is a key resource, and the plant water transport system sets limits on maximum growth and drought tolerance. When plants open their stomata to achieve a high stomatal conductance (gs) to capture CO2 for photosynthesis, water is lost by transpiration1,2. Water evaporating from the airspaces is replaced from cell walls, in turn drawing water from the xylem of leaf veins, which in turn draws on xylem in the stems and roots. As water is pulled through the system, it experiences hydraulic resistance, creating tension throughout the system and a low leaf water potential (Ψleaf). The leaf itself is a critical bottleneck in the whole plant system, accounting for, on average, 30% of the plant hydraulic resistance3. Leaf hydraulic conductance (Kleaf = 1/leaf hydraulic resistance) is the ratio of the water flow rate to the water potential gradient across the leaf, and summarizes the behavior of a complex system: water moves through the petiole and through several orders of veins, exits into the bundle sheath and passes through or around mesophyll cells before evaporating into the airspace and being transpired from the stomata. Kleaf is of strong interest as an important physiological trait to compare species, quantifying the effectiveness of the leaf structure and physiology for water transport, and as a key variable to investigate for its relationship to variation in structure (e.g., in leaf venation architecture) and its impacts on photosynthetic gas exchange. Further, Kleaf responds strongly to the internal and external leaf environment3. Kleaf can increase dramatically with irradiance, apparently due to changes in the expression and activation of aquaporins, the proteins involved in water transport through membranes4, and Kleaf declines strongly during drought, due to cavitation and/or collapse of xylem conduits, and/or loss of permeability in the extra-xylem tissues due to mesophyll and bundle sheath cell shrinkage or aquaporin deactivation5-10. Because Kleaf can constrain gs and photosynthetic rate across species in well-watered conditions and during drought, and thus limit whole-plant performance, it may help determine species distributions, especially as droughts increase in frequency and severity11-14. We present a simple method for simultaneous determination of Kleaf and gs on excised leaves. A transpiring leaf is connected by its petiole to tubing running to a water source on a balance. The loss of water from the balance is recorded to calculate the flow rate through the leaf. When steady-state transpiration (E, mmol • m-2 • s-1) is reached, gs is determined by dividing E by the vapor pressure deficit, and Kleaf by dividing E by the water potential driving force determined using a pressure chamber (Kleaf = E / -Δψleaf, with Δψleaf in MPa)15. This method can be used to assess Kleaf responses to different irradiances and the vulnerability of Kleaf to dehydration14,16,17.
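The closing calculation in this method reduces to two divisions once steady state is reached; a sketch with hypothetical readings (flow rate from the balance, leaf area, vapor pressure deficit, and final leaf water potential) is:

```python
# Hypothetical steady-state readings for one leaf; units follow the definitions given in the text.
flow_mmol_per_s = 0.045    # transpirational flow read from the balance, mmol H2O per s
leaf_area_m2 = 0.0065      # leaf area used to normalize the flow, m^2
vpd_kpa = 1.8              # leaf-to-air vapor pressure deficit, kPa
atm_kpa = 101.3            # atmospheric pressure, kPa (to express VPD as a mole fraction)
psi_leaf_mpa = -0.65       # final leaf water potential from the pressure chamber, MPa (source water near 0)

E = flow_mmol_per_s / leaf_area_m2            # transpiration rate, mmol m^-2 s^-1
gs = E / (vpd_kpa / atm_kpa)                  # stomatal conductance, mmol m^-2 s^-1
k_leaf = E / (0.0 - psi_leaf_mpa)             # Kleaf = E / -delta(psi), mmol m^-2 s^-1 MPa^-1

print(f"E = {E:.1f} mmol m-2 s-1, gs = {gs:.0f} mmol m-2 s-1, Kleaf = {k_leaf:.1f} mmol m-2 s-1 MPa-1")
```

Normalizing VPD by atmospheric pressure is one common convention for expressing the driving force as a mole fraction; the exact handling may differ in the published protocol.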
Plant Biology, Issue 70, Molecular Biology, Physiology, Ecology, Biology, Botany, Leaf traits, hydraulics, stomata, transpiration, xylem, conductance, leaf hydraulic conductance, resistance, evaporative flux method, whole plant
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
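As a minimal, generic illustration of the covariance-based idea (a principal component analysis of region-by-region activation with made-up data; the analyses in the article itself go well beyond this):

```python
import numpy as np

# Made-up data: 40 subjects x 10 brain regions, built around one latent covariance pattern.
rng = np.random.default_rng(5)
network = np.array([1.0, 0.8, 0.9, 0.0, 0.0, 0.7, 0.0, 0.0, 0.6, 0.0])
subject_expression = rng.normal(0.0, 1.0, 40)
data = np.outer(subject_expression, network) + rng.normal(0.0, 0.3, (40, 10))

# Principal component analysis of the region-by-region covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
pattern = eigvecs[:, -1]                      # leading covariance pattern across regions
scores = centered @ pattern                   # one expression score per subject

# The subject scores, rather than single voxels or regions, are then related to behavior or group.
print("variance explained by the leading pattern:", round(float(eigvals[-1] / eigvals.sum()), 2))
```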
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.