Nuclear magnetic resonance (NMR) spectroscopy and imaging (MRI) suffer from intrinsically low sensitivity because even strong external magnetic fields of ~10 T generate only a small detectable net magnetization of the sample at room temperature 1. Hence, most NMR and MRI applications rely on the detection of molecules at relatively high concentrations (e.g., water for imaging of biological tissue) or require excessively long acquisition times. This limits our ability to exploit the very useful molecular specificity of NMR signals for many biochemical and medical applications. However, novel approaches have emerged in the past few years: manipulation of the detected spin species prior to detection inside the NMR/MRI magnet can dramatically increase the magnetization and therefore allows detection of molecules at much lower concentrations 2.
Here, we present a method for polarization of a xenon gas mixture (2-5% Xe, 10% N2, He balance) in a compact setup with a ca. 16,000-fold signal enhancement. Modern line-narrowed diode lasers allow efficient polarization 7 and immediate use of the gas mixture, even if the noble gas is not separated from the other components. The spin-exchange optical pumping (SEOP) apparatus is explained, and determination of the achieved spin polarization is demonstrated for performance control of the method.
The hyperpolarized gas can be used for void space imaging, including gas flow imaging or diffusion studies at the interfaces with other materials 8,9. Moreover, the Xe NMR signal is extremely sensitive to its molecular environment 6. This makes it possible to use the gas as an NMR/MRI contrast agent when dissolved in aqueous solution together with functionalized molecular hosts that temporarily trap it 10,11. Direct detection and high-sensitivity indirect detection of such constructs are demonstrated in both spectroscopic and imaging modes.
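The scale of the reported ~16,000-fold enhancement can be put in context with the thermal-equilibrium polarization given by the Boltzmann distribution. A minimal sketch, assuming a 9.4 T field, room temperature, and the 129Xe gyromagnetic ratio magnitude (illustrative values, not taken from the article):

```python
import math

def thermal_polarization(gamma_bar_hz_per_t, b0_tesla, temp_k):
    """Thermal-equilibrium polarization of a spin-1/2 nucleus:
    P = tanh(h*nu / (2*k*T)), with Larmor frequency nu = (gamma/2pi) * B0."""
    h = 6.62607015e-34   # Planck constant, J*s
    k = 1.380649e-23     # Boltzmann constant, J/K
    nu = gamma_bar_hz_per_t * b0_tesla
    return math.tanh(h * nu / (2 * k * temp_k))

# 129Xe: |gamma/2pi| ~ 11.777 MHz/T; assumed 9.4 T magnet at 298 K
p_thermal = thermal_polarization(11.777e6, 9.4, 298.0)
p_hyper = 16000 * p_thermal  # enhancement factor quoted in the text
print(f"thermal P ~ {p_thermal:.2e}, hyperpolarized P ~ {p_hyper:.1%}")
```

Under these assumptions the thermal polarization is on the order of 1e-5, so a 16,000-fold enhancement corresponds to a spin polarization of roughly 10-15%.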
A Novel Rescue Technique for Difficult Intubation and Difficult Ventilation
Institutions: Children’s Hospital of Michigan, St. Jude Children’s Research Hospital.
We describe a novel nonsurgical technique to maintain oxygenation and ventilation in a case of difficult intubation and difficult ventilation, which works especially well with poor mask fit.
"Cannot intubate, cannot ventilate" (CICV) is a potentially life-threatening situation. In this video we present a simulation of the technique we used in a case of CICV where oxygenation and ventilation were maintained by inserting an endotracheal tube (ETT) nasally down to the level of the nasopharynx while sealing the mouth and nares for successful positive pressure ventilation.
A 13-year-old patient was taken to the operating room for incision and drainage of a neck abscess and direct laryngobronchoscopy. After preoxygenation, anesthesia was induced intravenously. Mask ventilation was found to be extremely difficult because of swelling of the soft tissue. The face mask could not fit properly on the face due to significant facial swelling as well. Direct laryngoscopy was attempted with no visualization of the larynx. Oxygen saturation was difficult to maintain, with saturations falling to 80%. In order to oxygenate and ventilate the patient, an endotracheal tube was then inserted nasally after application of a nasal decongestant spray and lubricant. The tube was advanced gently and blindly into the hypopharynx. The mouth and nose of the patient were sealed by hand, and positive pressure ventilation was possible with 100% O2
with good oxygen saturation during that period of time. Once the patient was stable and well sedated, a rigid bronchoscope was introduced by the otolaryngologist, showing extensive subglottic and epiglottic edema and a mass effect from the abscess contributing to the airway compromise. The airway was secured with an ETT by the otolaryngologist. This video shows a simulation of the technique on a patient undergoing general anesthesia for dental restorations.
Medicine, Issue 47, difficult ventilation, difficult intubation, nasal, saturation
Evaluation of Integrated Anaerobic Digestion and Hydrothermal Carbonization for Bioenergy Production
Institutions: Leibniz Institute for Agricultural Engineering.
Lignocellulosic biomass is one of the most abundant yet underutilized renewable energy resources. Both anaerobic digestion (AD) and hydrothermal carbonization (HTC) are promising technologies for bioenergy production from biomass, in the form of biogas and HTC biochar, respectively. In this study, the combination of AD and HTC is proposed to increase overall bioenergy production. Wheat straw was anaerobically digested in a novel upflow anaerobic solid state reactor (UASS) under both mesophilic (37 °C) and thermophilic (55 °C) conditions. Wet digestate from thermophilic AD was hydrothermally carbonized at 230 °C for 6 hr for HTC biochar production. At thermophilic temperature, the UASS system yielded an average of 165 L CH4/kgVS (VS: volatile solids), compared with 121 L CH4/kgVS for mesophilic AD, over 200 days of continuous operation. Meanwhile, 43.4 g of HTC biochar with a calorific value of 29.6 MJ/kg (dry biochar) was obtained from HTC of 1 kg digestate (dry basis) from mesophilic AD. The combination of AD and HTC, in this particular set of experiments, yielded 13.2 MJ of energy per 1 kg of dry wheat straw, which is at least 20% higher than HTC alone and 60.2% higher than AD alone.
Environmental Sciences, Issue 88, Biomethane, Hydrothermal Carbonization (HTC), Calorific Value, Lignocellulosic Biomass, UASS, Anaerobic Digestion
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite for a better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derivation of complex physiological properties of TATS membrane networks in living myocytes, with high throughput and open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Metabolomic Analysis of Rat Brain by High Resolution Nuclear Magnetic Resonance Spectroscopy of Tissue Extracts
Institutions: Aix-Marseille Université, Aix-Marseille Université.
Studies of gene expression on the RNA and protein levels have long been used to explore biological processes underlying disease. More recently, genomics and proteomics have been complemented by comprehensive quantitative analysis of the metabolite pool present in biological systems. This strategy, termed metabolomics, strives to provide a global characterization of the small-molecule complement involved in metabolism. While the genome and the proteome define the tasks cells can perform, the metabolome is part of the actual phenotype. Among the methods currently used in metabolomics, spectroscopic techniques are of special interest because they allow one to simultaneously analyze a large number of metabolites without prior selection for specific biochemical pathways, thus enabling a broad unbiased approach. Here, an optimized experimental protocol for metabolomic analysis by high-resolution NMR spectroscopy is presented, which is the method of choice for efficient quantification of tissue metabolites. Important strengths of this method are (i) the use of crude extracts, without the need to purify the sample and/or separate metabolites; (ii) the intrinsically quantitative nature of NMR, permitting quantitation of all metabolites represented by an NMR spectrum with one reference compound only; and (iii) the nondestructive nature of NMR enabling repeated use of the same sample for multiple measurements. The dynamic range of metabolite concentrations that can be covered is considerable due to the linear response of NMR signals, although metabolites occurring at extremely low concentrations may be difficult to detect. For the least abundant compounds, the highly sensitive mass spectrometry method may be advantageous although this technique requires more intricate sample preparation and quantification procedures than NMR spectroscopy. 
We present here an NMR protocol adjusted to rat brain analysis; however, the same protocol can be applied to other tissues with minor modifications.
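The "one reference compound only" quantitation described above follows from the linear response of NMR: a metabolite's concentration is its integral relative to the reference integral, scaled by the number of contributing protons. A minimal sketch with entirely hypothetical values (metabolite, reference, and integrals are examples, not data from the protocol):

```python
def metabolite_concentration(integral, n_protons,
                             ref_integral, ref_n_protons, ref_conc_mm):
    """Concentration from NMR peak integrals:
    c = (I / n) / (I_ref / n_ref) * c_ref,
    valid for fully relaxed, quantitative spectra."""
    return (integral / n_protons) / (ref_integral / ref_n_protons) * ref_conc_mm

# Hypothetical: a lactate CH3 resonance (3 protons) quantified against
# a TSP reference (9 equivalent protons, 1.0 mM)
c_lac = metabolite_concentration(integral=4.5, n_protons=3,
                                 ref_integral=9.0, ref_n_protons=9,
                                 ref_conc_mm=1.0)
print(f"lactate ~ {c_lac:.2f} mM")  # -> 1.50 mM
```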
Neuroscience, Issue 91, metabolomics, brain tissue, rodents, neurochemistry, tissue extracts, NMR spectroscopy, quantitative metabolite analysis, cerebral metabolism, metabolic profile
Analysis of Volatile and Oxidation Sensitive Compounds Using a Cold Inlet System and Electron Impact Mass Spectrometry
Institutions: Bielefeld University.
This video presents a protocol for the mass spectrometric analysis of volatile and oxidation-sensitive compounds using electron impact ionization. The analysis of volatile and oxidation-sensitive compounds by mass spectrometry is not easily achieved, as all state-of-the-art mass spectrometric methods require at least one sample preparation step (e.g., dissolution and dilution of the analyte for electrospray ionization, co-crystallization of the analyte with a matrix compound for matrix-assisted laser desorption/ionization, or transfer of the prepared samples into the ionization source of the mass spectrometer) to be conducted under atmospheric conditions. Here, the use of a sample inlet system is described which enables the analysis of volatile metal organyls, silanes, and phosphanes using a sector field mass spectrometer equipped with an electron impact ionization source. All sample preparation steps and the sample introduction into the ion source of the mass spectrometer take place either under air-free conditions or under vacuum, enabling the analysis of compounds highly susceptible to oxidation. The presented technique is of particular interest for inorganic chemists working with metal organyls, silanes, or phosphanes, which have to be handled under inert conditions, such as with the Schlenk technique. The principle of operation is presented in this video.
Chemistry, Issue 91, mass spectrometry, electron impact, inlet system, volatile, air sensitive
Workflow for High-content, Individual Cell Quantification of Fluorescent Markers from Universal Microscope Data, Supported by Open Source Software
Institutions: UCL Cancer Institute.
Advances in understanding the control mechanisms governing the behavior of cells in adherent mammalian tissue culture models are becoming increasingly dependent on modes of single-cell analysis. Methods which deliver composite data reflecting the mean values of biomarkers from cell populations risk losing subpopulation dynamics that reflect the heterogeneity of the studied biological system. In keeping with this, traditional approaches are being replaced by, or supported with, more sophisticated forms of cellular assay developed to allow assessment by high-content microscopy. These assays potentially generate large numbers of images of fluorescent biomarkers which, enabled by accompanying proprietary software packages, allow multi-parametric measurements per cell. However, the relatively high capital costs and overspecialization of many of these devices have prevented their accessibility to many investigators.
Described here is a universally applicable workflow for the quantification of multiple fluorescent marker intensities from specific subcellular regions of individual cells, suitable for use with images from most fluorescence microscopes. Key to this workflow is the implementation of the freely available CellProfiler software1 to distinguish individual cells in these images, segment them into defined subcellular regions, and deliver fluorescence marker intensity values specific to these regions. The extraction of individual cell intensity values from image data is the central purpose of this workflow and will be illustrated with the analysis of control data from an siRNA screen for G1 checkpoint regulators in adherent human cells. However, the workflow presented here can be applied to the analysis of data from other means of cell perturbation (e.g., compound screens) and other forms of fluorescence-based cellular markers, and thus should be useful for a wide range of laboratories.
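The core operation of such a workflow, identifying objects and then reporting an intensity value per object, can be approximated with standard scientific-Python tools. A hedged sketch on a synthetic image (this is not the published CellProfiler pipeline, just the same segment-then-measure idea using thresholding and connected-component labeling):

```python
import numpy as np
from scipy import ndimage

# Synthetic "fluorescence" image: two bright cells on a dark background.
img = np.zeros((64, 64))
img[10:20, 10:20] = 100.0   # cell 1
img[40:55, 40:50] = 200.0   # cell 2
img += np.random.default_rng(0).normal(0, 1, img.shape)  # camera noise

# Segment: global threshold, then label connected components as "cells".
mask = img > 50
labels, n_cells = ndimage.label(mask)

# Per-cell mean marker intensity: the per-object readout the workflow delivers.
means = ndimage.mean(img, labels=labels, index=range(1, n_cells + 1))
print(n_cells, [f"{m:.0f}" for m in means])
```

In a real pipeline the global threshold would be replaced by adaptive thresholding and seeded watershed splitting of touching cells, which is exactly the kind of step CellProfiler modules handle.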
Cellular Biology, Issue 94, Image analysis, High-content analysis, Screening, Microscopy, Individual cell analysis, Multiplexed assays
Technique of Porcine Liver Procurement and Orthotopic Transplantation using an Active Porto-Caval Shunt
Institutions: Toronto General Hospital.
The success of liver transplantation has resulted in a dramatic organ shortage. Each year, a considerable number of patients on the liver transplantation waiting list die without receiving an organ transplant or are delisted due to disease progression. Even after a successful transplantation, rejection and side effects of immunosuppression remain major concerns for graft survival and patient morbidity.
Experimental animal research has been essential to the success of liver transplantation and still plays a pivotal role in the development of clinical transplantation practice. In particular, the porcine orthotopic liver transplantation model (OLTx) is optimal for clinically oriented research for its close resemblance to human size, anatomy, and physiology.
Decompression of intestinal congestion during the anhepatic phase of porcine OLTx is important to guarantee reliable animal survival. The use of an active porto-caval-jugular shunt achieves excellent intestinal decompression. The system can be used for short-term as well as long-term survival experiments. The following protocol contains all technical information for a stable and reproducible liver transplantation model in pigs including post-operative animal care.
Medicine, Issue 99, Orthotopic Liver Transplantation, Hepatic, Porcine Model, Pig, Experimental, Transplantation, Graft Preservation, Ischemia Reperfusion Injury, Transplant Immunology, Bile Duct Reconstruction, Animal Handling
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g., carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available their Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for physical and chemical characteristics of biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals, and mercury, as well as nutrients (phosphorus, nitrite, nitrate, and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance/quality control (QA/QC) results of blanks, duplicates, standards, and reference materials, all methods were determined to be adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements, or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
Electrochemically and Bioelectrochemically Induced Ammonium Recovery
Institutions: Ghent University, Rutgers University.
Streams such as urine and manure can contain high levels of ammonium, which could be recovered for reuse in agriculture or chemistry. The extraction of ammonium from an ammonium-rich stream is demonstrated using an electrochemical and a bioelectrochemical system. Both systems are controlled by a potentiostat to either fix the current (for the electrochemical cell) or fix the potential of the working electrode (for the bioelectrochemical cell). In the bioelectrochemical cell, electroactive bacteria catalyze the anodic reaction, whereas in the electrochemical cell the potentiostat applies a higher voltage to produce a current. The current and consequent restoration of the charge balance across the cell allow the transport of cations, such as ammonium, across a cation exchange membrane from the anolyte to the catholyte. The high pH of the catholyte leads to formation of ammonia, which can be stripped from the medium and captured in an acid solution, thus enabling the recovery of a valuable nutrient. The flux of ammonium across the membrane is characterized at different anolyte ammonium concentrations and currents for both the abiotic and biotic reactor systems. Both systems are compared based on current and removal efficiencies for ammonium, as well as the energy input required to drive ammonium transfer across the cation exchange membrane. Finally, a comparative analysis considering key aspects such as reliability, electrode cost, and rate is made.
This video article and protocol provide the necessary information to conduct electrochemical and bioelectrochemical ammonia recovery experiments. The reactor setup for the two cases is explained, as well as the reactor operation. We elaborate on data analysis for both reactor types and on the advantages and disadvantages of bioelectrochemical and electrochemical systems.
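The link between applied current and the maximum ammonium transport across the cation exchange membrane follows Faraday's law: each mole of monovalent NH4+ moved requires one mole of electron charge. A minimal sketch with illustrative current and charge-transfer efficiency values (not data from the study):

```python
F = 96485.0   # Faraday constant, C/mol
M_N = 14.007  # molar mass of nitrogen, g/mol

def nh4_transport_g_n_per_day(current_a, charge_efficiency):
    """Upper-bound NH4+ transport (expressed as g N/day) across the membrane.
    NH4+ carries a single charge, so mol/s = I * CE / F."""
    mol_per_s = current_a * charge_efficiency / F
    return mol_per_s * 86400 * M_N

# Hypothetical operating point: 0.5 A cell current, 80% of charge carried by NH4+
print(f"{nh4_transport_g_n_per_day(0.5, 0.8):.1f} g N/day")
```

This kind of back-of-envelope bound is useful when comparing the abiotic and biotic reactors on current efficiency: measured recovery below the Faradaic limit indicates that other cations are carrying part of the charge.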
Chemistry, Issue 95, Electrochemical extraction, bioelectrochemical system, bioanode, ammonium recovery, microbial electrocatalysis, nutrient recovery, electrolysis cell
Making Record-efficiency SnS Solar Cells by Thermal Evaporation and Atomic Layer Deposition
Institutions: Massachusetts Institute of Technology, Massachusetts Institute of Technology, Harvard University, Massachusetts Institute of Technology, Harvard University.
Tin sulfide (SnS) is a candidate absorber material for Earth-abundant, non-toxic solar cells. SnS offers easy phase control and rapid growth by congruent thermal evaporation, and it absorbs visible light strongly. However, for a long time the record power conversion efficiency of SnS solar cells remained below 2%. Recently we demonstrated new certified record efficiencies of 4.36% using SnS deposited by atomic layer deposition, and 3.88% using thermal evaporation. Here the fabrication procedure for these record solar cells is described, and the statistical distribution of the fabrication process is reported. The standard deviation of efficiency measured on a single substrate is typically over 0.5%. All steps, including substrate selection and cleaning, Mo sputtering for the rear contact (cathode), SnS deposition, annealing, surface passivation, Zn(O,S) buffer layer selection and deposition, transparent conductor (anode) deposition, and metallization, are described. On each substrate we fabricate 11 individual devices, each with an active area of 0.25 cm2. Further, a system for high-throughput measurement of current-voltage curves under simulated solar light, and of external quantum efficiency with variable light bias, is described. With this system we are able to measure full data sets on all 11 devices in an automated manner and in minimal time. These results illustrate the value of studying large sample sets, rather than focusing narrowly on the highest-performing devices. Large data sets help us to distinguish and remedy individual loss mechanisms affecting our devices.
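Extracting power conversion efficiency from each measured current-voltage curve amounts to locating the maximum power point. A sketch on a synthetic single-diode J-V curve; all device parameters below are hypothetical, not the published SnS values:

```python
import numpy as np

def efficiency_from_jv(v, j_ma_cm2, p_in_mw_cm2=100.0):
    """Power conversion efficiency and fill factor from a J-V sweep.
    p_in defaults to the AM1.5G standard (100 mW/cm^2)."""
    p = v * j_ma_cm2                        # power density, mW/cm^2
    p_max = p.max()                         # maximum power point
    j_sc = j_ma_cm2[0]                      # short-circuit current (sweep starts at V = 0)
    v_oc = v[np.argmin(np.abs(j_ma_cm2))]   # open-circuit voltage (J crosses zero)
    ff = p_max / (j_sc * v_oc)
    return p_max / p_in_mw_cm2, ff

# Synthetic diode-model curve: Jsc = 20 mA/cm^2, thermal voltage term 0.05 V
v = np.linspace(0, 0.45, 1000)
j = 20.0 - 1e-2 * (np.exp(v / 0.05) - 1)
eta, ff = efficiency_from_jv(v, j)
print(f"efficiency {eta:.1%}, fill factor {ff:.2f}")
```

Running the same routine over all 11 devices per substrate yields the per-substrate efficiency distributions the abstract emphasizes.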
Engineering, Issue 99, Solar cells, thin films, thermal evaporation, atomic layer deposition, annealing, tin sulfide
Metal-silicate Partitioning at High Pressure and Temperature: Experimental Methods and a Protocol to Suppress Highly Siderophile Element Inclusions
Institutions: University of Toronto, Carnegie Institution of Washington.
Estimates of the primitive upper mantle (PUM) composition reveal a depletion in many of the siderophile (iron-loving) elements, thought to result from their extraction to the core during terrestrial accretion. Experiments to investigate the partitioning of these elements between metal and silicate melts suggest that the PUM composition is best matched if metal-silicate equilibrium occurred at high pressures and temperatures, in a deep magma ocean environment. The behavior of the most highly siderophile elements (HSEs) during this process, however, has remained enigmatic. Silicate run-products from HSE solubility experiments are commonly contaminated by dispersed metal inclusions that hinder the measurement of element concentrations in the melt. The resulting uncertainty over the true solubility and metal-silicate partitioning of these elements has made it difficult to predict their expected depletion in PUM. Recently, several studies have employed changes to the experimental design used for high pressure and temperature solubility experiments in order to suppress the formation of metal inclusions. The addition of Au (Re, Os, Ir, Ru experiments) or elemental Si (Pt experiments) to the sample acts to alter either the geometry or the rate of sample reduction, respectively, in order to avoid transient metal oversaturation of the silicate melt. This contribution outlines procedures for using the piston-cylinder and multi-anvil apparatus to conduct solubility and metal-silicate partitioning experiments, respectively. A protocol is also described for the synthesis of uncontaminated run-products from HSE solubility experiments in which the oxygen fugacity is similar to that during terrestrial core formation. Time-resolved LA-ICP-MS spectra are presented as evidence for the absence of metal inclusions in run-products from earlier studies, and also confirm that the technique may be extended to investigate Ru. Examples are also given of how these data may be applied.
Chemistry, Issue 100, siderophile elements, geoengineering, primitive upper mantle (PUM), HSEs, terrestrial accretion
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Institutions: University of California Merced, University of California Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
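The carving step described in the first method keeps only those bulk atoms lying within a cutoff distance of the helix centerline x = r cos t, y = r sin t, z = p t / (2π). The original work uses AWK and C++; the same selection can be sketched in Python (dimensions and cutoff below are arbitrary, and the "bulk" is random mock data rather than a silica glass model):

```python
import numpy as np

def helix_point(t, radius, pitch):
    """Point on a helix centerline: x = r cos t, y = r sin t, z = p*t/(2*pi)."""
    return np.array([radius * np.cos(t),
                     radius * np.sin(t),
                     pitch * t / (2 * np.pi)])

def carve_helix(atoms, radius, pitch, cutoff, t_max, n_samples=500):
    """Keep atoms lying within `cutoff` of the helix centerline (brute force)."""
    t = np.linspace(0.0, t_max, n_samples)
    curve = np.stack([helix_point(ti, radius, pitch) for ti in t])  # (n_samples, 3)
    # Distance of every atom to its nearest sampled centerline point.
    d = np.linalg.norm(atoms[:, None, :] - curve[None, :, :], axis=2).min(axis=1)
    return atoms[d < cutoff]

rng = np.random.default_rng(1)
bulk = rng.uniform([-15, -15, 0], [15, 15, 30], size=(2000, 3))  # mock bulk positions
spring = carve_helix(bulk, radius=10.0, pitch=15.0, cutoff=2.0, t_max=4 * np.pi)
print(f"kept {len(spring)} of {len(bulk)} atoms")
```

One full turn (t advancing by 2π) raises the centerline by exactly the pitch, which is the defining property the parametric equations encode.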
Physics, Issue 93, Helical atomistic models; open-source coding; graphical user interface; visualization software; molecular dynamics simulations; graphical processing unit accelerated simulations.
In Situ SIMS and IR Spectroscopy of Well-defined Surfaces Prepared by Soft Landing of Mass-selected Ions
Institutions: Pacific Northwest National Laboratory.
Soft landing of mass-selected ions onto surfaces is a powerful approach for the highly-controlled preparation of materials that are inaccessible using conventional synthesis techniques. Coupling soft landing with in situ
characterization using secondary ion mass spectrometry (SIMS) and infrared reflection absorption spectroscopy (IRRAS) enables analysis of well-defined surfaces under clean vacuum conditions. The capabilities of three soft-landing instruments constructed in our laboratory are illustrated for the representative system of surface-bound organometallics prepared by soft landing of mass-selected ruthenium tris(bipyridine) dications, [Ru(bpy)3
(bpy = bipyridine), onto carboxylic acid terminated self-assembled monolayer surfaces on gold (COOH-SAMs). In situ
time-of-flight (TOF)-SIMS provides insight into the reactivity of the soft-landed ions. In addition, the kinetics of charge reduction, neutralization and desorption occurring on the COOH-SAM both during and after ion soft landing are studied using in situ
Fourier transform ion cyclotron resonance (FT-ICR)-SIMS measurements. In situ
IRRAS experiments provide insight into how the structure of organic ligands surrounding metal centers is perturbed through immobilization of organometallic ions on COOH-SAM surfaces by soft landing. Collectively, the three instruments provide complementary information about the chemical composition, reactivity and structure of well-defined species supported on surfaces.
Chemistry, Issue 88, soft landing, mass selected ions, electrospray, secondary ion mass spectrometry, infrared spectroscopy, organometallic, catalysis
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
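The time-stamped event records described above lend themselves to simple vectorized analyses. A hedged Python sketch, analogous in spirit to the MATLAB-based routines (the event codes and timestamps here are invented, not the system's actual record format):

```python
import numpy as np

# Hypothetical event record: rows of (timestamp_s, event_code); code 1 = head entry
events = np.array([[10.0, 1], [12.5, 1], [13.1, 2], [20.0, 1], [95.0, 1]])

head_entries = events[events[:, 1] == 1, 0]  # timestamps of head entries
intervals = np.diff(head_entries)            # inter-response intervals, s

# Bin responses into fixed windows (e.g. 60 s) to track progress over time
session_len = 60.0
counts = np.bincount((head_entries // session_len).astype(int))
print(intervals, counts)
```

Keeping the raw event record and deriving all statistics from it, as here, preserves the full data trail from raw data to published numbers that the abstract emphasizes.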
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Cerebral Blood Oxygenation Measurement Based on Oxygen-dependent Quenching of Phosphorescence
Institutions: Massachusetts General Hospital and Harvard Medical School, University of Pennsylvania, Massachusetts General Hospital and Harvard Medical School, University of California.
Monitoring of the spatiotemporal characteristics of cerebral blood and tissue oxygenation is crucial for better understanding of the neuro-metabolic-vascular relationship. Development of new pO2 measurement modalities with simultaneous monitoring of pO2 in larger fields of view with higher spatial and/or temporal resolution will enable greater insight into the functioning of the normal brain and will also have significant impact on the diagnosis and treatment of neurovascular diseases such as stroke, Alzheimer's disease, and head injury.
Optical imaging modalities have shown a great potential to provide high spatiotemporal resolution and quantitative imaging of pO2 based on hemoglobin absorption in visible and near infrared range of optical spectrum. However, multispectral measurement of cerebral blood oxygenation relies on photon migration through the highly scattering brain tissue. Estimation and modeling of tissue optical parameters, which may undergo dynamic changes during the experiment, is typically required for accurate estimation of blood oxygenation. On the other hand, estimation of the partial pressure of oxygen (pO2) based on oxygen-dependent quenching of phosphorescence should not be significantly affected by the changes in the optical parameters of the tissue and provides an absolute measure of pO2. Experimental systems that utilize oxygen-sensitive dyes have been demonstrated in in vivo studies of the perfused tissue as well as for monitoring the oxygen content in tissue cultures, showing that phosphorescence quenching is a potent technology capable of accurate oxygen imaging in the physiological pO2 range.
Here we demonstrate, with two different imaging modalities, how to measure pO2 in the cortical vasculature based on phosphorescence lifetime imaging. In the first demonstration we present wide-field-of-view imaging of pO2 at the cortical surface of a rat. This modality has a relatively simple experimental setup based on a CCD camera and a pulsed green laser. An example of monitoring cortical spreading depression based on the phosphorescence lifetime of the Oxyphor R3 dye is presented. In the second demonstration we present high-resolution two-photon pO2 imaging in the cortical microvasculature of a mouse. The experimental setup includes a custom-built two-photon microscope with a femtosecond laser, an electro-optic modulator, and a photon-counting photomultiplier tube. We present an example of imaging pO2 heterogeneity in the cortical microvasculature, including capillaries, using a novel PtP-C343 dye with an enhanced two-photon excitation cross section.
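In both demonstrations, pO2 is recovered from the oxygen-dependent shortening of the phosphorescence lifetime via the Stern-Volmer relation, 1/τ = 1/τ0 + kq·pO2. The sketch below illustrates that conversion; the default τ0 and kq values are illustrative placeholders, not calibration constants for Oxyphor R3 or PtP-C343 (real values come from probe calibration).

```python
def po2_from_lifetime(tau_us, tau0_us=650.0, kq_per_mmHg_s=250.0):
    """Convert a measured phosphorescence lifetime (microseconds) to pO2 (mmHg)
    via the Stern-Volmer relation 1/tau = 1/tau0 + kq * pO2.

    tau0_us: zero-oxygen lifetime; kq_per_mmHg_s: quenching constant.
    Both defaults are illustrative, not calibrated probe values.
    """
    inv_tau = 1.0 / (tau_us * 1e-6)    # 1/s
    inv_tau0 = 1.0 / (tau0_us * 1e-6)  # 1/s
    return (inv_tau - inv_tau0) / kq_per_mmHg_s
```

At the zero-oxygen lifetime the relation returns pO2 = 0; shorter measured lifetimes map to higher oxygen tension, which is why lifetime (rather than intensity) imaging is robust to changes in tissue optical parameters.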
Click here to view the related article Synthesis and Calibration of Phosphorescent Nanoprobes for Oxygen Imaging in Biological Systems.
Neuroscience, Issue 51, brain, blood, oxygenation, phosphorescence, imaging
Fabrication and Operation of an Oxygen Insert for Adherent Cellular Cultures
Institutions: University of Illinois.
Oxygen is a key modulator of many cellular pathways, but current devices permitting in vitro oxygen modulation fail to meet the needs of biomedical research. The hypoxic chamber offers a simple system to control oxygenation in standard culture vessels, but lacks precise temporal and spatial control over the oxygen concentration at the cell surface, preventing its application in studying a variety of physiological phenomena. Other systems have improved upon the hypoxic chamber, but require specialized knowledge and equipment for their operation, making them intimidating for the average researcher. A microfabricated insert for multiwell plates has been developed to more effectively control the temporal and spatial oxygen concentration and to better model physiological phenomena found in vivo. The platform consists of a polydimethylsiloxane (PDMS) insert that nests into a standard multiwell plate and serves as a passive microfluidic gas network with a gas-permeable membrane that modulates oxygen delivery to adherent cells. The device is simple to use and is connected to gas cylinders that provide the pressure to introduce the desired oxygen concentration into the platform. Fabrication involves a combination of standard SU-8 photolithography, replica molding, and defined PDMS spinning on a silicon wafer. The components of the device are bonded after surface treatment using a hand-held plasma system. Validation is accomplished with a planar fluorescent oxygen sensor. Equilibration time is on the order of minutes, and a wide variety of oxygen profiles can be attained based on the device design, such as the cyclic profile achieved in this study, and even oxygen gradients to mimic those found in vivo. The device can be sterilized for cell culture using common methods without loss of function. The device's applicability to studying the in vitro wound healing response will be demonstrated.
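The minutes-scale equilibration is consistent with a simple order-of-magnitude estimate from the characteristic diffusion time t ≈ L²/D. The sketch below uses illustrative layer thicknesses and oxygen diffusivities (assumptions for this estimate, not values from the protocol):

```python
def diffusion_time_s(thickness_m, diffusivity_m2_s):
    """Characteristic time t ~ L^2 / D for a species to diffuse across a layer."""
    return thickness_m ** 2 / diffusivity_m2_s

# O2 crossing a ~100 um PDMS membrane (D ~ 3.4e-9 m^2/s in PDMS): seconds.
t_pdms = diffusion_time_s(100e-6, 3.4e-9)

# O2 crossing ~1 mm of culture medium (D ~ 3e-9 m^2/s in water at 37 C):
# a few hundred seconds, i.e. minutes.
t_media = diffusion_time_s(1e-3, 3.0e-9)
```

Under these assumptions the liquid layer above the cells, not the PDMS membrane, dominates the response time, which is why thin medium layers and membrane-adjacent delivery give the platform its fast, controllable oxygen dynamics.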
Cellular Biology, Issue 35, hypoxia, cell, culture, control, wound, healing, oxygen, microfluidic device, bioengineering
Closed System Cell Culture Protocol Using HYPERStack Vessels with Gas Permeable Material Technology
Institutions: Corning Life Science, Corning Life Science, Corning Life Science.
Large-volume adherent cell culture is currently standardized on stacked-plate cell growth products when microcarrier beads are not an optimal choice. HYPERStack vessels allow closed-system scale-up from the current stacked-plate products and deliver >2.5X more cells in the same volumetric footprint. The HYPERStack vessels function via a gas-permeable material that allows gas exchange to occur, eliminating the need for internal headspace within the vessel. The elimination of headspace allows the compartment where cell growth occurs to be minimized, allowing more layers of cell growth surface area within the same volumetric footprint.
For many applications such as cell therapy or vaccine production, a closed system is required for cell growth and harvesting. The HYPERStack vessel allows cell and reagent addition and removal via tubing from media bags or other methods.
This protocol will explain the technology behind the gas permeable material used in the HYPERStack vessels, gas diffusion results to meet the metabolic needs of cells, closed system cell growth protocols, and various harvesting methods.
Cellular Biology, Issue 45, cell culture, bioprocess, adherent, primary cell, HYPERStack, closed system, gas permeable, cell therapy, vaccine, scale up
Expired CO2 Measurement in Intubated or Spontaneously Breathing Patients from the Emergency Department
Institutions: Université Catholique de Louvain, Cliniques Universitaires Saint-Luc.
Carbon dioxide (CO2), along with oxygen (O2), is one of the most important gases in the human body. The measurement of expired CO2 at the mouth has attracted growing clinical interest among physicians in the emergency department for various indications: (1) surveillance and monitoring of the intubated patient; (2) verification of the correct positioning of an endotracheal tube; (3) monitoring of a patient in cardiac arrest; (4) achieving normocapnia in intubated head trauma patients; (5) monitoring ventilation during procedural sedation. The video allows physicians to familiarize themselves with the use of capnography, and the text offers a review of the theory and principles involved. In particular, the importance of CO2 for the organism, the relevance of measuring expired CO2, the differences between arterial and expired CO2, and the equipment used in capnography, with its artifacts and pitfalls, will be reviewed. Since the main reluctance to use expired CO2 measurement stems from incomplete knowledge of CO2 physiopathology among physicians, we hope that this explanation and the accompanying video sequences will help resolve this limitation.
Medicine, Issue 47, capnography, CO2, emergency medicine, end-tidal CO2
Microvascular Decompression: Salient Surgical Principles and Technical Nuances
Institutions: Vanderbilt University Medical Center, Vanderbilt University Medical Center.
Trigeminal neuralgia is a disorder associated with severe episodes of lancinating pain in the distribution of the trigeminal nerve. Previous reports indicate that 80-90% of cases are related to compression of the trigeminal nerve by an adjacent vessel. The majority of patients with trigeminal neuralgia eventually require surgical management in order to achieve remission of symptoms. Surgical options for management include ablative procedures (e.g., radiosurgery, percutaneous radiofrequency lesioning, balloon compression, glycerol rhizolysis, etc.) and microvascular decompression. Ablative procedures fail to address the root cause of the disorder and are less effective at preventing recurrence of symptoms over the long term than microvascular decompression. However, microvascular decompression is inherently more invasive than ablative procedures and is associated with increased surgical risks. Previous studies have demonstrated a correlation between surgeon experience and patient outcome in microvascular decompression. In this series of 59 patients operated on by two neurosurgeons (JSN and PEK) since 2006, 93% of patients demonstrated substantial improvement in their trigeminal neuralgia following the procedure—with follow-up ranging from 6 weeks to 2 years. Moreover, 41 of 66 patients (approximately 64%) have been entirely pain-free following the operation.
In this publication, video format is utilized to review the microsurgical pathology of this disorder. Steps of the operative procedure are reviewed and salient principles and technical nuances useful in minimizing complications and maximizing efficacy are discussed.
Medicine, Issue 53, microvascular, decompression, trigeminal, neuralgia, operation, video
Magnetic Resonance Imaging Quantification of Pulmonary Perfusion using Calibrated Arterial Spin Labeling
Institutions: University of California San Diego - UCSD, University of California San Diego - UCSD, University of California San Diego - UCSD.
This article demonstrates an MR imaging method to measure the spatial distribution of pulmonary blood flow in healthy subjects during normoxia (inspired O2 fraction, FIO2 = 0.21), hypoxia (FIO2 = 0.125), and hyperoxia (FIO2 = 1.00). In addition, the physiological responses of the subject are monitored in the MR scan environment. MR images were obtained on a 1.5 T GE MRI scanner during a breath hold from a sagittal slice in the right lung at functional residual capacity. An arterial spin labeling sequence (ASL-FAIRER) was used to measure the spatial distribution of pulmonary blood flow 1,2 and a multi-echo fast gradient echo (mGRE) sequence 3 was used to quantify the regional proton (i.e., H2O) density, allowing the quantification of density-normalized perfusion for each voxel (milliliters of blood per minute per gram of lung tissue).
With a pneumatic switching valve and a facemask equipped with a 2-way non-rebreathing valve, different oxygen concentrations were introduced to the subject in the MR scanner through the inspired gas tubing. A metabolic cart collected expiratory gas via the expiratory tubing. Mixed expiratory O2 concentrations, oxygen consumption, carbon dioxide production, respiratory exchange ratio, respiratory frequency, and tidal volume were measured. Heart rate and oxygen saturation were monitored using pulse oximetry.
Data obtained from a normal subject showed that, as expected, heart rate was higher in hypoxia (60 bpm) than during normoxia (51 bpm) or hyperoxia (50 bpm), and arterial oxygen saturation (SpO2) was reduced during hypoxia to 86%. Mean ventilation was 8.31 L/min BTPS during hypoxia, 7.04 L/min during normoxia, and 6.64 L/min during hyperoxia. Tidal volume was 0.76 L during hypoxia, 0.69 L during normoxia, and 0.67 L during hyperoxia.
Representative quantified ASL data showed a mean density-normalized perfusion of 8.86 ml/min/g during hypoxia, 8.26 ml/min/g during normoxia, and 8.46 ml/min/g during hyperoxia. In this subject, the relative dispersion 4, an index of global heterogeneity, was increased in hypoxia (1.07 during hypoxia vs. 0.85 during normoxia and 0.87 during hyperoxia), while the fractal dimension (Ds), another index of heterogeneity reflecting vascular branching structure, was unchanged (1.24 during hypoxia, 1.26 during normoxia, and 1.26 during hyperoxia).
Overview. This protocol will demonstrate the acquisition of data to measure the distribution of pulmonary perfusion noninvasively under conditions of normoxia, hypoxia, and hyperoxia using a magnetic resonance imaging technique known as arterial spin labeling (ASL).
Rationale: Measurement of pulmonary blood flow and lung proton density using MR techniques offers high-spatial-resolution images that can be quantified, together with the ability to perform repeated measurements under several different physiological conditions. In human studies, PET, SPECT, and CT are the commonly used alternative techniques. However, these techniques involve exposure to ionizing radiation and thus are not suitable for repeated measurements in human subjects.
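The quantities reported for this protocol reduce to two simple per-voxel computations: density-normalized perfusion (ASL perfusion divided by mGRE proton density) and relative dispersion, conventionally defined as standard deviation divided by mean. A minimal sketch with synthetic stand-in arrays (the real inputs are the registered ASL and mGRE maps):

```python
import numpy as np

def density_normalized_perfusion(perfusion, proton_density):
    """Per-voxel perfusion (ml blood/min) divided by per-voxel tissue mass (g),
    yielding ml/min/g as quoted in the results."""
    return np.asarray(perfusion, dtype=float) / np.asarray(proton_density, dtype=float)

def relative_dispersion(dnp):
    """Global heterogeneity index: SD / mean of the perfusion map.
    np.std defaults to the population SD (ddof=0)."""
    dnp = np.asarray(dnp, dtype=float)
    return float(np.std(dnp) / np.mean(dnp))
```

A perfectly uniform map gives a relative dispersion of 0; more heterogeneous maps give larger values. The fractal dimension Ds involves a multi-scale box-counting analysis of the vascular tree and is not sketched here.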
Medicine, Issue 51, arterial spin labeling, lung proton density, functional lung imaging, hypoxic pulmonary vasoconstriction, oxygen consumption, ventilation, magnetic resonance imaging
Test Samples for Optimizing STORM Super-Resolution Microscopy
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from normal, by building it up molecule by molecule, users face some significant challenges in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of 3 test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
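The 30-50 nm resolution figure is ultimately set by single-molecule localization precision, which in the standard single-molecule literature is estimated with the Thompson-Larson-Webb formula: σ² = s²/N + a²/(12N) + 8πs⁴b²/(a²N²), where s is the PSF standard deviation, a the pixel size, N the detected photons, and b the background noise per pixel. The sketch below uses illustrative example numbers; rainSTORM's internal quality metrics may be computed differently.

```python
import math

def localization_precision_nm(s_nm=150.0, pixel_nm=100.0, photons=1000.0, bg=10.0):
    """Thompson-Larson-Webb localization precision estimate (nm).

    s_nm: PSF standard deviation; pixel_nm: camera pixel size in sample space;
    photons: detected photons per localization; bg: background noise per pixel.
    Defaults are illustrative, not rainSTORM parameters.
    """
    var = (s_nm ** 2 / photons                                   # photon shot noise
           + pixel_nm ** 2 / (12.0 * photons)                    # pixelation
           + 8.0 * math.pi * s_nm ** 4 * bg ** 2
             / (pixel_nm ** 2 * photons ** 2))                   # background noise
    return math.sqrt(var)
```

With these example values the precision comes out near 10 nm, and it improves with brighter emitters (larger N), which is why dye choice and buffer conditions feature so prominently in the optimization steps above.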
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Synthesis of Hypervalent Iodonium Alkynyl Triflates for the Application of Generating Cyanocarbenes
Institutions: University of North Carolina at Greensboro.
The procedures described in this article involve the synthesis and isolation of hypervalent iodonium alkynyl triflates (HIATs) and their subsequent reactions with azides to form cyanocarbene intermediates. The synthesis of hypervalent iodonium alkynyl triflates can be facile, but difficulties stem from their isolation and reactivity. In particular, the necessity to use filtration under inert atmosphere at -45 °C for some HIATs requires special care and equipment. Once isolated, the compounds can be stored and used in reactions with azides to form cyanocarbene intermediates.
The evidence for cyanocarbene generation is shown by visible extrusion of dinitrogen as well as the characterization of products that occur from O-H insertion, sulfoxide complexation, and cyclopropanation. A side reaction of the cyanocarbene formation is the generation of a vinylidene-carbene and the conditions to control this process are discussed. There is also potential to form a hypervalent iodonium alkenyl triflate and the means of isolation and control of its generation are provided. The O-H insertion reaction involves using a HIAT, sodium azide or tetrabutylammonium azide, and methanol as solvent/substrate. The sulfoxide complexation reaction uses a HIAT, sodium azide or tetrabutylammonium azide, and dimethyl sulfoxide as solvent. The cyclopropanations can be performed with or without the use of solvent. The azide source must be tetrabutylammonium azide and the substrate shown is styrene.
Chemistry, Issue 79, Iodine Compounds, Azides, Hydrocarbons, Cyclic, Nitriles, Onium Compounds, Explosive Agents, chemistry (general), chemistry of compounds, chemistry of elements, Organic Chemicals, azides, carbenes, cyanides, hypervalent compounds, synthetic methods, organic
Conducting Miller-Urey Experiments
Institutions: Georgia Institute of Technology, Tokyo Institute of Technology, Institute for Advanced Study, NASA Johnson Space Center, NASA Goddard Space Flight Center, University of California at San Diego.
In 1953, Stanley Miller reported the production of biomolecules from simple gaseous starting materials, using an apparatus constructed to simulate the primordial Earth's atmosphere-ocean system. Miller introduced 200 ml of water, 100 mmHg of H2, 200 mmHg of CH4, and 200 mmHg of NH3 into the apparatus, then subjected this mixture, under reflux, to an electric discharge for a week while the water was simultaneously heated. The purpose of this manuscript is to provide the reader with a general experimental protocol that can be used to conduct a Miller-Urey type spark discharge experiment, using a simplified 3 L reaction flask. Since the experiment involves exposing flammable gases to a high-voltage electric discharge, it is worth highlighting important steps that reduce the risk of explosion. The general procedures described in this work can be extrapolated to design and conduct a wide variety of electric discharge experiments simulating primitive planetary environments.
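As a quick worked example, the quoted loading pressures can be converted into gas-phase mole fractions, since mole fraction equals partial pressure over total pressure. Water vapor is neglected here, an assumption for this sketch (the flask also held 200 ml of liquid water, whose vapor pressure adds to the total).

```python
# Partial pressures as loaded into the apparatus (mmHg), from the text above.
partial_mmHg = {"H2": 100, "CH4": 200, "NH3": 200}

total_mmHg = sum(partial_mmHg.values())  # 500 mmHg of reactive gases
mole_fraction = {gas: p / total_mmHg for gas, p in partial_mmHg.items()}
# -> H2: 0.2, CH4: 0.4, NH3: 0.4
```

Knowing the mole fractions is useful when adapting the protocol, e.g. when scaling the loading pressures to a different flask volume while keeping the same reducing-gas composition.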
Chemistry, Issue 83, Geosciences (General), Exobiology, Miller-Urey, Prebiotic chemistry, amino acids, spark discharge
Quantification of Heavy Metals and Other Inorganic Contaminants on the Productivity of Microalgae
Institutions: Utah State University.
Increasing demand for renewable fuels has researchers investigating the feasibility of alternative feedstocks, such as microalgae. Inherent advantages include high potential yield, use of non-arable land, and integration with waste streams. The nutrient requirements of a large-scale microalgae production system will require coupling cultivation systems with industrial waste resources, such as carbon dioxide from flue gas and nutrients from wastewater. Inorganic contaminants present in these wastes can potentially lead to bioaccumulation in microalgal biomass, negatively impacting productivity and limiting end use. This study focuses on the experimental evaluation of the impact and fate of 14 inorganic contaminants (As, Cd, Co, Cr, Cu, Hg, Mn, Ni, Pb, Sb, Se, Sn, V, and Zn) on Nannochloropsis salina growth. Microalgae were cultivated in photobioreactors illuminated at 984 µmol m-2 and maintained at pH 7 in a growth medium polluted with inorganic contaminants at levels expected based on the composition found in commercial coal flue gas systems. Contaminants present in the biomass and the medium at the end of a 7 day growth period were analytically quantified through cold vapor atomic absorption spectrometry for Hg and through inductively coupled plasma mass spectrometry for As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sb, Se, Sn, V, and Zn. Results show that N. salina is sensitive to this multi-metal environment, with a statistically significant decrease in biomass yield upon the introduction of these contaminants. The techniques presented here are adequate for quantifying algal growth and determining the fate of inorganic contaminants.
Environmental Sciences, Issue 101, algae, heavy metals, Nannochloropsis salina, photobioreactor, flue gas, inductively coupled plasma mass spectrometry, ICPMS, cold vapor atomic absorption spectrometry, CVAAS