Many molecular biology assays depend in some way on the polymerase chain reaction (PCR) to amplify an initially dilute target DNA sample to a detectable concentration. But the design of conventional PCR thermocycling hardware, predominantly based on massive metal heating blocks whose temperature is regulated by thermoelectric heaters, severely limits the achievable reaction speed [1]. Considerable electrical power is also required to repeatedly heat and cool the reagent mixture, limiting the ability to deploy these instruments in a portable format.
Thermal convection has emerged as a promising alternative thermocycling approach that has the potential to overcome these limitations [2-9]. Convective flows are an everyday occurrence in a diverse array of settings ranging from the Earth's atmosphere, oceans, and interior, to decorative and colorful lava lamps. Fluid motion is initiated in the same way in each case: a buoyancy-driven instability arises when a confined volume of fluid is subjected to a spatial temperature gradient. These same phenomena offer an attractive way to perform PCR thermocycling. By applying a static temperature gradient across an appropriately designed reactor geometry, a continuous circulatory flow can be established that will repeatedly transport PCR reagents through temperature zones associated with the denaturing, annealing, and extension stages of the reaction (Figure 1). Thermocycling can therefore be actuated in a pseudo-isothermal manner by simply holding two opposing surfaces at fixed temperatures, completely eliminating the need to repeatedly heat and cool the instrument.
One of the main challenges facing the design of convective thermocyclers is the need to precisely control the spatial velocity and temperature distributions within the reactor so that the reagents sequentially occupy the correct temperature zones for a sufficient period of time [10,11]. Here we describe results of our efforts to probe the full 3-D velocity and temperature distributions in microscale convective thermocyclers [12]. Unexpectedly, we have discovered a subset of complex flow trajectories that are highly favorable for PCR due to a synergistic combination of (1) continuous exchange among flow paths, which provides an enhanced opportunity for reagents to sample the full range of optimal temperature profiles, and (2) increased time spent within the extension temperature zone, the rate-limiting step of PCR. Extremely rapid DNA amplification times (under 10 min) are achievable in reactors designed to generate these flows.
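The buoyancy-driven instability described above can be checked with a simple dimensionless estimate: convection between two plates sets in when the Rayleigh number exceeds a critical value. The sketch below uses illustrative property values for water near PCR temperatures; all numbers are assumptions for the example, not the reactor parameters from this work.

```python
# First-pass design check for a convective thermocycler: compute the
# Rayleigh number for a fluid layer held between a hot and a cold plate.
# Property values approximate water near 70 °C (assumed, illustrative).
g = 9.81            # gravitational acceleration, m/s^2
beta = 5.8e-4       # thermal expansion coefficient, 1/K
nu = 4.1e-7         # kinematic viscosity, m^2/s
alpha = 1.6e-7      # thermal diffusivity, m^2/s
dT = 35.0           # plate temperature difference, K (e.g., 95 °C - 60 °C)
h = 3.0e-3          # reactor height, m

Ra = g * beta * dT * h**3 / (nu * alpha)
Ra_critical = 1708  # onset of convection between rigid horizontal plates

print(f"Ra = {Ra:.3g}, convective: {Ra > Ra_critical}")
```

A geometry whose Rayleigh number sits well above the critical value will sustain the continuous circulatory flow that carries reagents through the temperature zones.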
26 Related JoVE Articles
Investigating the Three-dimensional Flow Separation Induced by a Model Vocal Fold Polyp
Institutions: The George Washington University, Clarkson University.
The fluid-structure energy exchange process for normal speech has been studied extensively, but it is not well understood for pathological conditions. Polyps and nodules, which are geometric abnormalities that form on the medial surface of the vocal folds, can disrupt vocal fold dynamics and thus can have devastating consequences on a patient's ability to communicate. Our laboratory has reported particle image velocimetry (PIV) measurements, within an investigation of a model polyp located on the medial surface of an in vitro driven vocal fold model, which show that such a geometric abnormality considerably disrupts the glottal jet behavior. This flow field adjustment is a likely reason for the severe degradation of vocal quality in patients with polyps. A more complete understanding of the formation and propagation of vortical structures from a geometric protuberance, such as a vocal fold polyp, and the resulting influence on the aerodynamic loadings that drive the vocal fold dynamics, is necessary for advancing the treatment of this pathological condition. The present investigation concerns the three-dimensional flow separation induced by a wall-mounted prolate hemispheroid with a 2:1 aspect ratio in cross flow, i.e., a model vocal fold polyp, using an oil-film visualization technique. Unsteady, three-dimensional flow separation and its impact on the wall pressure loading are examined using skin friction line visualization and wall pressure measurements.
Bioengineering, Issue 84, oil-flow visualization, vocal fold polyp, three-dimensional flow separation, aerodynamic pressure loadings
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as the regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. The methodology is therefore useful not only for characterizing protein expression in plants but also for investigating other complex systems that lack a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
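The software-guided setup of experiment combinations can be illustrated with a minimal sketch that enumerates a two-level full-factorial design. The factor names and levels below are hypothetical stand-ins for the kinds of parameters discussed in the study (expression construct, plant age, incubation conditions), not the authors' actual design.

```python
# Enumerate a two-level full-factorial run sheet for a DoE study.
# Factors and levels are illustrative assumptions.
from itertools import product

factors = {
    "promoter":        ["35S", "nos"],   # regulatory element (hypothetical levels)
    "incubation_temp": [22, 25],         # deg C (hypothetical)
    "plant_age_days":  [35, 49],         # (hypothetical)
}

# Every combination of factor levels becomes one experimental run.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for run_id, run in enumerate(design, start=1):
    print(run_id, run)
print(f"{len(design)} runs for {len(factors)} factors at 2 levels")
```

In practice, DoE software prunes such full grids to a smaller optimal subset and augments the design step-wise, as the abstract describes; the full factorial is the conceptual starting point.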
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
In Situ SIMS and IR Spectroscopy of Well-defined Surfaces Prepared by Soft Landing of Mass-selected Ions
Institutions: Pacific Northwest National Laboratory.
Soft landing of mass-selected ions onto surfaces is a powerful approach for the highly-controlled preparation of materials that are inaccessible using conventional synthesis techniques. Coupling soft landing with in situ characterization using secondary ion mass spectrometry (SIMS) and infrared reflection absorption spectroscopy (IRRAS) enables analysis of well-defined surfaces under clean vacuum conditions. The capabilities of three soft-landing instruments constructed in our laboratory are illustrated for the representative system of surface-bound organometallics prepared by soft landing of mass-selected ruthenium tris(bipyridine) dications, [Ru(bpy)3]2+ (bpy = bipyridine), onto carboxylic acid terminated self-assembled monolayer surfaces on gold (COOH-SAMs). In situ time-of-flight (TOF)-SIMS provides insight into the reactivity of the soft-landed ions. In addition, the kinetics of charge reduction, neutralization, and desorption occurring on the COOH-SAM both during and after ion soft landing are studied using in situ Fourier transform ion cyclotron resonance (FT-ICR)-SIMS measurements. In situ IRRAS experiments provide insight into how the structure of organic ligands surrounding metal centers is perturbed through immobilization of organometallic ions on COOH-SAM surfaces by soft landing. Collectively, the three instruments provide complementary information about the chemical composition, reactivity, and structure of well-defined species supported on surfaces.
Chemistry, Issue 88, soft landing, mass selected ions, electrospray, secondary ion mass spectrometry, infrared spectroscopy, organometallic, catalysis
Nanomanipulation of Single RNA Molecules by Optical Tweezers
Institutions: University at Albany, State University of New York, University at Albany, State University of New York, University at Albany, State University of New York, University at Albany, State University of New York, University at Albany, State University of New York.
A large portion of the human genome is transcribed but not translated. In this post-genomic era, the regulatory functions of RNA have been shown to be increasingly important. As RNA function often depends on its ability to adopt alternative structures, it is difficult to predict RNA three-dimensional structures directly from sequence. Single-molecule approaches show potential to solve the problem of RNA structural polymorphism by monitoring molecular structures one molecule at a time. This work presents a method to precisely manipulate the folding and structure of single RNA molecules using optical tweezers. First, methods to synthesize molecules suitable for single-molecule mechanical work are described. Next, various calibration procedures to ensure the proper operation of the optical tweezers are discussed, and the experiments themselves are explained. To demonstrate the utility of the technique, results of mechanically unfolding RNA hairpins and a single RNA kissing complex are presented. In these examples, the nanomanipulation technique was used to study the folding of each structural domain, secondary and tertiary, independently. Lastly, the limitations and future applications of the method are discussed.
Bioengineering, Issue 90, RNA folding, single-molecule, optical tweezers, nanomanipulation, RNA secondary structure, RNA tertiary structure
A Coupled Experiment-finite Element Modeling Methodology for Assessing High Strain Rate Mechanical Response of Soft Biomaterials
Institutions: Mississippi State University, Mississippi State University.
This study offers a combined experimental and finite element (FE) simulation approach for examining the mechanical behavior of soft biomaterials (e.g., brain, liver, tendon, fat) when exposed to high strain rates. This study utilized a Split-Hopkinson Pressure Bar (SHPB) to generate strain rates of 100-1,500 sec⁻¹. The SHPB employed a striker bar consisting of a viscoelastic material (polycarbonate). A sample of the biomaterial was obtained shortly postmortem and prepared for SHPB testing. The specimen was interposed between the incident and transmitted bars, and the pneumatic components of the SHPB were activated to drive the striker bar toward the incident bar. The resulting impact generated a compressive stress wave (i.e., the incident wave) that traveled through the incident bar. When the compressive stress wave reached the end of the incident bar, a portion continued forward through the sample and transmitted bar (i.e., the transmitted wave) while another portion reversed through the incident bar as a tensile wave (i.e., the reflected wave). These waves were measured using strain gages mounted on the incident and transmitted bars. The true stress-strain behavior of the sample was determined from equations based on wave propagation and dynamic force equilibrium. The experimental stress-strain response was three-dimensional in nature because the specimen bulged. As such, the hydrostatic stress (first invariant) was used to generate the stress-strain response. In order to extract the uniaxial (one-dimensional) mechanical response of the tissue, an iterative coupled optimization was performed using experimental results and Finite Element Analysis (FEA), which contained an Internal State Variable (ISV) material model used for the tissue. The ISV material model used in the FE simulations of the experimental setup was iteratively calibrated (i.e., optimized) to the experimental data such that the experiment and FEA strain gage values and first invariants of stress were in good agreement.
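The wave-propagation data reduction mentioned above is conventionally done with the classical (Kolsky) one-wave equations: specimen stress follows from the transmitted-bar gage signal, and specimen strain from the time-integrated reflected wave. The sketch below uses synthetic gage signals and assumed bar properties; it illustrates the standard equations, not this study's specific processing.

```python
# One-wave SHPB data reduction (Kolsky equations), with synthetic signals.
# Bar properties and waveforms are illustrative assumptions.
import numpy as np

E_bar = 2.4e9        # bar elastic modulus, Pa (polycarbonate, assumed)
c0 = 1.4e3           # bar wave speed, m/s (assumed)
A_bar = 3.0e-4       # bar cross-sectional area, m^2
A_spec = 2.0e-4      # specimen cross-sectional area, m^2
L_spec = 5.0e-3      # specimen gage length, m

t = np.linspace(0, 200e-6, 400)              # time base, s
# Synthetic gage records: reflected wave is tensile (negative) in a
# compression test; transmitted wave carries the specimen load.
eps_r = -1e-3 * np.sin(np.pi * t / 200e-6)   # reflected-wave strain
eps_t = 5e-4 * np.sin(np.pi * t / 200e-6)    # transmitted-wave strain

strain_rate = -2.0 * c0 / L_spec * eps_r               # specimen strain rate, 1/s
strain = np.cumsum(strain_rate) * (t[1] - t[0])        # engineering strain
stress = E_bar * A_bar / A_spec * eps_t                # specimen stress, Pa
print(f"peak strain rate: {strain_rate.max():.0f} 1/s")
```

With these assumed values the peak strain rate falls inside the 100-1,500 sec⁻¹ window quoted in the abstract; real processing additionally requires dispersion correction and a check of dynamic force equilibrium.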
Bioengineering, Issue 99, Split-Hopkinson Pressure Bar, High Strain Rate, Finite Element Modeling, Soft Biomaterials, Dynamic Experiments, Internal State Variable Modeling, Brain, Liver, Tendon, Fat
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods is currently available for determining the dissociation constant between a protein and an interacting small molecule. However, most of these require access to specialist equipment and often demand a degree of expertise to establish reliable experiments and analyze the data. Differential scanning fluorimetry (DSF) is increasingly used as a robust method for the initial screening of proteins for interacting small molecules, either to identify physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
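The first step of any DSF analysis is extracting a melting temperature (Tm) from each fluorescence melt curve; dissociation constants are then inferred from how Tm shifts with ligand concentration. The sketch below fits a Boltzmann sigmoid to one synthetic melt curve to recover Tm. The data and parameter values are assumptions for illustration, not the authors' datasets or fitting models.

```python
# Fit a Boltzmann sigmoid to a (synthetic) DSF melt curve to extract Tm.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_min, F_max, Tm, slope):
    """Sigmoidal unfolding transition; Tm is the melting midpoint (deg C)."""
    return F_min + (F_max - F_min) / (1.0 + np.exp((Tm - T) / slope))

T = np.linspace(25, 95, 141)                       # temperature, deg C
true = boltzmann(T, 100, 1000, 52.0, 1.8)          # synthetic curve (Tm = 52)
rng = np.random.default_rng(0)
F = true + rng.normal(0, 5, T.size)                # add measurement noise

popt, _ = curve_fit(boltzmann, T, F, p0=[F.min(), F.max(), 60, 2])
print(f"fitted Tm = {popt[2]:.2f} deg C")
```

Real DSF traces also show a post-transition fluorescence decay, so in practice the fit is restricted to the rising transition region before this step is repeated across the ligand titration.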
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
The Preparation of Electrohydrodynamic Bridges from Polar Dielectric Liquids
Institutions: Wetsus - Centre of Excellence for Sustainable Water Technology, IRCAM GmbH, Graz University of Technology.
Horizontal and vertical liquid bridges are simple and powerful tools for exploring the interaction of high-intensity electric fields (8-20 kV/cm) and polar dielectric liquids. These bridges differ from capillary bridges in that they exhibit extensibility beyond a few millimeters, have complex bi-directional mass transfer patterns, and emit non-Planck infrared radiation. A number of common solvents can form such bridges, as can low-conductivity solutions and colloidal suspensions. The macroscopic behavior is governed by electrohydrodynamics and provides a means of studying fluid flow phenomena without the presence of rigid walls. Prior to the onset of a liquid bridge, several important phenomena can be observed, including advancing meniscus height (electrowetting), bulk fluid circulation (the Sumoto effect), and the ejection of charged droplets (electrospray). The interaction between surface, polarization, and displacement forces can be directly examined by varying applied voltage and bridge length. The electric field, assisted by gravity, stabilizes the liquid bridge against Rayleigh-Plateau instabilities. The construction of basic apparatus for both vertical and horizontal orientations, along with operational examples, including thermographic images, for three liquids (water, DMSO, and glycerol) is presented.
Physics, Issue 91, floating water bridge, polar dielectric liquids, liquid bridge, electrohydrodynamics, thermography, dielectrophoresis, electrowetting, Sumoto effect, Armstrong effect
Contrast Imaging in Mouse Embryos Using High-frequency Ultrasound
Institutions: University of Toronto, Sunnybrook Research Institute, Mount Sinai Hospital, Toronto.
Ultrasound contrast-enhanced imaging can convey essential quantitative information regarding tissue vascularity and perfusion and, in targeted applications, facilitate the detection and measurement of vascular biomarkers at the molecular level. Within the mouse embryo, this noninvasive technique may be used to uncover basic mechanisms underlying vascular development in the early mouse circulatory system and in genetic models of cardiovascular disease. The mouse embryo also presents an excellent model for studying the adhesion of microbubbles to angiogenic targets (including vascular endothelial growth factor receptor 2 (VEGFR2) or αv) and for assessing the quantitative nature of molecular ultrasound. We therefore developed a method to introduce ultrasound contrast agents into the vasculature of living, isolated embryos. This allows freedom in terms of injection control and positioning, reproducibility of the imaging plane without obstruction and motion, and simplified image analysis and quantification. Late gestational stage (embryonic day (E)16.5 and E17.5) murine embryos were isolated from the uterus, gently exteriorized from the yolk sac, and microbubble contrast agents were injected into veins accessible on the chorionic surface of the placental disc. Nonlinear contrast ultrasound imaging was then employed to collect a number of basic perfusion parameters (peak enhancement, wash-in rate, and time to peak) and to quantify targeted microbubble binding in an endoglin mouse model. We show the successful circulation of microbubbles within living embryos and the utility of this approach in characterizing embryonic vasculature and microbubble behavior.
Developmental Biology, Issue 97, Micro-ultrasound, Molecular imaging, Mouse embryo, Microbubble, Ultrasound contrast agent, Perfusion
Methods for Characterizing the Co-development of Biofilm and Habitat Heterogeneity
Institutions: Northwestern University, Northwestern University, Northwestern University.
Biofilms are surface-attached microbial communities that have complex structures and produce significant spatial heterogeneities. Biofilm development is strongly regulated by the surrounding flow and nutritional environment. Biofilm growth also increases the heterogeneity of the local microenvironment by generating complex flow fields and solute transport patterns. To investigate the development of heterogeneity in biofilms and interactions between biofilms and their local micro-habitat, we grew mono-species biofilms of Pseudomonas aeruginosa and dual-species biofilms of P. aeruginosa and Escherichia coli under nutritional gradients in a microfluidic flow cell. We provide detailed protocols for creating nutrient gradients within the flow cell and for growing and visualizing biofilm development under these conditions. We also present protocols for a series of optical methods to quantify spatial patterns in biofilm structure, flow distributions over biofilms, and mass transport around and within biofilm colonies. These methods support comprehensive investigations of the co-development of biofilm and habitat heterogeneity.
Bioengineering, Issue 97, microfluidic flow cell, chemical gradient, biofilm development, particle tracking, flow characterization, fluorescent tracer, solute transport
Ultrasound Velocity Measurement in a Liquid Metal Electrode
Institutions: University of Rochester.
A growing number of electrochemical technologies depend on fluid flow, and often that fluid is opaque. Measuring the flow of an opaque fluid is inherently more difficult than measuring the flow of a transparent fluid, since optical methods are not applicable. Ultrasound can be used to measure the velocity of an opaque fluid, not only at isolated points, but also at hundreds or thousands of points arrayed along lines, with good temporal resolution. When applied to a liquid metal electrode, ultrasound velocimetry involves additional challenges: high temperature, chemical activity, and electrical conductivity. Here we describe the experimental apparatus and methods that overcome these challenges and allow the measurement of flow in a liquid metal electrode, as it conducts current, at operating temperature. Temperature is regulated within ±2 °C using a proportional-integral-derivative (PID) controller that powers a custom-built furnace. Chemical activity is managed by choosing vessel materials carefully and enclosing the experimental setup in an argon-filled glovebox. Finally, unintended electrical paths are carefully prevented. An automated system logs control settings and experimental measurements, using hardware trigger signals to synchronize devices. This apparatus and these methods can produce measurements that are impossible with other techniques, and allow optimization and control of electrochemical technologies like liquid metal batteries.
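The ±2 °C regulation described above is the job of the PID controller. The sketch below shows a minimal discrete PID loop driving a crude first-order thermal plant; the gains, setpoint, and plant coefficients are illustrative assumptions, not the study's tuned values.

```python
# Minimal discrete PID temperature control loop with a toy thermal plant.
# All gains and plant coefficients are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=8.0, ki=0.5, kd=1.0, dt=1.0)
temp, dt = 25.0, 1.0          # start at ambient, 1 s control step
for _ in range(600):
    power = max(0.0, pid.update(500.0, temp))   # heater power, W (clamped >= 0)
    # Toy plant: heats with applied power, relaxes toward 25 °C ambient.
    temp += dt * (0.02 * power - 0.01 * (temp - 25.0))
print(f"final temperature: {temp:.1f} °C")
```

The integral term removes the steady-state offset a proportional-only controller would leave, which is what makes tight (±2 °C) regulation of a lossy furnace possible.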
Engineering, Issue 102, batteries, energy storage, magnetohydrodynamics, fluid dynamics, ultrasound velocimetry, electrochemistry
Thermal Measurement Techniques in Analytical Microfluidic Devices
Institutions: Marquette University.
Thermal measurement techniques have been used for many applications, such as the thermal characterization of materials and the detection of chemical reactions. Micromachining techniques reduce the thermal mass of fabricated structures and make it possible to perform high-sensitivity thermal measurements in micro-scale and nano-scale devices. Combining thermal measurement techniques with microfluidic devices allows different analytical measurements to be performed with low sample consumption and reduced measurement time by integrating the miniaturized system on a single chip. The procedures of thermal measurement techniques for particle detection, material characterization, and chemical detection are introduced in this paper.
Engineering, Issue 100, Thermal Particle Detection, Thermal Wave Analysis, Heat Penetration Time, Thermal Time Constant, Enthalpy Assay, Thermal Conductivity and Specific Heat
Evaluating Plasmonic Transport in Current-carrying Silver Nanowires
Institutions: Université de Bourgogne, University of Science and Technology of China, CEMES, CNRS-UPR 8011.
Plasmonics is an emerging technology capable of simultaneously transporting a plasmonic signal and an electronic signal on the same information support [1,2,3]. In this context, metal nanowires are especially desirable for realizing dense routing networks [4]. A prerequisite to operating such a shared nanowire-based platform is the ability to electrically contact individual metal nanowires and efficiently excite surface plasmon polaritons [5] in this information support. In this article, we describe a protocol for bringing electrical terminals to chemically synthesized silver nanowires [6] randomly distributed on a glass substrate [7]. The positions of the nanowire ends with respect to predefined landmarks are precisely located using standard optical transmission microscopy before encapsulation in an electron-sensitive resist. Trenches representing the electrode layout are subsequently designed by electron-beam lithography. Metal electrodes are then fabricated by thermally evaporating a Cr/Au layer followed by a chemical lift-off. The contacted silver nanowires are finally transferred to a leakage radiation microscope for surface plasmon excitation and characterization [8,9]. Surface plasmons are launched in the nanowires by focusing a near-infrared laser beam on a diffraction-limited spot overlapping one nanowire extremity [5,9]. For sufficiently large nanowires, the surface plasmon mode leaks into the glass substrate [9,10]. This leakage radiation is readily detected, imaged, and analyzed in the different conjugate planes in leakage radiation microscopy [9,11]. The electrical terminals do not affect the plasmon propagation. However, a current-induced morphological deterioration of the nanowire drastically degrades the flow of surface plasmons. The combination of surface plasmon leakage radiation microscopy with a simultaneous analysis of the nanowire's electrical transport characteristics reveals the intrinsic limitations of such plasmonic circuitry.
Physics, Issue 82, light transmission, optical waveguides, photonics, plasma oscillations, plasma waves, electron motion in conductors, nanofabrication, Information Transport, plasmonics, Silver Nanowires, Leakage radiation microscopy, Electromigration
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
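The core calculation behind the oscillation technique can be stated compactly: a prosthesis swung as a physical pendulum has a moment of inertia about the pivot fixed by its period, mass, and pivot-to-center-of-mass distance, and the parallel-axis theorem then gives the inertia about the center of mass. The numbers below are illustrative assumptions, not measured values from the study.

```python
# Moment of inertia of a below-knee prosthesis from pendulum oscillation.
# Input values are illustrative assumptions.
import math

m = 1.2      # prosthesis mass, kg
d = 0.18     # distance from pivot to center of mass, m
T = 1.05     # mean oscillation period, s
g = 9.81     # gravitational acceleration, m/s^2

# Physical pendulum: T = 2*pi*sqrt(I_pivot / (m*g*d))
I_pivot = m * g * d * T**2 / (4.0 * math.pi**2)
# Parallel-axis theorem recovers the inertia about the center of mass.
I_com = I_pivot - m * d**2
print(f"I about COM: {I_com:.4f} kg m^2")
```

In practice the period is averaged over many small-amplitude swings, and the center-of-mass distance comes from the companion reaction-board measurement.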
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample, with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With this approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging systems. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe the use of PAFP and PSFP expression to image two protein species in fixed cells.
Extension of the technique to living cells is also described.
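The ~10-30 nm localization precision quoted above can be estimated with the widely used Thompson et al. approximation, which combines photon shot noise, pixelation, and background. The parameter values in this sketch are illustrative assumptions, not measurements from this protocol.

```python
# Thompson-style localization precision estimate for a single molecule.
# All parameter values are illustrative assumptions.
import math

s = 125.0    # standard deviation of the PSF, nm
a = 100.0    # effective pixel size at the sample, nm
N = 200.0    # detected photons per molecule
b = 2.0      # background noise per pixel, photons

# sigma^2 = (s^2 + a^2/12)/N + 8*pi*s^4*b^2 / (a^2*N^2)
sigma2 = (s**2 + a**2 / 12.0) / N \
         + 8.0 * math.pi * s**4 * b**2 / (a**2 * N**2)
sigma = math.sqrt(sigma2)
print(f"localization precision ≈ {sigma:.1f} nm")
```

The first term (shot noise plus pixelation) dominates for bright molecules; the background term explains why dim emitters or autofluorescent samples localize far worse.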
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Increasing cDNA Yields from Single-cell Quantities of mRNA in Standard Laboratory Reverse Transcriptase Reactions using Acoustic Microstreaming
Institutions: University of Melbourne, CSIRO Materials Science and Engineering, Faculty of Engineering and Industrial Sciences.
Correlating gene expression with cell behavior is ideally done at the single-cell level. However, this is not easily achieved because the small amount of labile mRNA present in a single cell (1-5% of 1-50 pg total RNA, or 0.01-2.5 pg mRNA, per cell [1]) mostly degrades before it can be reverse transcribed into a stable cDNA copy. For example, using standard laboratory reagents and hardware, only a small number of genes can be qualitatively assessed per cell [2]. One way to increase the efficiency of standard laboratory reverse transcriptase (RT) reactions (i.e., standard reagents in microliter volumes) comprising single-cell amounts of mRNA would be to more rapidly mix the reagents so the mRNA can be converted to cDNA before it degrades. However, this is not trivial because at microliter scales liquid flow is laminar: currently available methods of mixing (shaking, vortexing, and trituration) fail to produce sufficient chaotic motion to effectively mix reagents. To solve this problem, micro-scale mixing techniques must be used [3,4]. A number of microfluidic mixing technologies have been developed that successfully increase RT reaction yields [5-8]. However, microfluidic technologies require specialized hardware that is relatively expensive and not yet widely available. A cheaper, more convenient solution is desirable. The main objective of this study is to demonstrate how application of a novel "micromixing" technique to standard laboratory RT reactions comprising single-cell quantities of mRNA significantly increases their cDNA yields. We find cDNA yields increase by approximately 10-100-fold, which enables: (1) greater numbers of genes to be analyzed per cell; (2) more quantitative analysis of gene expression; and (3) better detection of low-abundance genes in single cells. The micromixing is based on acoustic microstreaming [9-12], a phenomenon where sound waves propagating around a small obstacle create a mean flow near the obstacle. We have developed an acoustic microstreaming-based device ("micromixer") with a key simplification: acoustic microstreaming can be achieved at audio frequencies by ensuring the system has a liquid-air interface with a small radius of curvature [13]. The meniscus of a microliter volume of solution in a tube provides an appropriately small radius of curvature. The use of audio frequencies means that the hardware can be inexpensive and versatile [13], and nucleic acids and other biochemical reagents are not damaged as they can be with standard laboratory sonicators.
Bioengineering, Issue 53, neuroscience, brain, cells, reverse transcription, qPCR, gene expression, acoustic microstreaming, micromixer, microfluidics
Quantifying Agonist Activity at G Protein-coupled Receptors
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors.
Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb
) is much greater than that for the inactive state (Ka
). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs
), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε
) (Figure 2).
Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs
and relative efficacy of an agonist 1,2
. In this report, we show how to modify this analysis to estimate the agonist Kb
value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb
in absolute units of M-1.
Our method of analyzing agonist concentration-response curves 3,4
consists of global nonlinear regression using the operational model 5
. We describe a procedure using the software application Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs
and a parameter proportional to efficacy (τ
). The estimate of τKobs
of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6
. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys
). In this case, the Kb
value of an agonist is equivalent to τKobs/τsys 3.
Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
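The relative-activity arithmetic described above can be sketched in a few lines. This is an illustrative reduction of the analysis, not the Prism workflow itself; the τ and Kobs values used below are hypothetical fitted parameters, not data from the article.

```python
def relative_activity(tau_test, kobs_test, tau_ref, kobs_ref):
    """RAi: the tau*Kobs product of a test agonist divided by that of a
    reference agonist, a relative measure of the active-state affinity
    constant Kb."""
    return (tau_test * kobs_test) / (tau_ref * kobs_ref)

def kb_absolute(tau, kobs, tau_sys):
    """For assays exhibiting constitutive activity (quantified by tau_sys),
    Kb in absolute units (M^-1) is tau*Kobs divided by tau_sys."""
    return tau * kobs / tau_sys
```

With hypothetical fits of τ = 2, Kobs = 10^7 M^-1 for a test agonist and τ = 4, Kobs = 10^7 M^-1 for the reference, RAi comes out to 0.5, i.e. the test agonist has half the active-state affinity of the reference.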
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
Application of MassSQUIRM for Quantitative Measurements of Lysine Demethylase Activity
Institutions: University of Arkansas for Medical Sciences.
Recently, epigenetic regulators have been discovered as key players in many different diseases 1-3
. As a result, these enzymes are prime targets for small molecule studies and drug development 4
. Many epigenetic regulators have only recently been discovered and are still being classified. Among these enzymes are the lysine demethylases, which remove methyl groups from lysines on histones and other proteins. Because this class of enzymes is so new, few assays have been developed to study their activity, which has been a roadblock to both the classification and the high-throughput study of histone demethylases. The few demethylase assays that do exist tend to be qualitative in nature and cannot simultaneously discern between the different lysine methylation states (un-, mono-, di- and tri-). Mass spectrometry is commonly used to determine demethylase activity, but current mass spectrometric assays do not address whether differentially methylated peptides ionize differently. Differential ionization of methylated peptides makes comparing methylation states difficult and certainly not quantitative (Figure 1A). Thus, available assays are not optimized for the comprehensive analysis of demethylase activity.
Here we describe a method called MassSQUIRM (mass spectrometric quantitation using isotopic reductive methylation), which is based on reductive methylation of amine groups with deuterated formaldehyde to force all lysines to be di-methylated, making them essentially the same chemical species so that they ionize identically (Figure 1B). The only chemical difference following reductive methylation is between hydrogen and deuterium, which does not affect MALDI ionization efficiencies. The MassSQUIRM assay is specific for demethylase reaction products with un-, mono- or di-methylated lysines. The assay is also applicable to lysine methyltransferases giving the same reaction products. Here, we use a combination of reductive methylation chemistry and MALDI mass spectrometry to measure the activity of LSD1, a lysine demethylase capable of removing di- and mono-methyl groups, on a synthetic peptide substrate 5
. This assay is simple and easily amenable to any lab with access to a MALDI mass spectrometer in-house or through a proteomics facility. The assay has ~8-fold dynamic range and is readily scalable to plate format 5.
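Because the labeling step forces every product to ionize identically, the abundance of each methylation state follows directly from the relative MALDI peak intensities. A minimal sketch of that final quantitation step is below; the peak intensities are hypothetical, and real spectra would of course need baseline correction and peak integration first.

```python
def methylation_fractions(intensities):
    """Given integrated MALDI peak intensities for the un-, mono- and
    di-methylated reaction products (rendered chemically equivalent by
    reductive methylation with d2- vs. h2-formaldehyde), return the
    relative abundance of each methylation state."""
    total = sum(intensities.values())
    return {state: i / total for state, i in intensities.items()}
```

For example, hypothetical peak intensities of 2 : 3 : 5 for un-, mono- and di-methylated products would yield fractions of 0.2, 0.3 and 0.5.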
Molecular Biology, Issue 61, LSD1, lysine demethylase, mass spectrometry, reductive methylation, demethylase quantification
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles.
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus
, consequently the name Taq polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
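One piece of the optimization arithmetic alluded to above (primer melting temperature and annealing temperature) can be illustrated with the classic Wallace rule, Tm = 2 °C per A/T plus 4 °C per G/C. This is a generic rule of thumb for short primers, not the specific method of this protocol, and the primer sequences below are hypothetical.

```python
def wallace_tm(primer: str) -> int:
    """Estimate a primer melting temperature (deg C) with the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C). Reasonable only for short primers (< ~14 nt)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def suggested_annealing(primer_fwd: str, primer_rev: str) -> int:
    """A common starting point: anneal ~5 deg C below the lower primer Tm."""
    return min(wallace_tm(primer_fwd), wallace_tm(primer_rev)) - 5
```

For the hypothetical 12-mer "AGCGTAGCCTTA" (6 A/T, 6 G/C) this gives Tm = 36 °C; longer primers and salt-corrected formulas are used in practice.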
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Echo Particle Image Velocimetry
Institutions: University of New Hampshire.
The transport of mass, momentum, and energy in fluid flows is ultimately determined by spatiotemporal distributions of the fluid velocity field.1
Consequently, a prerequisite for understanding, predicting, and controlling fluid flows is the capability to measure the velocity field with adequate spatial and temporal resolution.2
For velocity measurements in optically opaque fluids or through optically opaque geometries, echo particle image velocimetry (EPIV) is an attractive diagnostic technique to generate "instantaneous" two-dimensional fields of velocity.3,4,5,6
In this paper, the operating protocol for an EPIV system built by integrating a commercial medical ultrasound machine7
with a PC running commercial particle image velocimetry (PIV) software8
is described, and validation measurements in Hagen-Poiseuille (i.e.
, laminar pipe) flow are reported.
For the EPIV measurements, a phased array probe connected to the medical ultrasound machine is used to generate a two-dimensional ultrasound image by pulsing the piezoelectric probe elements at different times. Each probe element transmits an ultrasound pulse into the fluid, and tracer particles in the fluid (either naturally occurring or seeded) reflect ultrasound echoes back to the probe, where they are recorded. The amplitude of the reflected ultrasound waves and their time delay relative to transmission are used to create what is known as B-mode (brightness mode) two-dimensional ultrasound images. Specifically, the time delay is used to determine the position of the scatterer in the fluid and the amplitude is used to assign intensity to the scatterer. The time required to obtain a single B-mode image, δt
, is determined by the time it takes to pulse all the elements of the phased array probe. For acquiring multiple B-mode images, the frame rate of the system in frames per second (fps) = 1/δt
. (See 9 for a review of ultrasound imaging.)
For a typical EPIV experiment, the frame rate is between 20-60 fps, depending on flow conditions, and 100-1000 B-mode images of the spatial distribution of the tracer particles in the flow are acquired. Once acquired, the B-mode ultrasound images are transmitted via an ethernet connection to the PC running the PIV commercial software. Using the PIV software, tracer particle displacement fields, D(x,y)
[pixels], (where x and y denote horizontal and vertical spatial position in the ultrasound image, respectively) are acquired by applying cross correlation algorithms to successive ultrasound B-mode images.10
The velocity fields, u(x,y)
[m/s], are determined from the displacements fields, knowing the time step between image pairs, ΔT
[s], and the image magnification, M
. The time step between images ΔT
= 1/fps + D(x,y)/B
, where B
[pixels/s] is the rate at which the ultrasound beam sweeps across the image width. In the present study, M = 77 [μm/pixel], fps
= 49.5 [1/s], and B
= 25,047 [pixels/s]. Once acquired, the velocity fields can be analyzed to compute flow quantities of interest.
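The displacement-to-velocity conversion defined above is simple enough to sketch directly, using the parameter values reported in the study. The displacement field here is a synthetic placeholder, not measured data.

```python
import numpy as np

# Parameters reported for the present study
M   = 77e-6    # image magnification [m/pixel]
FPS = 49.5     # frame rate [1/s]
B   = 25047.0  # sweep rate of the ultrasound beam across the image [pixels/s]

def velocity_field(D):
    """Convert a PIV displacement field D(x, y) [pixels] into a velocity
    field u(x, y) [m/s], correcting the inter-frame time step for the
    finite beam sweep time: dT = 1/fps + D/B."""
    D = np.asarray(D, dtype=float)
    dT = 1.0 / FPS + D / B   # effective per-pixel time step [s]
    return M * D / dT        # u = M * D / dT  [m/s]
```

For a uniform 10-pixel displacement this yields roughly 0.037 m/s; note that the sweep-time correction D/B is about 2% of 1/fps here, small but not negligible.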
Mechanical Engineering, Issue 70, Physics, Engineering, Physical Sciences, Ultrasound, cross correlation, velocimetry, opaque fluids, particle, flow, fluid, EPIV
Simulation, Fabrication and Characterization of THz Metamaterial Absorbers
Institutions: University of Glasgow.
Metamaterials (MM), artificial materials engineered to have properties that may not be found in nature, have been widely explored since the first theoretical1
and experimental demonstration2
of their unique properties. MMs can provide a highly controllable electromagnetic response, and to date have been demonstrated in every technologically relevant spectral range including the optical3
, near IR4
, and mid IR5
bands. Applications include perfect lenses10
and invisibility cloaks13
. We have recently developed single band16
and dual band17
THz metamaterial absorber devices capable of greater than 80% absorption at the resonance peak. The concept of a MM absorber is especially important at THz frequencies where it is difficult to find strong frequency selective THz absorbers19
. In our MM absorber the THz radiation is absorbed in a thickness of ~ λ/20, overcoming the thickness limitation of traditional quarter wavelength absorbers. MM absorbers naturally lend themselves to THz detection applications, such as thermal sensors, and if integrated with suitable THz sources (e.g.
QCLs), could lead to compact, highly sensitive, low cost, real time THz imaging systems.
Materials Science, Issue 70, Physics, Engineering, Metamaterial, terahertz, sensing, fabrication, clean room, simulation, FTIR, spectroscopy
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
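The MSE computation described above can be sketched compactly. This is one common formulation (coarse-graining of the signal followed by sample entropy at each scale, with the tolerance r fixed from the original signal's standard deviation); it is an illustration of the method, not the authors' exact implementation, and the test signal is synthetic.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B): B counts template matches of length m, A of
    length m+1, with Chebyshev tolerance r (default 0.2 * std)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2):
    """MSE curve: sample entropy of the coarse-grained signal per scale."""
    r = 0.2 * np.asarray(x, dtype=float).std()  # tolerance held fixed across scales
    return [sample_entropy(coarse_grain(x, s), m=m, r=r) for s in scales]
```

Plotting the resulting curve against scale is what distinguishes, for example, white noise (entropy falling with scale) from signals with long-range temporal structure.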
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Fabrication And Characterization Of Photonic Crystal Slow Light Waveguides And Cavities
Institutions: University of St Andrews.
Slow light has been one of the hot topics in the photonics community in the past decade, generating great interest both from a fundamental point of view and for its considerable potential for practical applications. Slow light photonic crystal waveguides, in particular, have played a major part and have been successfully employed for delaying optical signals1-4
and the enhancement of both linear5-7
and nonlinear devices.8-11
Photonic crystal cavities achieve similar effects to those of slow light waveguides, but over a reduced bandwidth. These cavities offer a high Q-factor/volume ratio for the realization of optically12
pumped ultra-low threshold lasers and the enhancement of nonlinear effects.14-16
Furthermore, passive filters17
have been demonstrated, exhibiting ultra-narrow line-width, high free-spectral range and record values of low energy consumption.
To attain these exciting results, a robust repeatable fabrication protocol must be developed. In this paper we take an in-depth look at our fabrication protocol which employs electron-beam lithography for the definition of photonic crystal patterns and uses wet and dry etching techniques. Our optimised fabrication recipe results in photonic crystals that do not suffer from vertical asymmetry and exhibit very good edge-wall roughness. We discuss the results of varying the etching parameters and the detrimental effects that they can have on a device, leading to a diagnostic route that can be taken to identify and eliminate similar issues.
The key to evaluating slow light waveguides is the passive characterization of transmission and group index spectra. Various methods have been reported, most notably resolving the Fabry-Perot fringes of the transmission spectrum20-21
and interferometric techniques.22-25
Here, we describe a direct, broadband measurement technique combining spectral interferometry with Fourier transform analysis.26
Our method stands out for its simplicity and power: we can characterise a bare photonic crystal with access waveguides, without the need for on-chip interference components, and the setup consists only of a Mach-Zehnder interferometer, with no moving parts or delay scans.
When characterising photonic crystal cavities, techniques involving internal sources21
or external waveguides directly coupled to the cavity27
impact the performance of the cavity itself, thereby distorting the measurement. Here, we describe a novel and non-intrusive technique known as resonant scattering (RS), which makes use of a cross-polarised probe beam coupled out-of-plane into the cavity through an objective. The technique was first demonstrated by McCutcheon et al.28
and further developed by Galli et al.29.
Physics, Issue 69, Optics and Photonics, Astronomy, light scattering, light transmission, optical waveguides, photonics, photonic crystals, Slow-light, Cavities, Waveguides, Silicon, SOI, Fabrication, Characterization
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via
quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
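The first stage of the pipeline above, probing oriented tissue patterns with a bank of Gabor filters, can be sketched as follows. This is a generic illustration of Gabor-based orientation estimation, not the authors' implementation; the kernel parameters (wavelength 8 px, envelope width 4 px, four orientations) are arbitrary choices for the demonstration.

```python
import numpy as np

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=21):
    """Even (cosine) and odd (sine) Gabor kernels at orientation theta,
    with wavelength lam and isotropic Gaussian envelope sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # axis of oscillation
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / lam), env * np.sin(2 * np.pi * xr / lam)

def dominant_orientation(patch, angles_deg=(0, 45, 90, 135)):
    """Return the angle whose quadrature Gabor energy is largest; the
    even/odd pair makes the energy insensitive to the texture's phase."""
    energies = []
    for a in angles_deg:
        even, odd = gabor_kernel(np.deg2rad(a), size=patch.shape[0])
        energies.append(np.hypot((even * patch).sum(), (odd * patch).sum()))
    return angles_deg[int(np.argmax(energies))]
```

In the full method such per-site orientation estimates feed the phase-portrait analysis that flags node-like radiating patterns.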
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Synthesis and Microdiffraction at Extreme Pressures and Temperatures
Institutions: University of Nevada, Las Vegas, University of Chicago, Carnegie Institution of Washington.
High pressure compounds and polymorphs are investigated for a broad range of purposes, such as determining the structures and processes of deep planetary interiors, designing materials with novel properties, and understanding the mechanical behavior of materials exposed to very high stresses, as in explosions or impacts. Synthesis and structural analysis of materials at extreme conditions of pressure and temperature entail remarkable technical challenges. In the laser heated diamond anvil cell (LH-DAC), very high pressure is generated between the tips of two opposing diamond anvils forced against each other; focused infrared laser beams, shone through the diamonds, allow samples that absorb the laser radiation to reach very high temperatures. When the LH-DAC is installed in a synchrotron beamline that provides extremely brilliant x-ray radiation, the structure of materials under extreme conditions can be probed in situ.
LH-DAC samples, although very small, can show highly variable grain size, phase and chemical composition. In order to obtain the high resolution structural analysis and the most comprehensive characterization of a sample, we collect diffraction data in 2D grids and combine powder, single crystal and multigrain diffraction techniques. Representative results obtained in the synthesis of a new iron oxide, Fe4
, will be shown.
Physics, Issue 80, x-ray diffraction, geochemistry, geophysics, solid-state physics, high-pressure, high-temperature, Diamond anvil cell, micro-diffraction, novel materials, iron oxides, mantle mineralogy
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Institutions: University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g.
by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5
. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6
, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1
. We have adapted this degranulation assay, originally published by Naal et al.1
, for the screening of drugs and toxicants and demonstrate its use here.
Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11
, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2
. In an effort to avoid the use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280
= 4,200 L/M/cm)12
. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation, and more broadly, their allergic potential.
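The concentration check above is a direct application of the Beer-Lambert law, c = A / (ε · l), with the cited ε280 = 4,200 L/(mol·cm). The sketch below assumes a standard 1 cm cuvette path length, which the abstract does not state.

```python
EPSILON_280 = 4200.0  # molar absorptivity of triclosan at 280 nm [L/(mol*cm)]

def triclosan_concentration(a280, path_cm=1.0):
    """Beer-Lambert law: concentration in mol/L from absorbance at 280 nm.
    path_cm is the cuvette path length (1 cm assumed by default)."""
    return a280 / (EPSILON_280 * path_cm)
```

For example, an absorbance of 0.42 in a 1 cm cuvette corresponds to 1.0 x 10^-4 M (100 µM) triclosan.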
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
Analyzing Mixing Inhomogeneity in a Microfluidic Device by Microscale Schlieren Technique
Institutions: National Taiwan University, National Taiwan University of Science and Technology.
In this paper, we introduce the use of the microscale schlieren technique to measure mixing inhomogeneity in a microfluidic device. The microscale schlieren system is constructed from a Hoffman modulation contrast microscope, which provides easy access to the rear focal plane of the objective lens: the slit plate is removed and the modulator is replaced with a knife-edge. The working principle of the microscale schlieren technique relies on detecting light deflection caused by variation of refractive index1-3
. The deflected light either escapes or is obstructed by the knife-edge to produce a bright or a dark band, respectively. If the refractive index of the mixture varies linearly with its composition, the local change in light intensity in the image plane is proportional to the concentration gradient normal to the optical axis. The micro-schlieren image gives a two-dimensional projection of the disturbed light produced by three-dimensional inhomogeneity.
To accomplish quantitative analysis, we describe a calibration procedure that mixes two fluids in a T-microchannel. We carry out a numerical simulation to obtain the concentration gradient in the T-microchannel, which correlates closely with the corresponding micro-schlieren image. By comparison, a relationship between the grayscale readouts of the micro-schlieren image and the concentration gradients present in a microfluidic device is established. Using this relationship, we are able to analyze the mixing inhomogeneity from the associated micro-schlieren image, and we demonstrate the capability of the microscale schlieren technique with measurements in a microfluidic oscillator4
. For optically transparent fluids, the microscale schlieren technique is an attractive diagnostic tool to provide instantaneous full-field information that retains the three-dimensional features of the mixing process.
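Because the abstract states that, for a linear refractive-index/composition relationship, the intensity change is proportional to the concentration gradient, the calibration reduces to fitting and inverting a linear map. The sketch below illustrates that step with synthetic calibration data; real data would come from the T-microchannel simulation and its matched schlieren image.

```python
import numpy as np

def fit_schlieren_calibration(grayscale, gradient):
    """Fit the linear map I = a * (dc/dn) + b between micro-schlieren
    grayscale readout and the concentration gradient normal to the
    optical axis, from paired calibration data."""
    a, b = np.polyfit(np.asarray(gradient, float),
                      np.asarray(grayscale, float), 1)
    return a, b

def gradient_from_image(image, a, b):
    """Invert the calibration to estimate concentration gradients
    from a micro-schlieren image."""
    return (np.asarray(image, float) - b) / a
```

Once a and b are fixed from the T-microchannel calibration, gradient_from_image converts any schlieren frame of the device into a quantitative gradient map.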
Bioengineering, Issue 100, Physics, schlieren optics, microfluidics, image analysis, flow visualization, full-field measurement, mixing