JoVE Visualize
 
Pubmed Article
Order reduction of the chemical master equation via balanced realisation.
PLoS ONE
Published: 08-14-2014
We consider a Markov process in continuous time with a finite number of discrete states. The time-dependent probabilities of being in any state of the Markov chain are governed by a set of ordinary differential equations, whose dimension might be large even for trivial systems. Here, we derive a reduced ODE set that accurately approximates the probabilities of subspaces of interest with a known error bound. Our methodology is based on model reduction by balanced truncation and can be considerably more computationally efficient than solving the chemical master equation directly. We show the applicability of our method by analysing stochastic chemical reactions. First, we obtain a reduced order model for the infinitesimal generator of a Markov chain that models a reversible, monomolecular reaction. Later, we obtain a reduced order model for a catalytic conversion of substrate to a product (a so-called Michaelis-Menten mechanism), and compare its dynamics with a rapid equilibrium approximation method. For this example, we highlight the savings on the computational load obtained by means of the reduced-order model. Furthermore, we revisit the substrate catalytic conversion by obtaining a lower-order model that approximates the probability of having predefined ranges of product molecules. In such an example, we obtain an approximation of the output of a model with 5151 states by a reduced model with 16 states. Finally, we obtain a reduced-order model of the Brusselator.
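As an illustration of the approach summarized above, the following Python sketch applies balanced truncation to the generator of a simple reversible monomolecular reaction A <-> B. It is not the authors' code: the rate constants, the output of interest (the probability of having at least N/2 product molecules), and the truncation order are hypothetical placeholders, and probability conservation is handled here simply by projecting out the stationary direction.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, null_space, svd

N, kf, kr = 50, 1.0, 0.5        # total molecule count and forward/reverse rate constants
dim = N + 1                     # chain states: 0..N molecules of product B

# Infinitesimal generator of the Markov chain, dp/dt = A p (columns sum to zero).
A = np.zeros((dim, dim))
for n in range(dim):
    if n < N:                   # n -> n+1 with propensity kf * (N - n)
        A[n + 1, n] += kf * (N - n)
        A[n, n] -= kf * (N - n)
    if n > 0:                   # n -> n-1 with propensity kr * n
        A[n - 1, n] += kr * n
        A[n, n] -= kr * n

C = np.zeros((1, dim))
C[0, N // 2:] = 1.0             # output of interest: P(at least N/2 molecules of B)
p0 = np.zeros(dim)
p0[0] = 1.0                     # initial condition: no B molecules

# Probability conservation gives A a zero eigenvalue, so reduce the dynamics of the
# deviation x = p - p_ss, which stays in the subspace whose components sum to zero.
pss = null_space(A)[:, 0]
pss /= pss.sum()                                # stationary distribution
Q = null_space(np.ones((1, dim)))               # orthonormal basis of the zero-sum subspace
Ar, Br, Cr = Q.T @ A @ Q, Q.T @ (p0 - pss), C @ Q

# Controllability/observability Gramians and Hankel singular values.
Wc = solve_continuous_lyapunov(Ar, -np.outer(Br, Br))
Wo = solve_continuous_lyapunov(Ar.T, -Cr.T @ Cr)

def psd_sqrt(M):
    """Symmetric square root of a positive-semidefinite matrix."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

Lc, Lo = psd_sqrt(Wc), psd_sqrt(Wo)
U, hsv, Vt = svd(Lo @ Lc)                       # hsv = Hankel singular values

r = 6                                           # number of retained states (placeholder)
S = np.diag(hsv[:r] ** -0.5)
T = Lc @ Vt[:r].T @ S                           # balancing/truncating transformations
Ti = S @ U[:, :r].T @ Lo
A_red, b_red, c_red = Ti @ Ar @ T, Ti @ Br, Cr @ T

print("leading Hankel singular values:", np.round(hsv[:8], 4))
print("a priori output error bound (2 * sum of discarded HSVs):", 2 * hsv[r:].sum())
```

The discarded Hankel singular values give the known error bound on the approximated output, which is how the truncation order can be chosen in practice.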
Authors: Tayyab Suratwala, Rusty Steele, Michael Feit, Rebecca Dylla-Spears, Richard Desjardin, Dan Mason, Lana Wong, Paul Geraghty, Phil Miller, Nan Shen.
Published: 12-01-2014
ABSTRACT
Convergent Polishing is a novel polishing system and method for finishing flat and spherical glass optics in which a workpiece, independent of its initial shape (i.e., surface figure), will converge to the final surface figure with excellent surface quality under a fixed, unchanging set of polishing parameters in a single polishing iteration. In contrast, conventional full-aperture polishing methods require multiple, often long, iterative cycles involving polishing, metrology, and process changes to achieve the desired surface figure. The Convergent Polishing process is based on the concept of workpiece-lap height mismatch, which results in a pressure differential that decreases with removal and causes the workpiece to converge to the shape of the lap. The successful implementation of the Convergent Polishing process results from combining a number of technologies to remove all sources of non-uniform spatial material removal (except for workpiece-lap mismatch) for surface figure convergence and to reduce the number of rogue particles in the system for low scratch densities and low roughness. The Convergent Polishing process has been demonstrated for the fabrication of both flats and spheres of various shapes, sizes, and aspect ratios on various glass materials. The practical impact is that high-quality optical components can be fabricated more rapidly, more repeatably, with less metrology, and with less labor, resulting in lower unit costs. In this study, the Convergent Polishing protocol is specifically described for fabricating 26.5 cm square fused silica flats from a fine-ground surface to a polished ~λ/2 surface figure after polishing 4 hr per surface on an 81 cm diameter polisher.
23 Related JoVE Articles!
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Steady-state, Pre-steady-state, and Single-turnover Kinetic Measurement for DNA Glycosylase Activity
Authors: Akira Sassa, William A. Beard, David D. Shock, Samuel H. Wilson.
Institutions: NIEHS, National Institutes of Health.
Human 8-oxoguanine DNA glycosylase (OGG1) excises the mutagenic oxidative DNA lesion 8-oxo-7,8-dihydroguanine (8-oxoG) from DNA. Kinetic characterization of OGG1 is undertaken to measure the rates of 8-oxoG excision and product release. When the OGG1 concentration is lower than that of the substrate DNA, time courses of product formation are biphasic; a rapid exponential phase (i.e., burst) of product formation is followed by a linear steady-state phase. The initial burst of product formation corresponds to the concentration of enzyme properly engaged on the substrate, and the burst amplitude depends on the concentration of enzyme. The first-order rate constant of the burst corresponds to the intrinsic rate of 8-oxoG excision, and the slower steady-state rate measures the rate of product release (product DNA dissociation rate constant, koff). Here, we describe steady-state, pre-steady-state, and single-turnover approaches to isolate and measure specific steps during OGG1 catalytic cycling. A fluorescently labeled lesion-containing oligonucleotide and purified OGG1 are used to facilitate precise kinetic measurements. Since low enzyme concentrations are used to make steady-state measurements, manual mixing of reagents and quenching of the reaction can be performed to ascertain the steady-state rate (koff). Additionally, extrapolation of the steady-state rate to a point on the ordinate at zero time indicates that a burst of product formation occurred during the first turnover (i.e., the y-intercept is positive). The first-order rate constant of the exponential burst phase can be measured using a rapid mixing and quenching technique that examines the amount of product formed at short time intervals (<1 sec) before the steady-state phase, and corresponds to the rate of 8-oxoG excision (i.e., chemistry). The chemical step can also be measured using a single-turnover approach where catalytic cycling is prevented by saturating substrate DNA with enzyme (E>S). These approaches can measure elementary rate constants that influence the efficiency of removal of a DNA lesion.
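The biphasic time courses described above are commonly fit to a burst equation, [P](t) = A0(1 - exp(-kobs*t)) + vss*t, where A0 estimates the properly engaged enzyme, kobs the excision rate, and vss/A0 the product-release rate koff. The sketch below is not the authors' analysis script; the product-versus-time data and initial guesses are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def burst(t, A0, kobs, vss):
    """Exponential burst of amplitude A0 and rate kobs, followed by a linear steady-state phase."""
    return A0 * (1.0 - np.exp(-kobs * t)) + vss * t

# Hypothetical product-vs-time data (s, nM) from a manual-mixing steady-state experiment.
t = np.array([5, 10, 20, 40, 60, 120, 180, 240, 300], dtype=float)
P = np.array([8.1, 12.5, 15.8, 18.9, 21.0, 27.3, 33.5, 39.2, 45.4])

(A0, kobs, vss), _ = curve_fit(burst, t, P, p0=[15.0, 0.1, 0.1])
print(f"burst amplitude A0 = {A0:.1f} nM   (~ concentration of properly engaged enzyme)")
print(f"burst rate kobs    = {kobs:.3f} s^-1 (~ intrinsic 8-oxoG excision rate)")
print(f"koff = vss / A0    = {vss / A0:.4f} s^-1 (product release)")
```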
Chemistry, Issue 78, Biochemistry, Genetics, Molecular Biology, Microbiology, Structural Biology, Chemical Biology, Eukaryota, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Enzymes and Coenzymes, Life Sciences (General), enzymology, rapid quench-flow, active site titration, steady-state, pre-steady-state, single-turnover, kinetics, base excision repair, DNA glycosylase, 8-oxo-7,8-dihydroguanine, 8-oxoG, sequencing
Experimental Measurement of Settling Velocity of Spherical Particles in Unconfined and Confined Surfactant-based Shear Thinning Viscoelastic Fluids
Authors: Sahil Malhotra, Mukul M. Sharma.
Institutions: The University of Texas at Austin.
An experimental study is performed to measure the terminal settling velocities of spherical particles in surfactant-based shear-thinning viscoelastic (VES) fluids. The measurements are made for particles settling in unbounded fluids and in fluids between parallel walls. VES fluids over a wide range of rheological properties are prepared and rheologically characterized. The rheological characterization involves steady shear-viscosity and dynamic oscillatory-shear measurements to quantify the viscous and elastic properties, respectively. The settling velocities under unbounded conditions are measured in beakers having diameters at least 25x the diameter of the particles. For measuring settling velocities between parallel walls, two experimental cells with different wall spacings are constructed. Spherical particles of varying sizes are gently dropped into the fluids and allowed to settle. The process is recorded with a high-resolution video camera and the trajectory of the particle is recorded using image analysis software. Terminal settling velocities are calculated from the data. The impact of elasticity on settling velocity in unbounded fluids is quantified by comparing the experimental settling velocity to the settling velocity calculated from the inelastic drag predictions of Renaud et al.1 Results show that elasticity of fluids can increase or decrease the settling velocity. The magnitude of the reduction/increase is a function of the rheological properties of the fluids and the properties of the particles. Confining walls are observed to cause a retardation effect on settling, and the retardation is measured in terms of wall factors.
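A minimal sketch of the velocity-extraction step, assuming the particle trajectory has already been obtained from the video by image analysis: the terminal velocity is the slope of a linear fit to position versus time, and confinement is expressed as a wall factor. All numbers below are hypothetical placeholders; the comparison with the inelastic drag prediction of Renaud et al. is not reproduced here.

```python
import numpy as np

# Hypothetical particle centroid positions (mm) extracted from video frames at 30 fps.
t = np.arange(0, 3.0, 1.0 / 30.0)                                       # s
z_unbounded = 0.2 + 4.1 * t + 0.02 * np.random.randn(t.size)            # beaker (unbounded)
z_confined = 0.2 + 2.9 * t + 0.02 * np.random.randn(t.size)             # between parallel walls

v_unbounded = np.polyfit(t, z_unbounded, 1)[0]    # slope of the linear fit = terminal velocity
v_confined = np.polyfit(t, z_confined, 1)[0]

wall_factor = v_confined / v_unbounded            # retardation caused by the confining walls
print(f"V_unbounded = {v_unbounded:.2f} mm/s, V_confined = {v_confined:.2f} mm/s")
print(f"wall factor = {wall_factor:.2f}")
```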
Physics, Issue 83, chemical engineering, settling velocity, Reynolds number, shear thinning, wall retardation
Microwave-assisted Functionalization of Poly(ethylene glycol) and On-resin Peptides for Use in Chain Polymerizations and Hydrogel Formation
Authors: Amy H. Van Hove, Brandon D. Wilson, Danielle S. W. Benoit.
Institutions: University of Rochester, University of Rochester, University of Rochester Medical Center.
One of the main benefits to using poly(ethylene glycol) (PEG) macromers in hydrogel formation is synthetic versatility. The ability to draw from a large variety of PEG molecular weights and configurations (arm number, arm length, and branching pattern) affords researchers tight control over resulting hydrogel structures and properties, including Young's modulus and mesh size. This video will illustrate a rapid, efficient, solvent-free, microwave-assisted method to methacrylate PEG precursors into poly(ethylene glycol) dimethacrylate (PEGDM). This synthetic method provides much-needed starting materials for applications in drug delivery and regenerative medicine. The demonstrated method is superior to traditional methacrylation methods as it is significantly faster and simpler, as well as more economical and environmentally friendly, using smaller amounts of reagents and solvents. We will also demonstrate an adaptation of this technique for on-resin methacrylamide functionalization of peptides. This on-resin method allows the N-terminus of peptides to be functionalized with methacrylamide groups prior to deprotection and cleavage from resin. This allows for selective addition of methacrylamide groups to the N-termini of the peptides while amino acids with reactive side groups (e.g. primary amine of lysine, primary alcohol of serine, secondary alcohols of threonine, and phenol of tyrosine) remain protected, preventing functionalization at multiple sites. This article will detail common analytical methods (proton Nuclear Magnetic Resonance spectroscopy (1H-NMR) and Matrix Assisted Laser Desorption Ionization Time of Flight mass spectrometry (MALDI-ToF)) to assess the efficiency of the functionalizations. Common pitfalls and suggested troubleshooting methods will be addressed, as will modifications of the technique which can be used to further tune macromer functionality and resulting hydrogel physical and chemical properties. Use of synthesized products for the formation of hydrogels for drug delivery and cell-material interaction studies will be demonstrated, with particular attention paid to modifying hydrogel composition to affect mesh size, controlling hydrogel stiffness and drug release.
Chemistry, Issue 80, Poly(ethylene glycol), peptides, polymerization, polymers, methacrylation, peptide functionalization, 1H-NMR, MALDI-ToF, hydrogels, macromer synthesis
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Mizoroki-Heck Cross-coupling Reactions Catalyzed by Dichloro{bis[1,1',1''-(phosphinetriyl)tripiperidine]}palladium Under Mild Reaction Conditions
Authors: Miriam Oberholzer, Christian M. Frech.
Institutions: University of Zürich, Zürich University of Applied Sciences.
Dichloro-bis(aminophosphine) complexes of palladium with the general formula [(P{(NC5H10)3-n(C6H11)n})2Pd(Cl)2] (where n = 0-2) belong to a new family of easily accessible, very cheap, and air-stable, yet highly active and universally applicable C-C cross-coupling catalysts with excellent functional group tolerance. Dichloro{bis[1,1',1''-(phosphinetriyl)tripiperidine]}palladium [(P(NC5H10)3)2Pd(Cl)2] (1), the complex in this series least stable towards protons (e.g., in the form of water), allows facile nanoparticle formation and hence proved to be the most active Heck catalyst within this series at 100 °C; it is a very rare example of an effective and versatile catalyst system that operates efficiently under mild reaction conditions. Rapid and complete catalyst degradation under work-up conditions into phosphonates, piperidinium salts, and other palladium-containing decomposition products assures easy separation of the coupling products from catalyst and ligands. The facile, cheap, and rapid syntheses of 1,1',1''-(phosphinetriyl)tripiperidine and 1, the simple and convenient handling, and the excellent catalytic performance in the Heck reaction at 100 °C make 1 one of the most attractive and greenest Heck catalysts available. We provide here the visualized protocols for the ligand and catalyst syntheses as well as the reaction protocol for Heck reactions performed on a 10 mmol scale at 100 °C, and show that this catalyst is suitable for use in organic synthesis.
Chemistry, Issue 85, Heck reaction, C-C cross-coupling, Catalysis, Catalysts, green chemistry, Palladium, Aminophosphines, Palladium nanoparticles, Reaction mechanism, water-induced ligand degradation
Hot Biological Catalysis: Isothermal Titration Calorimetry to Characterize Enzymatic Reactions
Authors: Luca Mazzei, Stefano Ciurli, Barbara Zambelli.
Institutions: University of Bologna.
Isothermal titration calorimetry (ITC) is a well-described technique that measures the heat released or absorbed during a chemical reaction, using it as an intrinsic probe to characterize virtually any chemical process. Nowadays, this technique is extensively applied to determine the thermodynamic parameters of biomolecular binding equilibria. In addition, ITC has been demonstrated to be capable of directly measuring the kinetic and thermodynamic parameters (kcat, KM, ΔH) of enzymatic reactions, even though this application is still underexploited. As heat changes occur spontaneously during enzymatic catalysis, ITC does not require any modification or labeling of the system under analysis and can be performed in solution. Moreover, the method needs only a small amount of material. These properties make ITC an invaluable, powerful and unique tool to study enzyme kinetics in several applications, such as drug discovery. In this work, an experimental ITC-based method to quantify the kinetics and thermodynamics of enzymatic reactions is thoroughly described. This method is applied to determine the kcat and KM of the enzymatic hydrolysis of urea by Canavalia ensiformis (jack bean) urease. Calculation of the intrinsic molar enthalpy (ΔHint) of the reaction is performed. The values thus obtained are consistent with previous data reported in the literature, demonstrating the reliability of the methodology.
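The conversion from measured thermal power to reaction rate, v = (dQ/dt)/(ΔH·V_cell), and the subsequent Michaelis-Menten fit can be sketched as follows. This is not the authors' procedure; the cell volume, apparent enthalpy, enzyme concentration, and power readings are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

V_cell = 1.4e-3       # L, calorimeter cell volume (placeholder)
dH_app = -1.05e4      # cal/mol, apparent molar enthalpy from a control experiment (placeholder)
E_conc = 5.0e-9       # M, urease active-site concentration in the cell (placeholder)

# Hypothetical substrate concentrations (M) and measured thermal power (ucal/s).
S = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3
power = np.array([-6.2, -13.5, -22.0, -31.5, -38.0, -42.5])

rate = power * 1e-6 / (dH_app * V_cell)           # v = (dQ/dt) / (dH * V), in M/s

def michaelis_menten(S, vmax, Km):
    return vmax * S / (Km + S)

(vmax, Km), _ = curve_fit(michaelis_menten, S, rate, p0=[rate.max(), 1e-3])
kcat = vmax / E_conc
print(f"Km = {Km * 1e3:.2f} mM, kcat = {kcat:.1f} s^-1, kcat/Km = {kcat / Km:.2e} M^-1 s^-1")
```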
Chemistry, Issue 86, Isothermal titration calorimetry, enzymatic catalysis, kinetics, thermodynamics, enthalpy, Michaelis constant, catalytic rate constant, urease
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate to the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
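One simple way to estimate an apparent dissociation constant from DSF data is to fit the melting temperature as a hyperbolic function of ligand concentration. The sketch below uses this simplified single-site saturation model with hypothetical Tm values; it is not one of the fitting models described in the article, which treats simple, cooperative, and independent-site binding more rigorously.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ligand concentrations (M) and melting temperatures (°C) from a DSF plate.
L = np.array([0, 7.8, 15.6, 31.3, 62.5, 125, 250, 500, 1000]) * 1e-6
Tm = np.array([48.1, 48.9, 49.6, 50.8, 52.1, 53.4, 54.3, 54.9, 55.2])

def saturation(L, Tm0, dTmax, Kd_app):
    """Single-site saturation: Tm rises hyperbolically from Tm0 towards Tm0 + dTmax."""
    return Tm0 + dTmax * L / (Kd_app + L)

(Tm0, dTmax, Kd_app), _ = curve_fit(saturation, L, Tm, p0=[48.0, 7.0, 1e-4])
print(f"apo Tm = {Tm0:.1f} °C, max shift = {dTmax:.1f} °C, apparent Kd = {Kd_app * 1e6:.0f} uM")
```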
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
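The voxelwise FA metric compared in these analyses is computed from the eigenvalues of the diffusion tensor; a minimal sketch for a single hypothetical voxel is shown below (real analyses apply this to whole volumes before normalization and group statistics).

```python
import numpy as np

def fractional_anisotropy(D):
    """FA of a 3x3 diffusion tensor D: sqrt(3/2) * |lambda - MD| / |lambda|."""
    lam = np.linalg.eigvalsh(D)                 # tensor eigenvalues
    md = lam.mean()                             # mean diffusivity
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * np.sqrt(((lam - md) ** 2).sum()) / den if den > 0 else 0.0

# Hypothetical tensor for an anisotropic white-matter voxel (mm^2/s).
D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])
print(f"FA = {fractional_anisotropy(D):.2f}")
```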
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
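A minimal sketch of the first processing step, assuming a generic Gabor filter bank rather than the authors' exact filter parameters: the region is filtered at a set of orientations and the per-pixel dominant orientation is retained as input to the subsequent phase-portrait analysis. The test image here is a random placeholder for a mammographic region.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Real (cosine-phase) Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

image = np.random.rand(256, 256)                       # stand-in for a mammographic region
angles = np.linspace(0, np.pi, 18, endpoint=False)     # 18 orientations over 180 degrees

responses = np.stack([fftconvolve(image, gabor_kernel(a), mode="same") for a in angles])
orientation_map = angles[responses.argmax(axis=0)]     # dominant orientation per pixel
magnitude_map = responses.max(axis=0)                  # strength of the oriented response
print(orientation_map.shape, magnitude_map.shape)
```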
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Measuring the Kinetics of mRNA Transcription in Single Living Cells
Authors: Yehuda Brody, Yaron Shav-Tal.
Institutions: Bar-Ilan University.
The transcriptional activity of RNA polymerase II (Pol II) is a dynamic process and therefore measuring the kinetics of the transcriptional process in vivo is of importance. Pol II kinetics have been measured using biochemical or molecular methods.1-3 In recent years, with the development of new visualization methods, it has become possible to follow transcription as it occurs in real time in single living cells.4 Herein we describe how to perform analysis of Pol II elongation kinetics on a specific gene in living cells.5, 6 Using a cell line in which a specific gene locus (DNA), its mRNA product, and the final protein product can be fluorescently labeled and visualized in vivo, it is possible to detect the actual transcription of mRNAs on the gene of interest.7, 8 The mRNA is fluorescently tagged using the MS2 system for tagging mRNAs in vivo, where the 3'UTR of the mRNA transcripts contain 24 MS2 stem-loop repeats, which provide highly specific binding sites for the YFP-MS2 coat protein that labels the mRNA as it is transcribed.9 To monitor the kinetics of transcription we use the Fluorescence Recovery After Photobleaching (FRAP) method. By photobleaching the YFP-MS2-tagged nascent transcripts at the site of transcription and then following the recovery of this signal over time, we obtain the synthesis rate of the newly made mRNAs.5 In other words, YFP-MS2 fluorescence recovery reflects the generation of new MS2 stem-loops in the nascent transcripts and their binding by fluorescent free YFP-MS2 molecules entering from the surrounding nucleoplasm. The FRAP recovery curves are then analyzed using mathematical mechanistic models formalized by a series of differential equations, in order to retrieve the kinetic time parameters of transcription.
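A minimal sketch of the FRAP quantification, assuming a simple single-exponential recovery rather than the mechanistic differential-equation model fitted in the article: the YFP-MS2 signal at the transcription site is normalized and fit to extract a characteristic recovery time. The trace below is simulated.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated, normalized YFP-MS2 intensity at the transcription site after the bleach.
t = np.arange(0, 300, 10.0)                               # s
signal = 1.0 - 0.85 * np.exp(-t / 75.0) + 0.02 * np.random.randn(t.size)

def recovery(t, plateau, amplitude, tau):
    return plateau - amplitude * np.exp(-t / tau)

(plateau, amplitude, tau), _ = curve_fit(recovery, t, signal, p0=[1.0, 0.8, 60.0])
print(f"recovery plateau = {plateau:.2f} (fraction of the steady-state signal regained)")
print(f"recovery time tau = {tau:.0f} s (characteristic time of nascent-transcript renewal)")
```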
Cell Biology, Issue 54, mRNA transcription, nucleus, live-cell imaging, cellular dynamics, FRAP
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro
Authors: Aya D. Pusic, Yelena Y. Grinberg, Heidi M. Mitchell, Richard P. Kraig.
Institutions: The University of Chicago Medical Center, The University of Chicago Medical Center.
Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examine systems biology aspects of migraine since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations where environmental conditions can be controlled are necessary. Here, it is important to recognize limitations of acute slices and distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling since preparing the slices alone triggers: pro-inflammatory changes that last days, epileptiform behavior due to high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures since the cultures closely parallel their in vivo counterpart with mature trisynaptic function; show quiescent astrocytes, microglia, and cytokine levels; and SD is easily induced in an unanesthetized preparation. Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the sole means to-date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, use of cDNA preamplification for detection of ultra-low level targets such as interferon-gamma using whole, regional, or specific cell enhanced (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed with multiplexed phosphoprotein related targets with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic and modulate SD immune signaling.
Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma
Direct Detection of the Acetate-forming Activity of the Enzyme Acetate Kinase
Authors: Matthew L. Fowler, Cheryl J. Ingram-Smith, Kerry S. Smith.
Institutions: Clemson University.
Acetate kinase, a member of the acetate and sugar kinase-Hsp70-actin (ASKHA) enzyme superfamily1-5, is responsible for the reversible phosphorylation of acetate to acetyl phosphate utilizing ATP as a substrate. Acetate kinases are ubiquitous in the Bacteria, found in one genus of Archaea, and are also present in microbes of the Eukarya6. The most well characterized acetate kinase is that from the methane-producing archaeon Methanosarcina thermophila7-14. An acetate kinase which can only utilize PPi but not ATP in the acetyl phosphate-forming direction has been isolated from Entamoeba histolytica, the causative agent of amoebic dysentery, and has thus far only been found in this genus15,16. In the direction of acetyl phosphate formation, acetate kinase activity is typically measured using the hydroxamate assay, first described by Lipmann17-20, a coupled assay in which conversion of ATP to ADP is coupled to oxidation of NADH to NAD+ by the enzymes pyruvate kinase and lactate dehydrogenase21,22, or an assay measuring release of inorganic phosphate after reaction of the acetyl phosphate product with hydroxylamine23. Activity in the opposite, acetate-forming direction is measured by coupling ATP formation from ADP to the reduction of NADP+ to NADPH by the enzymes hexokinase and glucose 6-phosphate dehydrogenase24. Here we describe a method for the detection of acetate kinase activity in the direction of acetate formation that does not require coupling enzymes, but is instead based on direct determination of acetyl phosphate consumption. After the enzymatic reaction, remaining acetyl phosphate is converted to a ferric hydroxamate complex that can be measured spectrophotometrically, as for the hydroxamate assay. Thus, unlike the standard coupled assay for this direction that is dependent on the production of ATP from ADP, this direct assay can be used for acetate kinases that produce ATP or PPi.
Molecular Biology, Issue 58, Acetate kinase, acetate, acetyl phosphate, pyrophosphate, PPi, ATP
Monitoring the Reductive and Oxidative Half-Reactions of a Flavin-Dependent Monooxygenase using Stopped-Flow Spectrophotometry
Authors: Elvira Romero, Reeder Robinson, Pablo Sobrado.
Institutions: Virginia Polytechnic Institute and State University.
Aspergillus fumigatus siderophore A (SidA) is an FAD-containing monooxygenase that catalyzes the hydroxylation of ornithine in the biosynthesis of hydroxamate siderophores that are essential for virulence (e.g. ferricrocin or N',N",N'''-triacetylfusarinine C)1. The reaction catalyzed by SidA can be divided into reductive and oxidative half-reactions (Scheme 1). In the reductive half-reaction, the oxidized FAD bound to Af SidA is reduced by NADPH2,3. In the oxidative half-reaction, the reduced cofactor reacts with molecular oxygen to form a C4a-hydroperoxyflavin intermediate, which transfers an oxygen atom to ornithine. Here, we describe a procedure to measure the rates and detect the different spectral forms of SidA using a stopped-flow instrument installed in an anaerobic glove box. In the stopped-flow instrument, small volumes of reactants are rapidly mixed, and after the flow is stopped by the stop syringe (Figure 1), the spectral changes of the solution placed in the observation cell are recorded over time. In the first part of the experiment, we show how we can use the stopped-flow instrument in single-mixing mode, where the anaerobic reduction of the flavin in Af SidA by NADPH is directly measured. We then use double-mixing settings where Af SidA is first anaerobically reduced by NADPH for a designated period of time in an aging loop, and then reacted with molecular oxygen in the observation cell (Figure 1). In order to perform this experiment, anaerobic buffers are necessary because when only the reductive half-reaction is monitored, any oxygen in the solutions will react with the reduced flavin cofactor and form a C4a-hydroperoxyflavin intermediate that will ultimately decay back into the oxidized flavin. This would not allow the user to accurately measure rates of reduction since there would be complete turnover of the enzyme. When the oxidative half-reaction is being studied, the enzyme must be reduced in the absence of oxygen so that just the steps between reduction and oxidation are observed. One of the buffers used in this experiment is oxygen saturated so that we can study the oxidative half-reaction at higher concentrations of oxygen. These are often the procedures carried out when studying either the reductive or oxidative half-reactions of flavin-containing monooxygenases. The time scale of the pre-steady-state experiments performed with the stopped-flow is milliseconds to seconds, which allows the determination of intrinsic rate constants and the detection and identification of intermediates in the reaction4. The procedures described here can be applied to other flavin-dependent monooxygenases.5,6
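A minimal sketch (not the instrument software) of the typical downstream analysis: an observed rate constant kobs is extracted from a single stopped-flow absorbance trace by an exponential fit, and a series of kobs values at different NADPH concentrations is fit to a hyperbola to estimate the limiting reduction rate and apparent Kd. All traces and concentrations below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, amp, kobs, offset):
    return amp * np.exp(-kobs * t) + offset

# Hypothetical A450 trace after anaerobically mixing oxidized SidA with NADPH.
t = np.linspace(0, 20, 200)                                           # s
A450 = single_exp(t, 0.12, 0.45, 0.03) + 0.001 * np.random.randn(t.size)
(amp, kobs, offset), _ = curve_fit(single_exp, t, A450, p0=[0.1, 0.5, 0.0])

# Hypothetical kobs values measured at several NADPH concentrations.
nadph = np.array([25, 50, 100, 250, 500, 1000]) * 1e-6                # M
kobs_series = np.array([0.11, 0.19, 0.30, 0.44, 0.52, 0.57])          # s^-1

def hyperbola(S, kred, Kd):
    return kred * S / (Kd + S)

(kred, Kd), _ = curve_fit(hyperbola, nadph, kobs_series, p0=[0.6, 1e-4])
print(f"kobs from the single trace = {kobs:.2f} s^-1")
print(f"limiting kred = {kred:.2f} s^-1, apparent Kd(NADPH) = {Kd * 1e6:.0f} uM")
```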
Bioengineering, Issue 61, Stopped-flow, kinetic mechanism, SidA, C4a-hydroperoxyflavin, monooxygenase, Aspergillus fumigatus
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to: ● Set up reactions and thermal cycling conditions for a conventional PCR experiment ● Understand the function of various reaction components and their overall effect on a PCR experiment ● Design and optimize a PCR experiment for any DNA template ● Troubleshoot failed PCR experiments
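For a quick primer melting-temperature estimate during primer design, two common rules of thumb can be sketched as follows; these are simpler than the nearest-neighbor calculations used by most primer-design software, and the primer sequence shown is only a hypothetical placeholder.

```python
def tm_wallace(primer):
    """Wallace rule, Tm = 2(A+T) + 4(G+C); a rough estimate for primers of ~14-20 nt."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_gc(primer):
    """GC-content estimate, Tm = 64.9 + 41*(G+C-16.4)/N, for longer primers."""
    p = primer.upper()
    return 64.9 + 41.0 * (p.count("G") + p.count("C") - 16.4) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"   # hypothetical example primer
print(f"{primer}: Wallace Tm = {tm_wallace(primer)} C, GC-based Tm = {tm_gc(primer):.1f} C")
```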
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
A Toolkit to Enable Hydrocarbon Conversion in Aqueous Environments
Authors: Eva K. Brinkman, Kira Schipper, Nadine Bongaerts, Mathias J. Voges, Alessandro Abate, S. Aljoscha Wahl.
Institutions: Delft University of Technology, Delft University of Technology.
This work puts forward a toolkit that enables the conversion of alkanes by Escherichia coli and presents a proof of principle of its applicability. The toolkit consists of multiple standard interchangeable parts (BioBricks)9 addressing the conversion of alkanes, regulation of gene expression and survival in toxic hydrocarbon-rich environments. A three-step pathway for alkane degradation was implemented in E. coli to enable the conversion of medium- and long-chain alkanes to their respective alkanols, alkanals and ultimately alkanoic-acids. The latter were metabolized via the native β-oxidation pathway. To facilitate the oxidation of medium-chain alkanes (C5-C13) and cycloalkanes (C5-C8), four genes (alkB2, rubA3, rubA4 and rubB) of the alkane hydroxylase system from Gordonia sp. TF68,21 were transformed into E. coli. For the conversion of long-chain alkanes (C15-C36), the ladA gene from Geobacillus thermodenitrificans was implemented. For the required further steps of the degradation process, ADH and ALDH (originating from G. thermodenitrificans) were introduced10,11. The activity was measured by resting cell assays. For each oxidative step, enzyme activity was observed. To optimize the process efficiency, the expression was only induced under low glucose conditions: a substrate-regulated promoter, pCaiF, was used. pCaiF is present in E. coli K12 and regulates the expression of the genes involved in the degradation of non-glucose carbon sources. The last part of the toolkit - targeting survival - was implemented using solvent tolerance genes, PhPFDα and β, both from Pyrococcus horikoshii OT3. Organic solvents can induce cell stress and decreased survivability by negatively affecting protein folding. As chaperones, PhPFDα and β improve the protein folding process, e.g. under the presence of alkanes. The expression of these genes led to an improved hydrocarbon tolerance, shown by an increased growth rate (up to 50%) in the presence of 10% n-hexane in the culture medium. Summarizing, the results indicate that the toolkit enables E. coli to convert and tolerate hydrocarbons in aqueous environments. As such, it represents an initial step towards a sustainable solution for oil remediation using a synthetic biology approach.
Bioengineering, Issue 68, Microbiology, Biochemistry, Chemistry, Chemical Engineering, Oil remediation, alkane metabolism, alkane hydroxylase system, resting cell assay, prefoldin, Escherichia coli, synthetic biology, homologous interaction mapping, mathematical model, BioBrick, iGEM
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed how our Bayesian Change Point (BCP) algorithm had a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
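To illustrate the change-point idea on a read-density profile, the sketch below performs a generic least-squares binary segmentation on simulated coverage. It is explicitly not the Bayesian Change Point (BCP) algorithm described here, which uses an infinite-state Bayesian HMM with explicit formulas; the simulated profile and stopping threshold are placeholders.

```python
import numpy as np

def best_split(y):
    """Split index minimizing within-segment squared error, plus the error reduction."""
    total = ((y - y.mean()) ** 2).sum()
    best_k, best_cost = None, total
    for k in range(2, len(y) - 2):
        cost = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, total - best_cost

def segment(y, offset=0, min_gain=50.0, breakpoints=None):
    """Recursive binary segmentation of a 1-D read-density profile."""
    if breakpoints is None:
        breakpoints = []
    k, gain = best_split(y)
    if k is not None and gain > min_gain:
        segment(y[:k], offset, min_gain, breakpoints)
        breakpoints.append(offset + k)
        segment(y[k:], offset + k, min_gain, breakpoints)
    return breakpoints

# Simulated binned read density: flat background with a single enriched island.
rng = np.random.default_rng(0)
profile = np.concatenate([rng.poisson(2, 300), rng.poisson(12, 120), rng.poisson(2, 300)]).astype(float)
print("estimated change points (bin indices):", segment(profile))
```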
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Preparation and Use of Samarium Diiodide (SmI2) in Organic Synthesis: The Mechanistic Role of HMPA and Ni(II) Salts in the Samarium Barbier Reaction
Authors: Dhandapani V. Sadasivam, Kimberly A. Choquette, Robert A. Flowers II.
Institutions: Lehigh University .
Although initially considered an esoteric reagent, SmI2 has become a common tool for synthetic organic chemists. SmI2 is generated through the addition of molecular iodine to samarium metal in THF.1,2-3 It is a mild and selective single electron reductant and its versatility is a result of its ability to initiate a wide range of reductions including C-C bond-forming and cascade or sequential reactions. SmI2 can reduce a variety of functional groups including sulfoxides and sulfones, phosphine oxides, epoxides, alkyl and aryl halides, carbonyls, and conjugated double bonds.2-12 One of the fascinating features of SmI2-mediated reactions is the ability to manipulate the outcome of reactions through the selective use of cosolvents or additives. In most instances, additives are essential in controlling the rate of reduction and the chemo- or stereoselectivity of reactions.13-14 Additives commonly utilized to fine-tune the reactivity of SmI2 can be classified into three major groups: (1) Lewis bases (HMPA, other electron-donor ligands, chelating ethers, etc.), (2) proton sources (alcohols, water, etc.), and (3) inorganic additives (Ni(acac)2, FeCl3, etc.).3 Understanding the mechanism of SmI2 reactions and the role of the additives enables utilization of the full potential of the reagent in organic synthesis. The Sm-Barbier reaction is chosen to illustrate the synthetic importance and mechanistic role of two common additives, HMPA and Ni(II), in this reaction. The Sm-Barbier reaction is similar to the traditional Grignard reaction, with the only difference being that the alkyl halide, carbonyl, and Sm reductant are mixed simultaneously in one pot.1,15 Examples of Sm-mediated Barbier reactions with a range of coupling partners have been reported,1,3,7,10,12 and have been utilized in key steps of the synthesis of large natural products.16,17 Previous studies on the effect of additives on SmI2 reactions have shown that HMPA enhances the reduction potential of SmI2 by coordinating to the samarium metal center, producing a more powerful,13-14,18 sterically encumbered reductant19-21 and in some cases playing an integral role in post electron-transfer steps facilitating subsequent bond-forming events.22 In the Sm-Barbier reaction, HMPA has been shown to additionally activate the alkyl halide by forming a complex in a pre-equilibrium step.23 Ni(II) salts are a catalytic additive used frequently in Sm-mediated transformations.24-27 Though critical for success, the mechanistic role of Ni(II) was not known in these reactions. Recently it has been shown that SmI2 reduces Ni(II) to Ni(0), and the reaction is then carried out through organometallic Ni(0) chemistry.28 These mechanistic studies highlight that although the same Barbier product is obtained, the use of different additives in the SmI2 reaction drastically alters the mechanistic pathway of the reaction. The protocol for running these SmI2-initiated reactions is described.
Chemistry, Issue 72, Organic Chemistry, Chemical Engineering, Biochemistry, Samarium diiodide, Sml2, Samarium-Barbier Reaction, HMPA, hexamethylphosphoramide, Ni(II), Nickel(II) acetylacetonate, nickel, samarium, iodine, additives, synthesis, catalyst, reaction, synthetic organic chemistry
Quantitative FRET (Förster Resonance Energy Transfer) Analysis for SENP1 Protease Kinetics Determination
Authors: Yan Liu, Jiayu Liao.
Institutions: University of California, Riverside .
Reversible posttranslational modifications of proteins with ubiquitin or ubiquitin-like proteins (Ubls) are widely used to dynamically regulate protein activity and have diverse roles in many biological processes. For example, SUMO covalently modifies a large number of proteins with important roles in many cellular processes, including cell-cycle regulation, cell survival and death, DNA damage response, and stress response 1-5. SENP, as a SUMO-specific protease, functions as an endopeptidase in the maturation of SUMO precursors or as an isopeptidase to remove SUMO from its target proteins and refresh the SUMOylation cycle 1,3,6,7. The catalytic efficiency or specificity of an enzyme is best characterized by the ratio of the kinetic constants, kcat/KM. In several studies, the kinetic parameters of SUMO-SENP pairs have been determined by various methods, including polyacrylamide gel-based western blot, radioactively labeled substrate, and fluorescent compound- or protein-labeled substrate 8-13. However, the polyacrylamide gel-based techniques, which use the "native" proteins, are laborious and technically demanding and do not readily lend themselves to detailed quantitative analysis. The kcat/KM values obtained from studies using tetrapeptides or proteins with an ACC (7-amino-4-carbamoylmethylcoumarin) or AMC (7-amino-4-methylcoumarin) fluorophore were either up to two orders of magnitude lower than those for the natural substrates or could not clearly differentiate the iso- and endopeptidase activities of SENPs. Recently, FRET-based protease assays were used to study the deubiquitinating enzymes (DUBs) or SENPs with the FRET pair of cyan fluorescent protein (CFP) and yellow fluorescent protein (YFP) 9,10,14,15. The ratio of acceptor emission to donor emission was used as the quantitative parameter for monitoring the FRET signal for protease activity determination. However, this method ignored signal cross-contamination at the acceptor and donor emission wavelengths by acceptor and donor self-fluorescence and thus was not accurate. We developed a novel, highly sensitive and quantitative FRET-based protease assay for determining the kinetic parameters of pre-SUMO1 maturation by SENP1. An engineered FRET pair, CyPet and YPet, with significantly improved FRET efficiency and fluorescence quantum yield, was used to generate the CyPet-(pre-SUMO1)-YPet substrate 16. We differentiated and quantified the absolute fluorescence signals contributed by the donor, the acceptor, and FRET at the acceptor and donor emission wavelengths. The value of kcat/KM was obtained as (3.2 ± 0.55) x 10^7 M^-1 s^-1 for SENP1 toward pre-SUMO1, which is in agreement with general enzymatic kinetic parameters. Therefore, this methodology is valid and can be used as a general approach to characterize other proteases as well.
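A minimal sketch of the cross-talk correction that distinguishes this kind of quantitative assay from simple ratio methods: the emission measured in the acceptor channel is corrected for donor bleed-through and direct acceptor excitation, leaving the absolute FRET contribution, whose decay reports substrate cleavage. The correction factors and intensities below are hypothetical placeholders, not values from the article.

```python
import numpy as np

alpha = 0.32   # donor (CyPet) bleed-through ratio into the acceptor channel (placeholder)
beta = 0.08    # direct-excitation ratio of the acceptor (YPet) (placeholder)

def fret_signal(I_da, I_dd, I_aa):
    """Absolute FRET contribution at the acceptor emission wavelength."""
    return I_da - alpha * I_dd - beta * I_aa

# Hypothetical intensities during digestion of CyPet-(pre-SUMO1)-YPet by SENP1.
t = np.array([0, 2, 4, 8, 16, 32])                                  # min
I_da = np.array([980, 870, 790, 680, 590, 540], dtype=float)        # donor exc., acceptor em.
I_dd = np.array([400, 470, 520, 590, 650, 680], dtype=float)        # donor exc., donor em.
I_aa = np.array([900, 900, 900, 900, 900, 900], dtype=float)        # acceptor exc., acceptor em.

Em_fret = fret_signal(I_da, I_dd, I_aa)
fraction_cleaved = 1.0 - Em_fret / Em_fret[0]
print("FRET emission:", np.round(Em_fret, 1))
print("fraction of substrate cleaved:", np.round(fraction_cleaved, 2))
```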
Bioengineering, Issue 72, Biochemistry, Molecular Biology, Proteins, Quantitative FRET analysis, QFRET, enzyme kinetics analysis, SENP, SUMO, plasmid, protein expression, protein purification, protease assay, quantitative analysis
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Authors: Katharina L. Dürr, Neslihan N. Tavraz, Susan Spiller, Thomas Friedrich.
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ or Li+ transport by Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as the expression system, we determine the amount of Rb+ (Li+) transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes or during application of ATPase-specific inhibitors is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed, for example, quantitative determination of the 3Na+/2K+ transport stoichiometry of the Na+,K+-ATPase, and enabled, for the first time, investigation of the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes.
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Born Normalization for Fluorescence Optical Projection Tomography for Whole Heart Imaging
Authors: Claudio Vinegoni, Daniel Razansky, Jose-Luiz Figueiredo, Lyuba Fexon, Misha Pivovarov, Matthias Nahrendorf, Vasilis Ntziachristos, Ralph Weissleder.
Institutions: Harvard Medical School, MGH - Massachusetts General Hospital, Technical University of Munich and Helmholtz Center Munich.
Optical projection tomography is a three-dimensional imaging technique that has recently been introduced as an imaging tool, primarily in developmental biology and gene expression studies. The technique renders biological samples optically transparent by first dehydrating them in graded ethanol solutions and then placing them in a mixture of benzyl alcohol and benzyl benzoate in a 2:1 ratio (BABB or Murray's Clear solution) to clear. After the clearing process, the scattering contribution in the sample is greatly reduced and can be made almost negligible, while the absorption contribution cannot be eliminated completely. When trying to reconstruct the fluorescence distribution within the sample under investigation, this contribution affects the reconstructions and leads, inevitably, to image artifacts and quantification errors. While absorption could be reduced further by keeping the sample in the clearing media for weeks or months, this would lead to progressive loss of fluorescence and to an unrealistically long sample processing time. This is true when reconstructing both exogenous contrast agents (molecular contrast agents) and endogenous contrast (e.g. reconstructions of genetically expressed fluorescent proteins).
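A minimal sketch of the Born-normalization step itself, assuming fluorescence and excitation (intrinsic) projections have been acquired at each rotation angle: each fluorescence projection is divided pixel-by-pixel by the matching excitation projection before reconstruction, so that residual absorption affects numerator and denominator alike. The arrays below are random placeholders for acquired data.

```python
import numpy as np

def born_normalize(fluorescence, excitation, epsilon=1e-6):
    """Pixel-wise ratio of fluorescence to excitation projections, guarding against division by ~0."""
    return fluorescence / np.clip(excitation, epsilon, None)

# Random placeholders for the projection stacks acquired at each rotation angle.
n_angles, ny, nx = 180, 256, 256
fluo = np.random.rand(n_angles, ny, nx).astype(np.float32)     # fluorescence projections
exci = np.random.rand(n_angles, ny, nx).astype(np.float32)     # excitation (intrinsic) projections

normalized = born_normalize(fluo, exci)   # input to the standard tomographic reconstruction step
print(normalized.shape)
```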
Bioengineering, Issue 28, optical imaging, fluorescence imaging, optical projection tomography, born normalization, molecular imaging, heart imaging

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.