The persistence of Mode 1 technology in the Korean Late Paleolithic.
PUBLISHED: 01-01-2013
Ssangjungri (SJ), an open-air site with several Paleolithic horizons, was recently discovered in South Korea. Most of the identified artifacts are simple core and flake tools that indicate an expedient knapping strategy. Bifacially worked core tools, which might be considered non-classic bifaces, also have been found. The prolific horizons at the site were dated by accelerator mass spectrometry (AMS) to about 30 kya. Another newly discovered Paleolithic open-air site, Jeungsan (JS), shows a homogeneous lithic pattern during this period. The dominant artifact types and usage of raw materials are similar in character to those from SJ, although JS yielded a larger number of simple core and flake tools with non-classic bifaces. Chronometric analysis by AMS and optically stimulated luminescence (OSL) indicates that the prime stratigraphic levels at JS also date to approximately 30 kya, and the numerous conjoining pieces indicate that the layers were not seriously affected by post-depositional processes. Thus, it can be confirmed that simple core and flake tools were produced at temporally and culturally independent sites until after 30 kya, supporting the hypothesis of a wide and persistent use of simple technology into the Late Pleistocene.
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Published: 10-15-2014
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools.
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Application of MassSQUIRM for Quantitative Measurements of Lysine Demethylase Activity
Authors: Lauren P. Blair, Nathan L. Avaritt, Alan J. Tackett.
Institutions: University of Arkansas for Medical Sciences.
Recently, epigenetic regulators have been discovered as key players in many different diseases1-3. As a result, these enzymes are prime targets for small molecule studies and drug development4. Many epigenetic regulators have only recently been discovered and are still in the process of being classified. Among these enzymes are lysine demethylases, which remove methyl groups from lysines on histones and other proteins. Due to the novel nature of this class of enzymes, few assays have been developed to study their activity, which has been a roadblock to both the classification and high-throughput study of histone demethylases. The few demethylase assays that do exist tend to be qualitative in nature and cannot simultaneously discern between the different lysine methylation states (un-, mono-, di- and tri-). Mass spectrometry is commonly used to determine demethylase activity, but current mass spectrometric assays do not address whether differentially methylated peptides ionize differently. Differential ionization of methylated peptides makes comparing methylation states difficult and certainly not quantitative (Figure 1A). Thus available assays are not optimized for the comprehensive analysis of demethylase activity. Here we describe a method called MassSQUIRM (mass spectrometric quantitation using isotopic reductive methylation) that is based on reductive methylation of amine groups with deuterated formaldehyde to force all lysines to be di-methylated, thus making them essentially the same chemical species and therefore making them ionize identically (Figure 1B). The only chemical difference following the reductive methylation is hydrogen and deuterium, which does not affect MALDI ionization efficiencies. The MassSQUIRM assay is specific for demethylase reaction products with un-, mono- or di-methylated lysines. The assay is also applicable to lysine methyltransferases giving the same reaction products.
Here, we use a combination of reductive methylation chemistry and MALDI mass spectrometry to measure the activity of LSD1, a lysine demethylase capable of removing di- and mono-methyl groups, on a synthetic peptide substrate5. This assay is simple and easily amenable to any lab with access to a MALDI mass spectrometer, either in-house or through a proteomics facility. The assay has an ~8-fold dynamic range and is readily scalable to plate format5.
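The quantitation step can be sketched numerically: once reductive methylation forces every lysine to the di-methyl state, the original methylation states are distinguished only by their hydrogen/deuterium mass shifts and ionize equivalently, so relative abundance reduces to a ratio of integrated peak intensities. A minimal Python sketch; the function name and peak values are illustrative, not part of the published assay:

```python
# Hypothetical sketch of the MassSQUIRM quantitation step. After reductive
# methylation with deuterated formaldehyde, the un-, mono- and di-methylated
# species ionize equivalently and differ only in mass, so quantitation is a
# simple ratio of integrated MALDI peak intensities.

def methylation_fractions(intensities):
    """Return the fractional abundance of each methylation state.

    intensities: dict mapping state name -> integrated MALDI peak intensity.
    """
    total = sum(intensities.values())
    if total == 0:
        raise ValueError("no signal in any peak")
    return {state: i / total for state, i in intensities.items()}

# Illustrative (made-up) peaks for an LSD1 reaction product:
peaks = {"unmethylated": 4.2e5, "monomethyl": 1.1e5, "dimethyl": 0.7e5}
fractions = methylation_fractions(peaks)
print({k: round(v, 3) for k, v in fractions.items()})
```

Applied to each spectrum of a demethylase time course, such a calculation would trace the conversion of di- to mono- to unmethylated substrate.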
Molecular Biology, Issue 61, LSD1, lysine demethylase, mass spectrometry, reductive methylation, demethylase quantification
A Contusion Model of Severe Spinal Cord Injury in Rats
Authors: Vibhor Krishna, Hampton Andrews, Xing Jin, Jin Yu, Abhay Varma, Xuejun Wen, Mark Kindy.
Institutions: Medical University of South Carolina, Clemson University, Clemson-MUSC Bioengineering Joint Program.
The translational potential of novel treatments should be investigated in severe spinal cord injury (SCI) contusion models. A detailed methodology is described to obtain a consistent model of severe SCI. Use of a stereotactic frame and computer controlled impactor allows for creation of reproducible injury. Hypothermia and urinary tract infection pose significant challenges in the post-operative period. Careful monitoring of animals with daily weight recording and bladder expression allows for early detection of post-operative complications. The functional results of this contusion model are equivalent to transection models. The contusion model can be utilized to evaluate the efficacy of both neuroprotective and neuroregenerative approaches.
Biomedical Engineering, Issue 78, Medicine, Neurobiology, Neuroscience, Anatomy, Physiology, Surgery, Cerebrovascular Trauma, Spinal Cord Injuries, spinal cord injury model, contusion spinal cord injury, spinal cord contusion, translational spinal cord injury model, animal model
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
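The torque measurement in magnetic torque tweezers rests on equipartition: the variance of the bead's angular fluctuations calibrates the angular trap stiffness, and the mean angular shift then yields the stored torque. A simplified analysis sketch, assuming a calibrated angle trace in radians; the constant and function name are ours, not the authors':

```python
import numpy as np

KBT = 4.11  # thermal energy at ~25 °C, in pN·nm (assumed room-temperature value)

def torque_from_angle_trace(theta, theta0=0.0):
    """Estimate the torque stored in a tethered molecule from an angular trace.

    theta: measured bead rotation angle over time, in radians.
    theta0: equilibrium angle of the torsionally relaxed tether.
    Equipartition gives the angular trap stiffness k = kBT / Var(theta);
    the mean angular displacement then gives torque = k * (<theta> - theta0),
    in pN·nm. A sketch of the standard analysis, not the authors' pipeline.
    """
    theta = np.asarray(theta, dtype=float)
    k = KBT / theta.var()              # angular stiffness, pN·nm per rad
    return k * (theta.mean() - theta0)
```

The same calculation with a vanishing transverse field (very small k) corresponds to the freely-orbiting configuration, where the angle itself, rather than torque, is the observable.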
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Flexural Rigidity Measurements of Biopolymers Using Gliding Assays
Authors: Douglas S. Martin, Lu Yu, Brian L. Van Hoozen.
Institutions: Lawrence University.
Microtubules are cytoskeletal polymers which play a role in cell division, cell mechanics, and intracellular transport. Each of these functions requires microtubules that are stiff and straight enough to span a significant fraction of the cell diameter. As a result, the microtubule persistence length, a measure of stiffness, has been actively studied for the past two decades1. Nonetheless, open questions remain: short microtubules are 10-50 times less stiff than long microtubules2-4, and even long microtubules have measured persistence lengths which vary by an order of magnitude5-9. Here, we present a method to measure microtubule persistence length. The method is based on a kinesin-driven microtubule gliding assay10. By combining sparse fluorescent labeling of individual microtubules with single particle tracking of individual fluorophores attached to the microtubule, the gliding trajectories of single microtubules are tracked with nanometer-level precision. The persistence length of the trajectories is the same as the persistence length of the microtubule under the conditions used11. An automated tracking routine is used to create microtubule trajectories from fluorophores attached to individual microtubules, and the persistence length of this trajectory is calculated using routines written in IDL. This technique is rapidly implementable, and capable of measuring the persistence length of 100 microtubules in one day of experimentation. The method can be extended to measure persistence length under a variety of conditions, including persistence length as a function of length along microtubules. Moreover, the analysis routines used can be extended to myosin-based actin gliding assays, to measure the persistence length of actin filaments as well.
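The trajectory analysis can be sketched as a tangent-angle correlation fit: for a 2D path, the worm-like-chain model predicts that the correlation of tangent angles decays as exp(-s/2Lp) with contour separation s. The function below is a simplified Python stand-in for the published IDL routines, assuming points sampled at equal contour spacing:

```python
import numpy as np

def persistence_length_2d(x, y, ds):
    """Estimate persistence length Lp from a 2D gliding trajectory.

    x, y: trajectory coordinates sampled at equal contour spacing ds.
    Uses the 2D worm-like-chain relation
        <cos(theta(s) - theta(0))> = exp(-s / (2 * Lp)),
    fitting the log of the tangent-angle correlation against contour
    distance. A simplified stand-in for the published IDL analysis.
    """
    theta = np.arctan2(np.diff(y), np.diff(x))      # tangent angles
    max_lag = min(len(theta) // 4, 500)
    lags = np.arange(1, max_lag)
    corr = np.array([np.cos(theta[lag:] - theta[:-lag]).mean() for lag in lags])
    s = lags * ds
    # Linear fit of log-correlation vs contour distance: slope = -1/(2*Lp)
    slope = np.polyfit(s, np.log(np.clip(corr, 1e-12, None)), 1)[0]
    return -1.0 / (2.0 * slope)
```

Running this on each tracked microtubule and pooling the estimates would yield the per-filament persistence-length distribution the abstract describes.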
Biophysics, Issue 69, Bioengineering, Physics, Molecular Biology, Cellular Biology, microtubule, persistence length, flexural rigidity, gliding assay, mechanics, cytoskeleton, actin
ScanLag: High-throughput Quantification of Colony Growth and Lag Time
Authors: Irit Levin-Reisman, Ofer Fridman, Nathalie Q. Balaban.
Institutions: The Hebrew University of Jerusalem.
Growth dynamics are fundamental characteristics of microorganisms. Quantifying growth precisely is an important goal in microbiology. Growth dynamics are affected both by the doubling time of the microorganism and by any delay in growth upon transfer from one condition to another, the lag. The ScanLag method enables the characterization of these two independent properties at the level of colonies, each originating from a single cell, generating a two-dimensional distribution of the lag time and of the growth time. In ScanLag, measurement of the time it takes for colonies on conventional nutrient agar plates to be detected is automated on an array of commercial scanners controlled by an in-house application. Petri dishes are placed on the scanners, and the application acquires images periodically. Automated analysis of colony growth is then done by an application that returns the appearance time and growth rate of each colony. Other parameters, such as the shape, texture and color of the colony, can be extracted for multidimensional mapping of sub-populations of cells. Finally, the method enables the retrieval of rare variants with specific growth phenotypes for further characterization. The technique could be applied in bacteriology for the identification of long lag times that can cause persistence to antibiotics, and as a general low-cost technique for phenotypic screens.
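The per-colony readout can be sketched as follows, assuming image segmentation has already produced a colony-area time series; the detection threshold and function name are hypothetical simplifications of what the ScanLag application does:

```python
def appearance_time_and_rate(times, areas, detect_area=1.0):
    """Extract colony appearance time and growth rate from a time series.

    times: image acquisition times (e.g. hours); areas: colony area at
    each time. Appearance time = first time the colony exceeds
    detect_area; growth rate = least-squares slope of area vs time over
    the detected frames. A minimal sketch; the published application
    performs full image segmentation and colony tracking first.
    """
    detected = [(t, a) for t, a in zip(times, areas) if a >= detect_area]
    if not detected:
        return None, None          # colony never appeared
    t_app = detected[0][0]
    n = len(detected)
    mt = sum(t for t, _ in detected) / n
    ma = sum(a for _, a in detected) / n
    num = sum((t - mt) * (a - ma) for t, a in detected)
    den = sum((t - mt) ** 2 for t, _ in detected)
    rate = num / den if den else 0.0
    return t_app, rate
```

Collecting (t_app, rate) pairs over all colonies on a plate gives the two-dimensional lag/growth distribution described above.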
Immunology, Issue 89, lag, growth rate, growth delay, single cell, scanners, image analysis, persistence, resistance, rare mutants, phenotypic screens, phenomics
A Mouse Model for Pathogen-induced Chronic Inflammation at Local and Systemic Sites
Authors: George Papadopoulos, Carolyn D. Kramer, Connie S. Slocum, Ellen O. Weinberg, Ning Hua, Cynthia V. Gudino, James A. Hamilton, Caroline A. Genco.
Institutions: Boston University School of Medicine, Boston University School of Medicine.
Chronic inflammation is a major driver of pathological tissue damage and a unifying characteristic of many chronic diseases in humans including neoplastic, autoimmune, and chronic inflammatory diseases. Emerging evidence implicates pathogen-induced chronic inflammation in the development and progression of chronic diseases with a wide variety of clinical manifestations. Due to the complex and multifactorial etiology of chronic disease, designing experiments for proof of causality and the establishment of mechanistic links is nearly impossible in humans. An advantage of using animal models is that both genetic and environmental factors that may influence the course of a particular disease can be controlled. Thus, designing relevant animal models of infection represents a key step in identifying host and pathogen specific mechanisms that contribute to chronic inflammation. Here we describe a mouse model of pathogen-induced chronic inflammation at local and systemic sites following infection with the oral pathogen Porphyromonas gingivalis, a bacterium closely associated with human periodontal disease. Oral infection of specific-pathogen free mice induces a local inflammatory response resulting in destruction of tooth supporting alveolar bone, a hallmark of periodontal disease. In an established mouse model of atherosclerosis, infection with P. gingivalis accelerates inflammatory plaque deposition within the aortic sinus and innominate artery, accompanied by activation of the vascular endothelium, an increased immune cell infiltrate, and elevated expression of inflammatory mediators within lesions. We detail methodologies for the assessment of inflammation at local and systemic sites. The use of transgenic mice and defined bacterial mutants makes this model particularly suitable for identifying both host and microbial factors involved in the initiation, progression, and outcome of disease. 
Additionally, the model can be used to screen for novel therapeutic strategies, including vaccination and pharmacological intervention.
Immunology, Issue 90, Pathogen-Induced Chronic Inflammation; Porphyromonas gingivalis; Oral Bone Loss; Periodontal Disease; Atherosclerosis; Chronic Inflammation; Host-Pathogen Interaction; microCT; MRI
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
Method for Simultaneous fMRI/EEG Data Collection during a Focused Attention Suggestion for Differential Thermal Sensation
Authors: Pamela K. Douglas, Maureen Pisani, Rory Reid, Austin Head, Edward Lau, Ebrahim Mirakhor, Jennifer Bramen, Billi Gordon, Ariana Anderson, Wesley T. Kerr, Chajoon Cheong, Mark S. Cohen.
Institutions: University of California, Los Angeles, University of California, Los Angeles, Yale School of Medicine, Korean Basic Science Institute.
In the present work, we demonstrate a method for concurrent collection of EEG/fMRI data. In our setup, EEG data are collected using a high-density 256-channel sensor net. The EEG amplifier itself is contained in a field isolation containment system (FICS), and MRI clock signals are synchronized with EEG data collection for subsequent MR artifact characterization and removal. We demonstrate this method first for resting state data collection. Thereafter, we demonstrate a protocol for EEG/fMRI data recording while subjects listen to a tape asking them to visualize that their left hand is immersed in a cold-water bath, referred to here as the cold glove paradigm. Thermal differentials between each hand are measured throughout EEG/fMRI data collection using an MR-compatible temperature sensor that we developed for this purpose. We collect cold glove EEG/fMRI data along with simultaneous differential hand temperature measurements both before and after hypnotic induction. Between the pre and post sessions, single-modality EEG data are collected during the hypnotic induction and depth assessment process. Our representative results demonstrate that significant changes in the EEG power spectrum can be measured during hypnotic induction, and that hand temperature changes during the cold glove paradigm can be detected rapidly using our MR-compatible differential thermometry device.
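A change in the EEG power spectrum of the kind reported here is typically quantified as power within a frequency band. The helper below is a minimal periodogram sketch (our naming); real concurrent EEG/fMRI data would first need gradient- and ballistocardiogram-artifact removal, which this omits:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Sum periodogram power of one EEG channel within a frequency band.

    signal: 1D array of samples; fs: sampling rate in Hz; f_lo/f_hi:
    band edges in Hz. Uses a plain FFT periodogram with no windowing
    or artifact correction, so this is only an illustrative measure.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()
```

Comparing, say, alpha-band (8-12 Hz) power before and after induction on each channel would reproduce the kind of spectral contrast described in the results.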
Behavior, Issue 83, hypnosis, EEG, fMRI, MRI, cold glove, MRI compatible, temperature sensor
Super-resolution Imaging of the Cytokinetic Z Ring in Live Bacteria Using Fast 3D-Structured Illumination Microscopy (f3D-SIM)
Authors: Lynne Turnbull, Michael P. Strauss, Andrew T. F. Liew, Leigh G. Monahan, Cynthia B. Whitchurch, Elizabeth J. Harry.
Institutions: University of Technology, Sydney.
Imaging of biological samples using fluorescence microscopy has advanced substantially with new technologies to overcome the resolution barrier of the diffraction of light, allowing super-resolution imaging of live samples. There are currently three main types of super-resolution techniques – stimulated emission depletion (STED), single-molecule localization microscopy (including techniques such as PALM, STORM, and GSDIM), and structured illumination microscopy (SIM). While STED and single-molecule localization techniques show the largest increases in resolution, they have been slower to offer increased speeds of image acquisition. Three-dimensional SIM (3D-SIM) is a wide-field fluorescence microscopy technique that offers a number of advantages over both single-molecule localization and STED. Resolution is improved, with typical lateral and axial resolutions of 110 and 280 nm, respectively, and a sampling depth of up to 30 µm from the coverslip, allowing for imaging of whole cells. Recent advancements in the technology (fast 3D-SIM) increase the capture rate of raw images, allowing fast capture of biological processes occurring in seconds, while significantly reducing photo-toxicity and photobleaching. Here we describe the use of one such method to image bacterial cells harboring the fluorescently-labelled cytokinetic FtsZ protein to show how cells are analyzed and the type of unique information that this technique can provide.
Molecular Biology, Issue 91, super-resolution microscopy, fluorescence microscopy, OMX, 3D-SIM, Blaze, cell division, bacteria, Bacillus subtilis, Staphylococcus aureus, FtsZ, Z ring constriction
Genetic Manipulation in Δku80 Strains for Functional Genomic Analysis of Toxoplasma gondii
Authors: Leah M. Rommereim, Miryam A. Hortua Triana, Alejandra Falla, Kiah L. Sanders, Rebekah B. Guevara, David J. Bzik, Barbara A. Fox.
Institutions: The Geisel School of Medicine at Dartmouth.
Targeted genetic manipulation using homologous recombination is the method of choice for functional genomic analysis to obtain a detailed view of gene function and phenotype(s). The development of mutant strains with targeted gene deletions, targeted mutations, complemented gene function, and/or tagged genes provides powerful strategies to address gene function, particularly if these genetic manipulations can be efficiently targeted to the gene locus of interest using integration mediated by double cross over homologous recombination. Due to very high rates of nonhomologous recombination, functional genomic analysis of Toxoplasma gondii has been previously limited by the absence of efficient methods for targeting gene deletions and gene replacements to specific genetic loci. Recently, we abolished the major pathway of nonhomologous recombination in type I and type II strains of T. gondii by deleting the gene encoding the KU80 protein1,2. The Δku80 strains behave normally during tachyzoite (acute) and bradyzoite (chronic) stages in vitro and in vivo and exhibit essentially a 100% frequency of homologous recombination. The Δku80 strains make functional genomic studies feasible on the single gene as well as on the genome scale1-4. Here, we report methods for using type I and type II Δku80Δhxgprt strains to advance gene targeting approaches in T. gondii. We outline efficient methods for generating gene deletions, gene replacements, and tagged genes by targeted insertion or deletion of the hypoxanthine-xanthine-guanine phosphoribosyltransferase (HXGPRT) selectable marker. The described gene targeting protocol can be used in a variety of ways in Δku80 strains to advance functional analysis of the parasite genome and to develop single strains that carry multiple targeted genetic manipulations. The application of this genetic method and subsequent phenotypic assays will reveal fundamental and unique aspects of the biology of T. gondii and related significant human pathogens that cause malaria (Plasmodium sp.) and cryptosporidiosis (Cryptosporidium).
Infectious Diseases, Issue 77, Genetics, Microbiology, Infection, Medicine, Immunology, Molecular Biology, Cellular Biology, Biomedical Engineering, Bioengineering, Genomics, Parasitology, Pathology, Apicomplexa, Coccidia, Toxoplasma, Genetic Techniques, Gene Targeting, Eukaryota, Toxoplasma gondii, genetic manipulation, gene targeting, gene deletion, gene replacement, gene tagging, homologous recombination, DNA, sequencing
Evaluating Plasmonic Transport in Current-carrying Silver Nanowires
Authors: Mingxia Song, Arnaud Stolz, Douguo Zhang, Juan Arocas, Laurent Markey, Gérard Colas des Francs, Erik Dujardin, Alexandre Bouhelier.
Institutions: Université de Bourgogne, University of Science and Technology of China, CEMES, CNRS-UPR 8011.
Plasmonics is an emerging technology capable of simultaneously transporting a plasmonic signal and an electronic signal on the same information support1,2,3. In this context, metal nanowires are especially desirable for realizing dense routing networks4. A prerequisite for operating such a shared nanowire-based platform is the ability to electrically contact individual metal nanowires and efficiently excite surface plasmon polaritons5 in this information support. In this article, we describe a protocol to bring electrical terminals to chemically-synthesized silver nanowires6 randomly distributed on a glass substrate7. The positions of the nanowire ends with respect to predefined landmarks are precisely located using standard optical transmission microscopy before encapsulation in an electron-sensitive resist. Trenches representing the electrode layout are subsequently designed by electron-beam lithography. Metal electrodes are then fabricated by thermally evaporating a Cr/Au layer followed by a chemical lift-off. The contacted silver nanowires are finally transferred to a leakage radiation microscope for surface plasmon excitation and characterization8,9. Surface plasmons are launched in the nanowires by focusing a near infrared laser beam on a diffraction-limited spot overlapping one nanowire extremity5,9. For sufficiently large nanowires, the surface plasmon mode leaks into the glass substrate9,10. This leakage radiation is readily detected, imaged, and analyzed in the different conjugate planes in leakage radiation microscopy9,11. The electrical terminals do not affect the plasmon propagation. However, a current-induced morphological deterioration of the nanowire drastically degrades the flow of surface plasmons. The combination of surface plasmon leakage radiation microscopy with a simultaneous analysis of the nanowire electrical transport characteristics reveals the intrinsic limitations of such plasmonic circuitry.
Physics, Issue 82, light transmission, optical waveguides, photonics, plasma oscillations, plasma waves, electron motion in conductors, nanofabrication, Information Transport, plasmonics, Silver Nanowires, Leakage radiation microscopy, Electromigration
Measuring Ascending Aortic Stiffness In Vivo in Mice Using Ultrasound
Authors: Maggie M. Kuo, Viachaslau Barodka, Theodore P. Abraham, Jochen Steppan, Artin A. Shoukas, Mark Butlin, Alberto Avolio, Dan E. Berkowitz, Lakshmi Santhanam.
Institutions: Johns Hopkins University, Johns Hopkins University, Johns Hopkins University, Macquarie University.
We present a protocol for measuring in vivo aortic stiffness in mice using high-resolution ultrasound imaging. Aortic diameter is measured by ultrasound and aortic blood pressure is measured invasively with a solid-state pressure catheter. Blood pressure is raised then lowered incrementally by intravenous infusion of the vasoactive drugs phenylephrine and sodium nitroprusside. Aortic diameter is measured for each pressure step to characterize the pressure-diameter relationship of the ascending aorta. Stiffness indices derived from the pressure-diameter relationship can be calculated from the data collected. Calculation of arterial compliance is described in this protocol. This technique can be used to investigate mechanisms underlying increased aortic stiffness associated with cardiovascular disease and aging. The technique produces a physiologically relevant measure of stiffness compared to ex vivo approaches because physiological influences on aortic stiffness are incorporated in the measurement. The primary limitation of this technique is the measurement error introduced from the movement of the aorta during the cardiac cycle. This motion can be compensated for by adjusting the location of the probe with the aortic movement, as well as by making multiple measurements of the aortic pressure-diameter relationship and expanding the experimental group size.
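Given the pressure-diameter pairs collected at each step, compliance can be computed as the slope of lumen area versus pressure. The sketch below uses that common definition; the protocol may additionally report other indices (e.g. distensibility or elastic modulus) from the same data:

```python
import math

def arterial_compliance(pressures, diameters):
    """Compliance from paired pressure (mmHg) and diameter (mm) steps.

    Converts each diameter to lumen cross-sectional area (A = pi/4 * d^2)
    and returns the least-squares slope dA/dP in mm^2/mmHg. One common
    compliance definition, written as a simple illustrative helper.
    """
    areas = [math.pi / 4.0 * d * d for d in diameters]
    n = len(pressures)
    mp = sum(pressures) / n
    ma = sum(areas) / n
    num = sum((p - mp) * (a - ma) for p, a in zip(pressures, areas))
    den = sum((p - mp) ** 2 for p in pressures)
    return num / den
```

Averaging this slope over repeated pressure ramps, as the protocol suggests, would reduce the motion-related measurement error noted above.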
Medicine, Issue 94, Aortic stiffness, ultrasound, in vivo, aortic compliance, elastic modulus, mouse model, cardiovascular disease
Encapsulation and Permeability Characteristics of Plasma Polymerized Hollow Particles
Authors: Anaram Shahravan, Themis Matsoukas.
Institutions: The Pennsylvania State University.
In this protocol, core-shell nanostructures are synthesized by plasma enhanced chemical vapor deposition. We produce an amorphous barrier by plasma polymerization of isopropanol on various solid substrates, including silica and potassium chloride. This versatile technique is used to treat nanoparticles and nanopowders with sizes ranging from 37 nm to 1 micron, by depositing films whose thickness can be anywhere from 1 nm to upwards of 100 nm. Dissolution of the core allows us to study the rate of permeation through the film. In these experiments, we determine the diffusion coefficient of KCl through the barrier film by coating KCl nanocrystals and subsequently monitoring the ionic conductivity of the coated particles suspended in water. The primary interest in this process is the encapsulation and delayed release of solutes. The thickness of the shell is one of the independent variables by which we control the rate of release. It has a strong effect on the release profile, which ranges from a six-hour release (20 nm shell thickness) to a long-term release over 30 days (95 nm shell thickness). The release profile shows a characteristic behavior: a fast burst release (35% of the total material) during the first five minutes of dissolution, followed by a slower release until all of the core material has been released.
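The conductivity readout in these release experiments maps to a release fraction under a simple linearity assumption. This helper (our naming, not part of the protocol) sketches that conversion:

```python
def fraction_released(sigma, sigma0, sigma_inf):
    """Fraction of encapsulated KCl released, from ionic conductivity.

    sigma: measured conductivity of the suspension; sigma0: baseline
    before dissolution begins; sigma_inf: value after complete core
    dissolution. Assumes conductivity is linear in dissolved KCl, a
    reasonable approximation only for dilute solutions; calibration
    against KCl standards would be advisable in practice.
    """
    return (sigma - sigma0) / (sigma_inf - sigma0)
```

Applying this to a conductivity time series yields the release profile from which the burst fraction and the long-time diffusion-limited regime can be read off.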
Physics, Issue 66, Chemical Engineering, Plasma Physics, Plasma coating, Core-shell structure, Hollow particles, Permeability, nanoparticles, nanopowders
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Authors: F. Mark Dunning, Timothy M. Piazza, Füsûn N. Zeytin, Ward C. Tucker.
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing, while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: the first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes, along with a discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A Microplate Assay to Assess Chemical Effects on RBL-2H3 Mast Cell Degranulation: Effects of Triclosan without Use of an Organic Solvent
Authors: Lisa M. Weatherly, Rachel H. Kennedy, Juyoung Shim, Julie A. Gosse.
Institutions: University of Maine, Orono, University of Maine, Orono.
Mast cells play important roles in allergic disease and immune defense against parasites. Once activated (e.g. by an allergen), they degranulate, a process that results in the exocytosis of allergic mediators. Modulation of mast cell degranulation by drugs and toxicants may have positive or adverse effects on human health. Mast cell function has been dissected in detail with the use of rat basophilic leukemia mast cells (RBL-2H3), a widely accepted model of human mucosal mast cells3-5. The mast cell granule component and allergic mediator β-hexosaminidase, which is released linearly in tandem with histamine from mast cells6, can easily and reliably be measured through reaction with a fluorogenic substrate, yielding measurable fluorescence intensity in a microplate assay that is amenable to high-throughput studies1. We have adapted this degranulation assay, originally published by Naal et al.1, for the screening of drugs and toxicants and demonstrate its use here. Triclosan is a broad-spectrum antibacterial agent that is present in many consumer products and has been found to be a therapeutic aid in human allergic skin disease7-11, although the mechanism for this effect is unknown. Here we demonstrate an assay for the effect of triclosan on mast cell degranulation. We recently showed that triclosan strongly affects mast cell function2. In an effort to avoid use of an organic solvent, triclosan is dissolved directly into aqueous buffer with heat and stirring, and the resultant concentration is confirmed using UV-Vis spectrophotometry (using ε280 = 4,200 L/M/cm)12. This protocol has the potential to be used with a variety of chemicals to determine their effects on mast cell degranulation, and more broadly, their allergic potential.
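The UV-Vis concentration check is a direct application of the Beer-Lambert law (A = ε·c·l). A minimal sketch of the arithmetic, assuming a standard 1 cm cuvette path length (the path length is not stated in the abstract):

```python
def triclosan_concentration(a280, epsilon=4200.0, path_cm=1.0):
    """Beer-Lambert law, A = epsilon * c * l, solved for molar concentration.

    epsilon = 4,200 L/mol/cm for triclosan at 280 nm (value from the protocol);
    returns the concentration in mol/L.
    """
    return a280 / (epsilon * path_cm)

# e.g. an absorbance of 0.042 corresponds to a 10 micromolar solution
c = triclosan_concentration(0.042)  # 1e-05 mol/L
```

In practice the measured absorbance should be blanked against the aqueous buffer alone before applying the conversion.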
Immunology, Issue 81, mast cell, basophil, degranulation, RBL-2H3, triclosan, irgasan, antibacterial, β-hexosaminidase, allergy, Asthma, toxicants, ionophore, antigen, fluorescence, microplate, UV-Vis
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects them to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
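The analysis pipeline described above operates on large time-stamped event records. As a language-neutral illustration (the authors' code is MATLAB-based; the record format here is hypothetical), daily tallying of events per type can be sketched as:

```python
from collections import defaultdict

def daily_event_counts(events):
    """Tally time-stamped behavioral events per (day, event type).

    `events` is a list of (timestamp_seconds, event_type) tuples, standing in
    for the raw head-entry/feeder records the live-in system logs; the exact
    record format here is illustrative, not the authors'.
    """
    counts = defaultdict(int)
    for t, kind in events:
        day = int(t // 86400)          # bin by 24 h day
        counts[(day, kind)] += 1
    return dict(counts)

log = [(10.0, "head_entry"), (20.5, "pellet"), (90000.0, "head_entry")]
daily_event_counts(log)
# {(0, 'head_entry'): 1, (0, 'pellet'): 1, (1, 'head_entry'): 1}
```

Running such a harvest several times a day, as the system does, turns the raw event trail into daily per-subject summaries without discarding the underlying records.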
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Authors: Monica Soldi, Tiziana Bonaldi.
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions is regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the appropriate purity and amount for subsequent in-depth analysis by mass spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setting up of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it to be a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation , histone variants, chromatome, hPTMs cross-talks
A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry
Authors: Kerstin Trompelt, Janina Steinbeck, Mia Terashima, Michael Hippler.
Institutions: University of Münster, Carnegie Institution for Science.
The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane, revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent-solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass spectrometry based on equal volume. This approach allows the investigation of the composition of the gradient fractions and, moreover, the analysis of the migration behavior of different proteins, with a particular focus on ANR1, CAS, and PGRL1. Furthermore, the method is validated by confirming the results with immunoblotting and by supporting the findings of previous studies (the identification and PSI-dependent migration of proteins previously described as part of the CEF supercomplex, such as PGRL1, FNR, and cyt f). Notably, this approach is applicable to a broad range of questions; the protocol can be adopted, for example, for comparative analyses of multiprotein complex composition isolated under distinct environmental conditions.
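In differential 14N/15N labeling, the relative abundance of a protein between the two strains is derived from the light/heavy intensity ratios of its peptides. A minimal sketch of that reduction step (illustrative only; the authors' actual MS quantification pipeline is not specified in the abstract):

```python
def protein_ratio(peptide_pairs):
    """Median 14N/15N intensity ratio across the peptides of one protein.

    `peptide_pairs` is a list of (light_intensity, heavy_intensity) tuples;
    taking the median makes the protein-level ratio robust against a few
    outlier peptides.
    """
    ratios = sorted(l / h for l, h in peptide_pairs if h > 0)
    n = len(ratios)
    mid = n // 2
    return ratios[mid] if n % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])

# three peptides of one protein, light (14N) vs heavy (15N) intensities:
protein_ratio([(2e6, 1e6), (3e6, 1e6), (8e6, 1e6)])  # 3.0
```

Computing such ratios fraction by fraction across the sucrose gradient is what makes the migration behavior of proteins like ANR1, CAS, and PGRL1 comparable between strains.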
Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
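The simplest of the fitting approaches mentioned above treats the melting temperature shift as a single-site binding isotherm, Tm(L) = Tm0 + ΔTm_max · [L]/(Kd + [L]). A hedged sketch of estimating Kd from such data (a toy grid search on synthetic data, not the authors' fitting procedure, which also handles cooperative and multi-site models):

```python
def tm_model(lig, tm0, dtm_max, kd):
    """Single-site binding model for the ligand-induced Tm shift."""
    return tm0 + dtm_max * lig / (kd + lig)

def fit_kd(ligs, tms, tm0, dtm_max):
    """Grid-search the Kd minimizing squared error (illustrative only; a real
    analysis would fit all parameters simultaneously, e.g. by least squares).
    """
    best_kd, best_sse = None, float("inf")
    for i in range(-300, 100):              # log grid, Kd from ~3e-8 to ~3e2 M
        kd = 10 ** (i / 40.0)
        sse = sum((tm_model(l, tm0, dtm_max, kd) - t) ** 2
                  for l, t in zip(ligs, tms))
        if sse < best_sse:
            best_kd, best_sse = kd, sse
    return best_kd

ligs = [0.0, 1e-6, 3e-6, 1e-5, 3e-5, 1e-4, 1e-3]           # ligand, mol/L
tms = [tm_model(l, 55.0, 6.0, 1e-5) for l in ligs]          # synthetic, Kd = 10 uM
fit_kd(ligs, tms, 55.0, 6.0)  # ~1e-5
```

With real DSF data the Tm values would come from the inflection points of the melt curves, and the fit quality should be checked over a ligand range spanning well above and below the recovered Kd.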
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Optimized Negative Staining: a High-throughput Protocol for Examining Small and Asymmetric Protein Structure by Electron Microscopy
Authors: Matthew Rames, Yadong Yu, Gang Ren.
Institutions: The Molecular Foundry.
Structural determination is rather challenging for proteins with molecular masses between 40 - 200 kDa. Considering that more than half of natural proteins have a molecular mass between 40 - 200 kDa1,2, a robust and high-throughput method with nanometer resolution capability is needed. Negative staining (NS) electron microscopy (EM) is an easy, rapid, and qualitative approach which has frequently been used in research laboratories to examine protein structure and protein-protein interactions. Unfortunately, conventional NS protocols often generate structural artifacts on proteins, especially with lipoproteins, which typically present rouleaux artifacts. By using cryo-electron microscopy (cryo-EM) images of lipoproteins as a standard, the key parameters in NS specimen preparation conditions were recently screened and reported as the optimized NS protocol (OpNS), a modified conventional NS protocol3. Artifacts like rouleaux can be greatly limited by OpNS, which additionally provides high contrast along with reasonably high-resolution (near 1 nm) images of small and asymmetric proteins. These high-resolution, high-contrast images are even suitable for 3D reconstruction of an individual protein (a single object, no averaging), such as a 160 kDa antibody, through the method of electron tomography4,5. Moreover, OpNS can be a high-throughput tool to examine hundreds of samples of small proteins. For example, the previously published mechanism of the 53 kDa cholesteryl ester transfer protein (CETP) involved the screening and imaging of hundreds of samples6. Considering that cryo-EM rarely succeeds in imaging proteins smaller than 200 kDa, and that no cryo-EM study involving the screening of over one hundred sample conditions has yet been published, it is fair to call OpNS a high-throughput method for studying small proteins.
Hopefully the OpNS protocol presented here can be a useful tool to push the boundaries of EM and accelerate EM studies into small protein structure, dynamics and mechanisms.
Environmental Sciences, Issue 90, small and asymmetric protein structure, electron microscopy, optimized negative staining
Conducting Miller-Urey Experiments
Authors: Eric T. Parker, James H. Cleaves, Aaron S. Burton, Daniel P. Glavin, Jason P. Dworkin, Manshui Zhou, Jeffrey L. Bada, Facundo M. Fernández.
Institutions: Georgia Institute of Technology, Tokyo Institute of Technology, Institute for Advanced Study, NASA Johnson Space Center, NASA Goddard Space Flight Center, University of California at San Diego.
In 1953, Stanley Miller reported the production of biomolecules from simple gaseous starting materials, using an apparatus constructed to simulate the primordial Earth's atmosphere-ocean system. Miller introduced 200 ml of water, 100 mmHg of H2, 200 mmHg of CH4, and 200 mmHg of NH3 into the apparatus, then subjected this mixture, under reflux, to an electric discharge for a week, while the water was simultaneously heated. The purpose of this manuscript is to provide the reader with a general experimental protocol that can be used to conduct a Miller-Urey type spark discharge experiment, using a simplified 3 L reaction flask. Since the experiment involves exposing flammable gases to a high-voltage electric discharge, it is worth highlighting important steps that reduce the risk of explosion. The general procedures described in this work can be extrapolated to design and conduct a wide variety of electric discharge experiments simulating primitive planetary environments.
Chemistry, Issue 83, Geosciences (General), Exobiology, Miller-Urey, Prebiotic chemistry, amino acids, spark discharge
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Authors: Gauthier Julie, Fadi F. Hamdan, Guy A. Rouleau.
Institutions: Universite de Montreal, Universite de Montreal, Universite de Montreal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1 and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant relatively high prevalence in the worldwide population across a wide range of cultures and countries despite a strong negative selection against them2. 
This is not what one would predict for diseases with reduced reproductive fitness, unless there is a high new-mutation rate. Finally, there are the effects of paternal age: the risk of disease increases significantly with paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia3. The male-to-female ratio of mutation rate is estimated at about 4-6:1, presumably due to the higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males4. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why disease genes have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining whether the genetic basis of a disease may result in part from de novo mutations, and the molecular approach to establish this link, will be illustrated using autism and schizophrenia as examples.
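The mutational-load argument above can be made quantitative with the classic deterministic mutation-selection balance from population genetics. A back-of-the-envelope sketch (textbook approximation for a dominant deleterious variant, not a model from the article):

```python
def required_mutation_rate(prevalence, s):
    """Deterministic mutation-selection balance for a dominant deleterious
    variant: equilibrium frequency q is approximately mu / s, so the
    per-generation mutation rate needed to sustain a given prevalence is
    mu = q * s. (Standard textbook approximation, illustrative only.)
    """
    return prevalence * s

# Sustaining ~1% prevalence against a 50% fitness reduction would require a
# per-generation mutation rate toward the disease state of ~0.005, orders of
# magnitude above typical per-locus rates (~1e-6 to 1e-5). This is consistent
# with many different loci each contributing de novo mutations.
required_mutation_rate(0.01, 0.5)  # 0.005
```

The gap between the required aggregate rate and any single locus's rate is exactly why a targeted, family-based sequencing approach across many candidate genes is needed.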
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
Interview: Protein Folding and Studies of Neurodegenerative Diseases
Authors: Susan Lindquist.
Institutions: MIT - Massachusetts Institute of Technology.
In this interview, Dr. Lindquist describes relationships between protein folding, prion diseases and neurodegenerative disorders. The problem of protein folding is at the core of modern biology. In addition to their traditional biochemical functions, proteins can mediate the transfer of biological information and therefore can be considered a genetic material. This recently discovered function of proteins has important implications for studies of human disorders. Dr. Lindquist also describes current experimental approaches to investigate the mechanism of neurodegenerative diseases based on genetic studies in model organisms.
Neuroscience, issue 17, protein folding, brain, neuron, prion, neurodegenerative disease, yeast, screen, Translational Research
Preparation and Using Phantom Lesions to Practice Fine Needle Aspiration Biopsies
Authors: Vinod B. Shidham, George M. Varsegi, Krista D'Amore, Anjani Shidham.
Institutions: University of Wisconsin - Milwaukee, BioInnovation LLC.
Currently, health workers including residents and fellows do not have a suitable phantom model to practice the fine-needle aspiration biopsy (FNAB) procedure. In the past, we standardized a model consisting of a latex glove containing fresh cattle liver for practicing FNAB. However, this model is difficult to organize and prepare on short notice, with the procurement of fresh cattle liver being the most challenging aspect. Handling of liver, with its contamination-related problems, is also a significant drawback. In addition, the glove material leaks after a few needle passes, creating a mess. We have established a novel, simple method of embedding a small piece of sausage or banana in commercially available silicone rubber caulk. This model allows the retention of a vacuum seal and aspiration of material from the embedded specimen, resembling an actual FNAB procedure on clinical mass lesions. The aspirated material in the needle hub can be processed similarly to specimens procured during an actual FNAB procedure, facilitating additional proficiency in smear preparation and staining.
Medicine, Issue 31, FNA, FNAB, Fine Needle Aspiration Biopsy, Proficiency, procedure, Cytopathology, cytology
Principles of Site-Specific Recombinase (SSR) Technology
Authors: Frank Buchholz.
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Site-specific recombinase (SSR) technology allows the manipulation of gene structure to explore gene function and has become an integral tool of molecular biology. Site-specific recombinases are proteins that bind to distinct DNA target sequences. The Cre/lox system was first described in bacteriophages during the 1980s. Cre recombinase is a Type I topoisomerase that catalyzes site-specific recombination of DNA between two loxP (locus of X-over P1) sites. The Cre/lox system does not require any cofactors. LoxP sequences contain distinct binding sites for Cre recombinases that surround a directional core sequence where recombination and rearrangement take place. When cells contain loxP sites and express the Cre recombinase, a recombination event occurs. Double-stranded DNA is cut at both loxP sites by the Cre recombinase, rearranged, and ligated ("scissors and glue"). Products of the recombination event depend on the relative orientation of the asymmetric sequences. SSR technology is frequently used as a tool to explore gene function. Here the gene of interest is flanked with Cre target sites loxP ("floxed"). Animals are then crossed with animals expressing the Cre recombinase under the control of a tissue-specific promoter. In tissues that express the Cre recombinase, it binds to target sequences and excises the floxed gene. Controlled gene deletion allows the investigation of gene function in specific tissues and at distinct time points. Analysis of gene function employing SSR technology (conditional mutagenesis) has significant advantages over traditional knockouts, where gene deletion is frequently lethal.
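The orientation dependence described above (same-orientation loxP sites give excision; opposite orientations give inversion) can be captured in a toy model. A minimal sketch, with a made-up list representation of a linear construct:

```python
def cre_recombine(segments):
    """Toy model of Cre/lox outcomes on a linear construct.

    `segments` is a list like ["promoter", "loxP>", "gene", "loxP>", "rest"],
    where "loxP>" / "loxP<" encode site orientation. Same orientation excises
    the intervening DNA (leaving one loxP site); opposite orientations invert
    it. Purely illustrative; real loxP sites are 34 bp sequences.
    """
    sites = [i for i, s in enumerate(segments) if s.startswith("loxP")]
    if len(sites) < 2:
        return segments[:]                            # nothing to recombine
    i, j = sites[0], sites[1]
    if segments[i] == segments[j]:                    # same orientation: excision
        return segments[:i + 1] + segments[j + 1:]
    inner = segments[i + 1:j][::-1]                   # opposite: inversion
    return segments[:i + 1] + inner + segments[j:]

cre_recombine(["promoter", "loxP>", "gene", "loxP>", "rest"])
# ['promoter', 'loxP>', 'rest']
```

In the conditional-knockout crosses described above, only tissues expressing Cre perform the excision step, which is what restricts gene deletion to the chosen tissue.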
Cellular Biology, Issue 15, Molecular Biology, Site-Specific Recombinase, Cre recombinase, Cre/lox system, transgenic animals, transgenic technology
Homemade Site Directed Mutagenesis of Whole Plasmids
Authors: Mark Laible, Kajohn Boonrod.
Institutions: Johannes Gutenberg-University Mainz, Germany, Neustadt an der Weinstrasse, Germany.
Site-directed mutagenesis of whole plasmids is a simple way to create slightly different variations of an original plasmid. With this method the cloned target gene can be altered by substitution, deletion or insertion of a few bases directly in the plasmid. It works by simply amplifying the whole plasmid in a non-PCR-based thermocycling reaction. During the reaction, mutagenic primers carrying the desired mutation are incorporated into the newly synthesized plasmid. In this video tutorial we demonstrate an easy and cost-effective way to introduce base substitutions into a plasmid. The protocol works with standard reagents and is independent of commercial kits, which are often very expensive. Applying this protocol can reduce the total cost of a reaction to an eighth of what it costs using some of the commercial kits. In this video we also comment on critical steps during the process and give detailed instructions on how to design the mutagenic primers.
Basic Protocols, Issue 27, Site directed Mutagenesis, Mutagenesis, Mutation, Plasmid, Thermocycling, PCR, Pfu-Polymerase, Dpn1, cost saving
Phase Contrast and Differential Interference Contrast (DIC) Microscopy
Authors: Victoria Centonze Frohlich.
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
Phase-contrast microscopy is often used to produce contrast for transparent, non-light-absorbing biological specimens. The technique was developed by Frits Zernike in the 1930s, and he received the 1953 Nobel Prize in Physics for the achievement. DIC microscopy, introduced in the late 1960s, has been popular in biomedical research because it highlights edges of specimen structural detail, provides high-resolution optical sections of thick specimens including tissue cells, eggs, and embryos, and does not suffer from the phase halos typical of phase-contrast images. This protocol highlights the principles and practical applications of these microscopy techniques.
Basic protocols, Issue 18, Current Protocols Wiley, Microscopy, Phase Contrast, Differential Interference Contrast
Molecular Evolution of the Tre Recombinase
Authors: Frank Buchholz.
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells. We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. As the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. These subsets acted as intermediates, and recombination activity was demonstrated with them. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase, Tre, had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1-infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany).
While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.