JoVE Visualize
PubMed Article
SynChro: a fast and easy tool to reconstruct and visualize synteny blocks along eukaryotic chromosomes.
PUBLISHED: 01-01-2014
Reconstructing synteny blocks is an essential step in comparative genomics studies. Various methods have already been developed to answer needs such as genome (re-)annotation, identification of duplicated regions and whole genome duplication events, or estimation of rearrangement rates. We present SynChro, a tool that reconstructs synteny blocks between pairwise comparisons of multiple genomes. SynChro is based on a simple algorithm that computes Reciprocal Best-Hits (RBH) to reconstruct the backbones of the synteny blocks and then automatically completes these blocks with non-RBH syntenic homologs. This approach has two main advantages: (i) synteny block reconstruction is fast (feasible on a desktop computer even for large eukaryotic genomes such as human) and (ii) synteny block reconstruction is straightforward, as all steps are integrated (no need to run Blast or TribeMCL prior to reconstruction) and there is only one parameter to set, the synteny block stringency Δ. Benchmarks on three pairwise genome comparisons, representing three different levels of synteny conservation (Human/Mouse, Human/Zebra Finch and Human/Zebrafish), show that SynChro runs faster and performs at least as well as two other commonly used and more sophisticated tools (MCScanX and i-ADHoRe). In addition, SynChro provides the user with a rich set of graphical outputs, including dotplots, chromosome paintings and detailed synteny maps, to visualize synteny blocks with all homology relationships and synteny breakpoints with all included genetic features. SynChro is freely available under the BSD license at
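To make the RBH backbone step concrete, here is a minimal Python sketch of reciprocal best-hit detection; it illustrates the general idea only, not SynChro's own code, and the gene names and similarity scores below are invented for the example.

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Return gene pairs that are each other's best hit (the RBH backbone).

    hits_ab maps each gene of genome A to a list of (gene_in_B, score)
    candidate homologs; hits_ba is the reverse comparison."""
    best_ab = {a: max(hs, key=lambda h: h[1])[0] for a, hs in hits_ab.items()}
    best_ba = {b: max(hs, key=lambda h: h[1])[0] for b, hs in hits_ba.items()}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]

# Toy similarity scores between genes of two genomes (invented numbers)
hits_ab = {"a1": [("b1", 90), ("b2", 40)], "a2": [("b2", 75)]}
hits_ba = {"b1": [("a1", 88)], "b2": [("a1", 35), ("a2", 70)]}
print(reciprocal_best_hits(hits_ab, hits_ba))  # [('a1', 'b1'), ('a2', 'b2')]
```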
Authors: Damien O'Halloran.
Published: 02-05-2014
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies, and also as an educational resource that could be incorporated into a classroom or teaching lab.
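As a small worked example of the tree-building step, here is a sketch using Biopython; note that it builds a quick Neighbor-Joining tree from pairwise distances rather than using the maximum likelihood or Bayesian criteria the article describes, and the alignment file name is a placeholder.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load a multiple sequence alignment produced earlier in the pipeline
aln = AlignIO.read("my_genes_aligned.fasta", "fasta")  # placeholder file name

# Pairwise identity distances, then a Neighbor-Joining tree
dm = DistanceCalculator("identity").get_distance(aln)
tree = DistanceTreeConstructor().nj(dm)

Phylo.draw_ascii(tree)  # quick text rendering of the resulting tree
```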
24 Related JoVE Articles!
Recombinant Retroviral Production and Infection of B Cells
Authors: Celia Keim, Veronika Grinstein, Uttiya Basu.
Institutions: Columbia University College of Physicians and Surgeons, Columbia University.
The transgenic expression of genes in eukaryotic cells is a powerful reverse genetic approach in which a gene of interest is expressed under the control of a heterologous expression system to facilitate the analysis of the resulting phenotype. This approach can be used to express a gene that is not normally found in the organism, to express a mutant form of a gene product, or to over-express a dominant-negative form of the gene product. It is particularly useful in the study of the hematopoietic system, where transcriptional regulation is a major control mechanism in the development and differentiation of B cells 1, reviewed in 2-4. Mouse genetics is a powerful tool for the study of human genes and diseases. A comparative analysis of the mouse and human genome reveals conservation of synteny in over 90% of the genome 5. Also, much of the technology used in mouse models is applicable to the study of human genes, for example, gene disruptions and allelic replacement 6. However, the creation of a transgenic mouse requires a great deal of resources, both financial and technical. Several projects have begun to compile libraries of knockout mouse strains (KOMP, EUCOMM, NorCOMM) or mutagenesis-induced strains (RIKEN), which require large-scale efforts and collaboration 7. Therefore, it is desirable to first study the phenotype of a desired gene in a cell culture model of primary cells before progressing to a mouse model. Retroviral DNA integrates into the host DNA, preferentially within or near transcription units or CpG islands, resulting in stable and heritable expression of the packaged gene of interest while avoiding transcriptional silencing 8,9. The genes are then transcribed under the control of a high-efficiency retroviral promoter, resulting in a high efficiency of transcription and protein production. Therefore, retroviral expression can be used with cells that are difficult to transfect, provided the cells are actively dividing, since retroviral integration requires passage through mitosis. Because the structural genes of the virus are contained within the packaging cell line, the expression vectors used to clone the gene of interest contain no structural genes of the virus, which both eliminates the possibility of viral revertants and increases the safety of working with viral supernatants, as no infectious virions are produced 10. Here we present a protocol for recombinant retroviral production and subsequent infection of splenic B cells. After isolation, the cultured splenic cells are stimulated with Th-derived lymphokines and anti-CD40, which induces a burst of B cell proliferation and differentiation 11. This protocol is ideal for the study of events occurring late in B cell development and differentiation, as B cells are isolated from the spleen following initial hematopoietic events but prior to antigenic stimulation to induce plasmacytic differentiation.
Infection, Issue 48, Retroviral transduction, primary B cells
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, Optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Fold specificity, Binding affinity, sequencing
Digital Inline Holographic Microscopy (DIHM) of Weakly-scattering Subjects
Authors: Camila B. Giuliano, Rongjing Zhang, Laurence G. Wilson.
Institutions: Harvard University, Universidade Estadual Paulista.
Weakly-scattering objects, such as small colloidal particles and most biological cells, are frequently encountered in microscopy. Indeed, a range of techniques have been developed to better visualize these phase objects; phase contrast and DIC are among the most popular methods for enhancing contrast. However, recording position and shape in the out-of-imaging-plane direction remains challenging. This report introduces a simple experimental method to accurately determine the location and geometry of objects in three dimensions, using digital inline holographic microscopy (DIHM). Broadly speaking, the accessible sample volume is defined by the camera sensor size in the lateral direction, and the illumination coherence in the axial direction. Typical sample volumes range from 200 µm x 200 µm x 200 µm using LED illumination, to 5 mm x 5 mm x 5 mm or larger using laser illumination. This illumination light is configured so that plane waves are incident on the sample. Objects in the sample volume then scatter light, which interferes with the unscattered light to form interference patterns perpendicular to the illumination direction. This image (the hologram) contains the depth information required for three-dimensional reconstruction, and can be captured on a standard imaging device such as a CMOS or CCD camera. The Rayleigh-Sommerfeld back propagation method is employed to numerically refocus microscope images, and a simple imaging heuristic based on the Gouy phase anomaly is used to identify scattering objects within the reconstructed volume. This simple but robust method results in an unambiguous, model-free measurement of the location and shape of objects in microscopic samples.
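The following NumPy sketch shows one common way to implement the numerical refocusing step (the angular spectrum form of Rayleigh-Sommerfeld back-propagation); the hologram, pixel size, wavelength, and depth range are placeholder values, and sign conventions vary between setups, so treat this as an illustration rather than the authors' code.

```python
import numpy as np

def backpropagate(hologram, dx, wavelength, z):
    """Refocus a hologram to depth z via the angular spectrum method.

    dx: pixel size (m); wavelength: illumination wavelength in the medium (m)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2  # (axial frequency)^2
    kz = 2j * np.pi * np.sqrt(np.maximum(kz2, 0.0))    # evanescent part clipped
    return np.fft.ifft2(np.fft.fft2(hologram) * np.exp(-kz * z))

# Reconstruct a stack of planes from a stand-in 512 x 512 hologram
holo = np.random.rand(512, 512)
depths = np.linspace(10e-6, 200e-6, 20)
stack = [np.abs(backpropagate(holo, 0.5e-6, 532e-9, z)) for z in depths]
```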
Basic Protocol, Issue 84, holography, digital inline holographic microscopy (DIHM), Microbiology, microscopy, 3D imaging, Streptococcus bacteria
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
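For readers curious about the localization step itself, a minimal sketch of single-molecule position estimation by 2D Gaussian fitting follows; the spot size, pixel size, and synthetic data are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def localize(spot, pixel_nm=100.0):
    """Estimate one molecule's position (in nm) by 2D Gaussian fitting."""
    ny, nx = spot.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (nx / 2, ny / 2, 1.5, spot.max() - spot.min(), spot.min())
    popt, _ = curve_fit(gauss2d, (x, y), spot.ravel(), p0=p0)
    return popt[0] * pixel_nm, popt[1] * pixel_nm

# Synthetic 11 x 11 pixel spot standing in for one photoactivated molecule
yy, xx = np.mgrid[0:11, 0:11]
spot = 200.0 * np.exp(-((xx - 5.3) ** 2 + (yy - 4.7) ** 2) / (2 * 1.4 ** 2)) + 10.0
print(localize(spot))  # approximately (530.0, 470.0)
```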
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Authors: Bianca DeBenedictis, J. Bruce Morton.
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
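A minimal sketch of one common functional connectivity measure, pairwise correlation between region-of-interest time series, follows; the data here are synthetic stand-ins, and the multivariate analyses described in the video go beyond this simple example.

```python
import numpy as np

# Region-of-interest time series: volumes x regions (synthetic stand-in data)
rng = np.random.default_rng(0)
roi_ts = rng.standard_normal((200, 8))  # 200 fMRI volumes, 8 network nodes

# Functional connectivity as pairwise Pearson correlations between regions
fc = np.corrcoef(roi_ts, rowvar=False)  # 8 x 8 symmetric matrix

# Fisher z-transform so connectivity can be averaged and compared across groups
fc_z = np.arctanh(np.clip(fc, -0.999999, 0.999999))
np.fill_diagonal(fc_z, 0.0)
```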
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance of individual subjects from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
2D and 3D Chromosome Painting in Malaria Mosquitoes
Authors: Phillip George, Atashi Sharma, Igor V Sharakhov.
Institutions: Virginia Tech.
Fluorescent in situ hybridization (FISH) of whole-arm chromosome probes is a robust technique for mapping genomic regions of interest, detecting chromosomal rearrangements, and studying three-dimensional (3D) organization of chromosomes in the cell nucleus. The advent of laser capture microdissection (LCM) and whole genome amplification (WGA) makes it possible to obtain large quantities of DNA from single cells. The increased sensitivity of WGA kits prompted us to develop chromosome paints and to use them for exploring chromosome organization and evolution in non-model organisms. Here, we present a simple method for isolating and amplifying the euchromatic segments of single polytene chromosome arms from ovarian nurse cells of the African malaria mosquito Anopheles gambiae. This procedure provides an efficient platform for obtaining chromosome paints, while reducing the overall risk of introducing foreign DNA to the sample. The use of WGA allows for several rounds of re-amplification, resulting in high quantities of DNA that can be utilized for multiple experiments, including 2D and 3D FISH. We demonstrated that the developed chromosome paints can be successfully used to establish the correspondence between euchromatic portions of polytene and mitotic chromosome arms in An. gambiae. Overall, the combination of LCM and single-chromosome WGA provides an efficient tool for creating significant amounts of target DNA for future cytogenetic and genomic studies.
Immunology, Issue 83, Microdissection, whole genome amplification, malaria mosquito, polytene chromosome, mitotic chromosomes, fluorescence in situ hybridization, chromosome painting
Live Imaging of Mitosis in the Developing Mouse Embryonic Cortex
Authors: Louis-Jan Pilaz, Debra L. Silver.
Institutions: Duke University Medical Center, Duke University Medical Center.
Although of short duration, mitosis is a complex and dynamic multi-step process fundamental for development of organs including the brain. In the developing cerebral cortex, abnormal mitosis of neural progenitors can cause defects in brain size and function. Hence, there is a critical need for tools to understand the mechanisms of neural progenitor mitosis. Cortical development in rodents is an outstanding model for studying this process. Neural progenitor mitosis is commonly examined in fixed brain sections. This protocol will describe in detail an approach for live imaging of mitosis in ex vivo embryonic brain slices. We will describe the critical steps for this procedure, which include: brain extraction, brain embedding, vibratome sectioning of brain slices, staining and culturing of slices, and time-lapse imaging. We will then demonstrate and describe in detail how to perform post-acquisition analysis of mitosis. We include representative results from this assay using the vital dye Syto11, transgenic mice (histone H2B-EGFP and centrin-EGFP), and in utero electroporation (mCherry-α-tubulin). We will discuss how this procedure can be best optimized and how it can be modified for study of genetic regulation of mitosis. Live imaging of mitosis in brain slices is a flexible approach to assess the impact of age, anatomy, and genetic perturbation in a controlled environment, and to generate a large amount of data with high temporal and spatial resolution. Hence this protocol will complement existing tools for analysis of neural progenitor mitosis.
Neuroscience, Issue 88, mitosis, radial glial cells, developing cortex, neural progenitors, brain slice, live imaging
Reconstruction of 3-Dimensional Histology Volume and its Application to Study Mouse Mammary Glands
Authors: Rushin Shojaii, Stephanie Bacopulos, Wenyi Yang, Tigran Karavardanyan, Demetri Spyropoulos, Afshin Raouf, Anne Martel, Arun Seth.
Institutions: University of Toronto, Sunnybrook Research Institute, University of Toronto, Sunnybrook Research Institute, Medical University of South Carolina, University of Manitoba.
Histology volume reconstruction facilitates the study of 3D shape and volume change of an organ at the level of macrostructures made up of cells. It can also be used to investigate and validate novel techniques and algorithms in volumetric medical imaging and therapies. Creating 3D high-resolution atlases of different organs1,2,3 is another application of histology volume reconstruction. This provides a resource for investigating tissue structures and the spatial relationship between various cellular features. We present an image registration approach for histology volume reconstruction, which uses a set of optical blockface images. The reconstructed histology volume represents a reliable shape of the processed specimen with no propagated post-processing registration error. The Hematoxylin and Eosin (H&E) stained sections of two mouse mammary glands were registered to their corresponding blockface images using boundary points extracted from the edges of the specimen in histology and blockface images. The accuracy of the registration was visually evaluated. The alignment of the macrostructures of the mammary glands was also visually assessed at high resolution. This study delineates the different steps of this image registration pipeline, ranging from excision of the mammary gland through to 3D histology volume reconstruction. While 2D histology images reveal the structural differences between pairs of sections, 3D histology volume provides the ability to visualize the differences in shape and volume of the mammary glands.
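The boundary-point alignment step can be illustrated with a minimal rigid (rotation plus translation) registration sketch using the Kabsch algorithm; this is a generic illustration under the assumption of matched point pairs, not the pipeline's actual registration code.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm); src and dst are N x 2 matched boundary points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, dst_c - R @ src_c

# Toy check: recover a known 10 degree rotation plus a shift
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
pts = np.random.rand(50, 2)
R, t = rigid_register(pts, pts @ R_true.T + np.array([3.0, -1.0]))
```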
Bioengineering, Issue 89, Histology Volume Reconstruction, Transgenic Mouse Model, Image Registration, Digital Histology, Image Processing, Mouse Mammary Gland
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
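As a toy illustration of a simple semi-automated segmentation, a thresholding-plus-connected-components sketch follows; the threshold and size cutoff are placeholder parameters that would be tuned to each data set.

```python
import numpy as np
from scipy import ndimage

# 3D EM volume (synthetic stand-in; real stacks would be loaded from disk)
volume = np.random.rand(64, 64, 64)

# Global threshold, then connected-component labeling; discard small objects
mask = volume > 0.95
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
keep_ids = np.flatnonzero(sizes >= 20) + 1      # component labels start at 1
segmented = np.isin(labels, keep_ids)           # binary mask of retained features
```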
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS), in order to identify differences in FA along WM structures and thereby define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
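As a small worked example, fractional anisotropy can be computed directly from the diffusion tensor's eigenvalues; the tensor below is an invented, strongly anisotropic example.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA from the eigenvalues of a 3 x 3 diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.linalg.eigvalsh(tensor)
    dev = lam - lam.mean()
    return np.sqrt(1.5 * np.sum(dev ** 2) / np.sum(lam ** 2))

# Strongly anisotropic tensor (diffusion mainly along one axis) -> high FA
D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])  # eigenvalues in mm^2/s
print(fractional_anisotropy(D))        # ~0.84
```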
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Introduction to Solid Supported Membrane Based Electrophysiology
Authors: Andre Bazzone, Wagner Steuer Costa, Markus Braner, Octavian Călinescu, Lina Hatahet, Klaus Fendler.
Institutions: Max Planck Institute of Biophysics, Goethe University Frankfurt.
The electrophysiological method we present is based on a solid supported membrane (SSM) composed of an octadecanethiol layer chemisorbed on a gold-coated sensor chip and a phosphatidylcholine monolayer on top. This assembly is mounted into a cuvette system containing the reference electrode, a chlorinated silver wire. After adsorption of membrane fragments or proteoliposomes containing the membrane protein of interest, a fast solution exchange is used to induce the transport activity of the membrane protein. In the single solution exchange protocol two solutions, one non-activating and one activating solution, are needed. The flow is controlled by pressurized air and a valve and tubing system within a Faraday cage. The kinetics of the electrogenic transport activity is obtained via capacitive coupling between the SSM and the proteoliposomes or membrane fragments. The method, therefore, yields only transient currents. The peak current represents the stationary transport activity. The time-dependent transporter currents can be reconstructed by circuit analysis. This method is especially suited for prokaryotic transporters or eukaryotic transporters from intracellular membranes, which cannot be investigated by patch clamp or voltage clamp methods.
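A minimal sketch of the circuit-analysis reconstruction idea follows, under the simplifying assumption that the capacitive coupling can be summarized by a single system time constant tau; real analyses fit the full equivalent circuit of the SSM setup.

```python
import numpy as np

def reconstruct_pump_current(i_meas, dt, tau):
    """Correct a capacitively coupled recording assuming one time constant
    tau (s): I_pump(t) ~ I_meas(t) + (1/tau) * cumulative integral of I_meas."""
    return i_meas + np.cumsum(i_meas) * dt / tau

# A decaying transient (what the SSM records) maps back to a step-like
# stationary pump current, consistent with the peak current representing
# the stationary transport activity
t = np.arange(0.0, 0.5, 1e-4)
i_meas = 1e-9 * np.exp(-t / 0.05)                      # measured transient (A)
i_pump = reconstruct_pump_current(i_meas, 1e-4, 0.05)  # ~1 nA plateau
```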
Biochemistry, Issue 75, Biophysics, Molecular Biology, Cellular Biology, Physiology, Proteins, Membrane Lipids, Membrane Transport Proteins, Kinetics, Electrophysiology, solid supported membrane, SSM, membrane transporter, lactose permease, lacY, capacitive coupling, solution exchange, model membrane, membrane protein, transporter, kinetics, transport mechanism
Chromosome Replication Timing Combined with Fluorescent In situ Hybridization
Authors: Leslie Smith, Mathew Thayer.
Institutions: Oregon Health & Science University.
Mammalian DNA replication initiates at multiple sites along chromosomes at different times during S phase, following a temporal replication program. The specification of replication timing is thought to be a dynamic process regulated by tissue-specific and developmental cues that are responsive to epigenetic modifications. However, the mechanisms regulating where and when DNA replication initiates along chromosomes remain poorly understood. Homologous chromosomes usually replicate synchronously; however, there are notable exceptions to this rule. For example, in female mammalian cells one of the two X chromosomes becomes late replicating through a process known as X inactivation1. Along with this delay in replication timing, estimated to be 2-3 hr, the majority of genes become transcriptionally silenced on one X chromosome. Furthermore, a discrete cis-acting locus, known as the X inactivation center, regulates this X inactivation process, including the induction of delayed replication timing on the entire inactive X chromosome. In addition, certain chromosome rearrangements found in cancer cells and in cells exposed to ionizing radiation display a significant delay in replication timing of >3 hours that affects the entire chromosome2,3. Recent work from our lab indicates that disruption of discrete cis-acting autosomal loci results in an extremely late replicating phenotype that affects the entire chromosome4. Additional 'chromosome engineering' studies indicate that certain chromosome rearrangements affecting many different chromosomes result in this abnormal replication-timing phenotype, suggesting that all mammalian chromosomes contain discrete cis-acting loci that control proper replication timing of individual chromosomes5. Here, we present a method for the quantitative analysis of chromosome replication timing combined with fluorescent in situ hybridization. This method allows for a direct comparison of replication timing between homologous chromosomes within the same cell, and was adapted from6. In addition, this method allows for the unambiguous identification of chromosomal rearrangements that correlate with changes in replication timing that affect the entire chromosome. This method has advantages over recently developed high-throughput microarray or sequencing protocols that cannot distinguish between homologous alleles present on rearranged and un-rearranged chromosomes. In addition, because the method described here evaluates single cells, it can detect changes in chromosome replication timing on chromosomal rearrangements that are present in only a fraction of the cells in a population.
Genetics, Issue 70, Biochemistry, Molecular Biology, Cellular Biology, Chromosome replication timing, fluorescent in situ hybridization, FISH, BrdU, cytogenetics, chromosome rearrangements, fluorescence microscopy
Genomic MRI - a Public Resource for Studying Sequence Patterns within Genomic DNA
Authors: Ashwin Prakash, Jason Bechtel, Alexei Fedorov.
Institutions: University of Toledo Health Science Campus.
Non-coding genomic regions in complex eukaryotes, including intergenic areas, introns, and untranslated segments of exons, are profoundly non-random in their nucleotide composition and consist of a complex mosaic of sequence patterns. These patterns include so-called Mid-Range Inhomogeneity (MRI) regions -- sequences 30-10000 nucleotides in length that are enriched by a particular base or combination of bases (e.g. (G+T)-rich, purine-rich, etc.). MRI regions are associated with unusual (non-B-form) DNA structures that are often involved in regulation of gene expression, recombination, and other genetic processes (Fedorova & Fedorov 2010). The existence of a strong fixation bias within MRI regions against mutations that tend to reduce their sequence inhomogeneity additionally supports the functionality and importance of these genomic sequences (Prakash et al. 2009). Here we demonstrate a freely available Internet resource -- the Genomic MRI program package -- designed for computational analysis of genomic sequences in order to find and characterize various MRI patterns within them (Bechtel et al. 2008). This package also allows generation of randomized sequences with various properties and level of correspondence to the natural input DNA sequences. The main goal of this resource is to facilitate examination of vast regions of non-coding DNA that are still scarcely investigated and await thorough exploration and recognition.
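A minimal sliding-window sketch of the kind of composition scan used to flag MRI candidates follows; the window length and enrichment threshold are illustrative placeholders, not the Genomic MRI program's defaults.

```python
def mri_windows(seq, bases="GT", win=100, threshold=0.70):
    """Yield (start, fraction) for fixed-size windows enriched in the given
    base set, e.g. candidate (G+T)-rich Mid-Range Inhomogeneity regions."""
    seq = seq.upper()
    for start in range(len(seq) - win + 1):
        window = seq[start:start + win]
        frac = sum(window.count(b) for b in bases) / win
        if frac >= threshold:
            yield start, frac

# Toy sequence with an artificial (G+T)-rich run at the start
dna = "GT" * 120 + "ACGTACGT" * 50
print(next(mri_windows(dna)))  # (0, 1.0)
```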
Genetics, Issue 51, bioinformatics, computational biology, genomics, non-randomness, signals, gene regulation, DNA conformation
Quantitation and Analysis of the Formation of HO-Endonuclease Stimulated Chromosomal Translocations by Single-Strand Annealing in Saccharomyces cerevisiae
Authors: Lauren Liddell, Glenn Manthey, Nicholas Pannunzio, Adam Bailis.
Institutions: Irell & Manella Graduate School of Biological Sciences, City of Hope Comprehensive Cancer Center and Beckman Research Institute, University of Southern California, Norris Comprehensive Cancer Center.
Genetic variation is frequently mediated by genomic rearrangements that arise through interaction between dispersed repetitive elements present in every eukaryotic genome. This process is an important mechanism for generating diversity between and within organisms1-3. The human genome consists of approximately 40% repetitive sequence of retrotransposon origin, including a variety of LINEs and SINEs4. Exchange events between these repetitive elements can lead to genome rearrangements, including translocations, that can disrupt gene dosage and expression that can result in autoimmune and cardiovascular diseases5, as well as cancer in humans6-9. Exchange between repetitive elements occurs in a variety of ways. Exchange between sequences that share perfect (or near-perfect) homology occurs by a process called homologous recombination (HR). By contrast, non-homologous end joining (NHEJ) uses little-or-no sequence homology for exchange10,11. The primary purpose of HR, in mitotic cells, is to repair double-strand breaks (DSBs) generated endogenously by aberrant DNA replication and oxidative lesions, or by exposure to ionizing radiation (IR), and other exogenous DNA damaging agents. In the assay described here, DSBs are simultaneously created bordering recombination substrates at two different chromosomal loci in diploid cells by a galactose-inducible HO-endonuclease (Figure 1). The repair of the broken chromosomes generates chromosomal translocations by single strand annealing (SSA), a process where homologous sequences adjacent to the chromosome ends are covalently joined subsequent to annealing. One of the substrates, his3-Δ3', contains a 3' truncated HIS3 allele and is located on one copy of chromosome XV at the native HIS3 locus. The second substrate, his3-Δ5', is located at the LEU2 locus on one copy of chromosome III, and contains a 5' truncated HIS3 allele. Both substrates are flanked by a HO endonuclease recognition site that can be targeted for incision by HO-endonuclease. HO endonuclease recognition sites native to the MAT locus, on both copies of chromosome III, have been deleted in all strains. This prevents interaction between the recombination substrates and other broken chromosome ends from interfering in the assay. The KAN-MX-marked galactose-inducible HO endonuclease expression cassette is inserted at the TRP1 locus on chromosome IV. The substrates share 311 bp or 60 bp of the HIS3 coding sequence that can be used by the HR machinery for repair by SSA. Cells that use these substrates to repair broken chromosomes by HR form an intact HIS3 allele and a tXV::III chromosomal translocation that can be selected for by the ability to grow on medium lacking histidine (Figure 2A). Translocation frequency by HR is calculated by dividing the number of histidine prototrophic colonies that arise on selective medium by the total number of viable cells that arise after plating appropriate dilutions onto non-selective medium (Figure 2B). A variety of DNA repair mutants have been used to study the genetic control of translocation formation by SSA using this system12-14.
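The frequency calculation described above reduces to simple dilution-corrected arithmetic, sketched here with hypothetical colony counts.

```python
def translocation_frequency(his_colonies, his_dilution,
                            viable_colonies, viable_dilution):
    """Frequency = His+ prototrophs on selective medium divided by total
    viable cells, each count corrected for its plating dilution."""
    return (his_colonies * his_dilution) / (viable_colonies * viable_dilution)

# Hypothetical counts: 12 His+ colonies at 10x dilution on selective medium
# vs. 150 colonies at 10^5 dilution on non-selective medium
print(translocation_frequency(12, 1e1, 150, 1e5))  # 8e-06
```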
Genetics, Issue 55, translocation formation, HO-endonuclease, Genomic Southern blot, Chromosome blot, Pulsed-field gel electrophoresis, Homologous recombination, DNA double-strand breaks, Single-strand annealing
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg, University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11 accurately reannotated10. Following an annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum energy based fold12 (direct fold) results in a correct, four helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence, whose secondary structure was not able to fold correctly in a direct fold. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermal-stable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
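As a small companion to the primer design discussion, here is a sketch of the classic Wallace rule-of-thumb Tm estimate and a GC-content check; the example primer is hypothetical, and serious designs should confirm with nearest-neighbor thermodynamics.

```python
def wallace_tm(primer):
    """Rule-of-thumb melting temperature, Tm = 2(A+T) + 4(G+C) in deg C;
    adequate only for short primers."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def gc_fraction(primer):
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

primer = "AGCGGATAACAATTTCACACAGGA"  # hypothetical example primer
print(wallace_tm(primer), round(gc_fraction(primer), 2))
```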
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
High-throughput Physical Mapping of Chromosomes using Automated in situ Hybridization
Authors: Phillip George, Maria V. Sharakhova, Igor V. Sharakhov.
Institutions: Virginia Tech.
Projects to obtain whole-genome sequences for 10,000 vertebrate species1 and for 5,000 insect and related arthropod species2 are expected to take place over the next 5 years. For example, the sequencing of the genomes for 15 malaria mosquito species is currently being done using an Illumina platform3,4. This Anopheles species cluster includes both vectors and non-vectors of malaria. When the genome assemblies become available, researchers will have the unique opportunity to perform comparative analysis for inferring evolutionary changes relevant to vector ability. However, it has proven difficult to use next-generation sequencing reads to generate high-quality de novo genome assemblies5. Moreover, the existing genome assemblies for Anopheles gambiae, although obtained using the Sanger method, are gapped or fragmented4,6. Success of comparative genomic analyses will be limited if researchers deal with numerous sequencing contigs, rather than with chromosome-based genome assemblies. Fragmented, unmapped sequences create problems for genomic analyses because: (i) unidentified gaps cause incorrect or incomplete annotation of genomic sequences; (ii) unmapped sequences lead to confusion between paralogous genes and genes from different haplotypes; and (iii) the lack of chromosome assignment and orientation of the sequencing contigs does not allow for reconstructing rearrangement phylogeny and studying chromosome evolution. Developing high-resolution physical maps for species with newly sequenced genomes is a timely and cost-effective investment that will facilitate genome annotation, evolutionary analysis, and re-sequencing of individual genomes from natural populations7,8. Here, we present innovative approaches to chromosome preparation, fluorescent in situ hybridization (FISH), and imaging that facilitate rapid development of physical maps. Using An. gambiae as an example, we demonstrate that the development of physical chromosome maps can potentially improve genome assemblies and, thus, the quality of genomic analyses. First, we use a high-pressure method to prepare polytene chromosome spreads. This method, originally developed for Drosophila9, allows the user to visualize more details on chromosomes than the regular squashing technique10. Second, a fully automated, front-end system for FISH is used for high-throughput physical genome mapping. The automated slide staining system runs multiple assays simultaneously and dramatically reduces hands-on time11. Third, an automatic fluorescent imaging system, which includes a motorized slide stage, automatically scans and photographs labeled chromosomes after FISH12. This system is especially useful for identifying and visualizing multiple chromosomal plates on the same slide. In addition, the scanning process captures a more uniform FISH result. Overall, the automated high-throughput physical mapping protocol is more efficient than a standard manual protocol.
Genetics, Issue 64, Entomology, Molecular Biology, Genomics, automation, chromosome, genome, hybridization, labeling, mapping, mosquito
Collection Protocol for Human Pancreas
Authors: Martha L. Campbell-Thompson, Emily L. Montgomery, Robin M. Foss, Kerwin M. Kolheffer, Gerald Phipps, Lynda Schneider, Mark A. Atkinson.
Institutions: University of Florida .
This dissection and sampling procedure was developed for the Network for Pancreatic Organ Donors with Diabetes (nPOD) program to standardize preparation of pancreas recovered from cadaveric organ donors. The pancreas is divided into 3 main regions (head, body, tail) followed by serial transverse sections throughout the medial to lateral axis. Alternating sections are used for fixed paraffin and fresh frozen blocks and remnant samples are minced for snap frozen sample preparations, either with or without RNAse inhibitors, for DNA, RNA, or protein isolation. The overall goal of the pancreas dissection procedure is to sample the entire pancreas while maintaining anatomical orientation. Endocrine cell heterogeneity in terms of islet composition, size, and numbers is reported for human islets compared to rodent islets 1. The majority of human islets from the pancreas head, body and tail regions are composed of insulin-containing β-cells followed by lower proportions of glucagon-containing α-cells and somatostatin-containing δ-cells. Pancreatic polypeptide-containing PP cells and ghrelin-containing epsilon cells are also present but in small numbers. In contrast, the uncinate region contains islets that are primarily composed of pancreatic polypeptide-containing PP cells 2. These regional islet variations arise from developmental differences. The pancreas develops from the ventral and dorsal pancreatic buds in the foregut and after rotation of the stomach and duodenum, the ventral lobe moves and fuses with the dorsal 3. The ventral lobe forms the posterior portion of the head including the uncinate process while the dorsal lobe gives rise to the rest of the organ. Regional pancreatic variation is also reported with the tail region having higher islet density compared to other regions and the dorsal lobe-derived components undergoing selective atrophy in type 1 diabetes4,5. Additional organs and tissues are often recovered from the organ donors and include pancreatic lymph nodes, spleen and non-pancreatic lymph nodes. These samples are recovered with similar formats as for the pancreas with the addition of isolation of cryopreserved cells. When the proximal duodenum is included with the pancreas, duodenal mucosa may be collected for paraffin and frozen blocks and minced snap frozen preparations.
Medicine, Issue 63, Physiology, pancreas, organ donor, endocrine cells, insulin, beta-cells, islet, type 1 diabetes, type 2 diabetes
Mapping Bacterial Functional Networks and Pathways in Escherichia Coli using Synthetic Genetic Arrays
Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili.
Institutions: University of Toronto, University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9, 10. Here, we present the key steps required to perform the quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (High frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9, 12, 13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and the ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics. After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2, as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
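The scoring logic described above reduces to a simple multiplicative-model calculation, sketched below with hypothetical fitness values.

```python
def gi_score(w_ab, w_a, w_b):
    """Multiplicative-model genetic interaction score:
    epsilon = W_ab - W_a * W_b, with fitness normalized to wild type = 1.
    epsilon < 0: aggravating (synthetic sick/lethal); > 0: alleviating."""
    return w_ab - w_a * w_b

# Colony-size-derived fitness values (hypothetical numbers)
print(gi_score(w_ab=0.20, w_a=0.80, w_b=0.75))  # -0.40 -> aggravating
```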
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression
Fluorescent in situ Hybridization on Mitotic Chromosomes of Mosquitoes
Authors: Vladimir A. Timoshevskiy, Atashi Sharma, Igor V. Sharakhov, Maria V. Sharakhova.
Institutions: Virginia Tech.
Fluorescent in situ hybridization (FISH) is a technique routinely used by many laboratories to determine the chromosomal position of DNA and RNA probes. One important application of this method is the development of high-quality physical maps useful for improving the genome assemblies for various organisms. The natural banding pattern of polytene and mitotic chromosomes provides guidance for the precise ordering and orientation of the genomic supercontigs. Among the three mosquito genera, namely Anopheles, Aedes, and Culex, a well-established chromosome-based mapping technique has been developed only for Anopheles, whose members possess readable polytene chromosomes 1. As a result of genome mapping efforts, 88% of the An. gambiae genome has been placed to precise chromosome positions 2,3. Two other mosquito genera, Aedes and Culex, have poorly polytenized chromosomes because of significant overrepresentation of transposable elements in their genomes 4, 5, 6. Only 31% and 9% of the genomic supercontigs have been assigned, without order or orientation, to chromosomes of Ae. aegypti 7 and Cx. quinquefasciatus 8, respectively. Mitotic chromosome preparation for these two species had previously been limited to brain ganglia and cell lines. However, chromosome slides prepared from the brain ganglia of mosquitoes usually contain low numbers of metaphase plates 9. Also, although a FISH technique has been developed for mitotic chromosomes from a cell line of Ae. aegypti 10, the accumulation of multiple chromosomal rearrangements in cell line chromosomes 11 makes them useless for genome mapping. Here we describe a simple, robust technique for obtaining high-quality mitotic chromosome preparations from imaginal discs (IDs) of 4th instar larvae which can be used for all three genera of mosquitoes. A standard FISH protocol 12 is optimized for using BAC clones of genomic DNA as a probe on mitotic chromosomes of Ae. aegypti and Cx. quinquefasciatus, and for utilizing an intergenic spacer (IGS) region of ribosomal DNA (rDNA) as a probe on An. gambiae chromosomes. In addition to physical mapping, the developed technique can be applied to population cytogenetics and chromosome taxonomy/systematics of mosquitoes and other insect groups.
Immunology, Issue 67, Genetics, Molecular Biology, Entomology, Infectious Disease, imaginal discs, mitotic chromosomes, genome mapping, FISH, fluorescent in situ hybridization, mosquitoes, Anopheles, Aedes, Culex
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm had a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrates both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
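To make the change-point idea concrete, here is a generic single change-point sketch based on least squares; it is far simpler than BCP's Bayesian infinite-state model and is meant only to illustrate what a change point in read density means.

```python
import numpy as np

def best_change_point(x):
    """Single least-squares change point: the split minimizing the summed
    within-segment squared error of a piecewise-constant fit."""
    n = len(x)
    csum, csq = np.cumsum(x), np.cumsum(x ** 2)
    best_k, best_cost = None, np.inf
    for k in range(1, n):
        sse_left = csq[k - 1] - csum[k - 1] ** 2 / k
        sse_right = (csq[-1] - csq[k - 1]) - (csum[-1] - csum[k - 1]) ** 2 / (n - k)
        if sse_left + sse_right < best_cost:
            best_k, best_cost = k, sse_left + sse_right
    return best_k

# Toy read-density profile: flat background followed by an enriched island
density = np.r_[np.random.poisson(2, 300), np.random.poisson(12, 100)].astype(float)
print(best_change_point(density))  # close to 300
```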
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Play Button
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, such as structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, because EEG recordings are easy to apply and low in cost, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
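As a concrete illustration of the source-analysis step, the sketch below implements the standard L2 minimum-norm inverse named in the keywords. It is a generic textbook formulation, not the London Baby Lab pipeline: in practice the leadfield matrix is derived from the individual or age-specific MRI-based head model, whereas here the leadfield, the SNR-based regularization heuristic, and the toy data are all assumptions for illustration.

```python
import numpy as np

def minimum_norm_estimate(leadfield, eeg, snr=3.0):
    """L2 minimum-norm inverse: s_hat = L^T (L L^T + lambda^2 I)^-1 y.

    leadfield : (n_channels, n_sources) gain matrix from the head model
    eeg       : (n_channels, n_times) sensor data
    snr       : assumed signal-to-noise ratio, used to set the regularization
    """
    n_channels = leadfield.shape[0]
    # Common heuristic: scale regularization to the leadfield power and SNR.
    lam2 = np.trace(leadfield @ leadfield.T) / (n_channels * snr**2)
    gram = leadfield @ leadfield.T + lam2 * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, eeg)

# Toy example: 64 channels, 500 candidate cortical sources, one active source.
rng = np.random.default_rng(1)
L = rng.standard_normal((64, 500))
s_true = np.zeros((500, 1))
s_true[123] = 1.0
y = L @ s_true + 0.05 * rng.standard_normal((64, 1))
s_hat = minimum_norm_estimate(L, y)
print(int(np.abs(s_hat[:, 0]).argmax()))  # should recover a source near index 123
```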
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Play Button
X-ray Dose Reduction through Adaptive Exposure in Fluoroscopic Imaging
Authors: Steve Burion, Tobias Funk.
Institutions: Triple Ring Technologies.
X-ray fluoroscopy is widely used for image guidance during cardiac intervention. However, the radiation dose in these procedures can be high, and this is a significant concern, particularly in pediatric applications. Pediatric procedures are in general much more complex than those performed on adults and thus are on average four to eight times longer1. Furthermore, children can undergo up to 10 fluoroscopic procedures by the age of 10, and have been shown to have a three-fold higher risk of developing fatal cancer over their lifetime than the general population2,3. We have shown that radiation dose can be significantly reduced in adult cardiac procedures by using our scanning beam digital x-ray (SBDX) system4, a fluoroscopic imaging system that employs an inverse imaging geometry5,6 (Figure 1, Movie 1 and Figure 2). Instead of the single focal spot and extended detector used in conventional systems, our approach utilizes an extended X-ray source with multiple focal spots focused on a small detector. Our X-ray source consists of a scanning electron beam that sequentially illuminates up to 9,000 focal spot positions. Each focal spot projects a small portion of the imaging volume onto the detector. In contrast to a conventional system, where the final image is directly projected onto the detector, the SBDX uses a dedicated algorithm to reconstruct the final image from the 9,000 detector images. For pediatric applications, dose savings with the SBDX system are expected to be smaller than in adult procedures. However, the SBDX system allows for additional dose savings through an electronic adaptive exposure technique. Key to this method is the multi-beam scanning technique of the SBDX system: rather than exposing every part of the image with the same radiation dose, we can dynamically vary the exposure depending on the opacity of the region exposed. Therefore, we can significantly reduce exposure in radiolucent areas while maintaining exposure in more opaque regions. In our current implementation, the adaptive exposure requires user interaction (Figure 3). In the future, however, the adaptive exposure will be real-time and fully automatic. We have performed experiments with an anthropomorphic phantom and compared the radiation dose measured with and without adaptive exposure using a dose area product (DAP) meter. In the experiment presented here, we find a dose reduction of 30%.
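To make the adaptive exposure idea concrete, here is a minimal numerical sketch. It is not the SBDX control code: the per-focal-spot dose rule, the threshold values, and the toy phantom are all invented to illustrate the principle that radiolucent regions receive a reduced dose while opaque regions keep the full dose.

```python
import numpy as np

def adaptive_exposure_map(prev_frame, full_dose, min_frac=0.2, target=0.6):
    """Scale per-focal-spot dose by local opacity estimated from the previous frame.

    prev_frame : (rows, cols) transmitted intensity in [0, 1], one value per
                 focal-spot position (1.0 = fully radiolucent, 0.0 = opaque)
    Returns a dose map: radiolucent regions get a fraction of the full dose,
    opaque regions keep the full dose.
    """
    # Higher transmitted intensity means a more radiolucent region, so less dose.
    opacity = 1.0 - prev_frame
    frac = np.clip(opacity / target, min_frac, 1.0)
    return frac * full_dose

# Toy scan: a 95x95 grid of focal spots (~9,000 positions) over a phantom
# that is opaque in the center and radiolucent at the edges.
yy, xx = np.mgrid[0:95, 0:95]
phantom = np.where((yy - 47)**2 + (xx - 47)**2 < 20**2, 0.1, 0.9)  # transmitted fraction
dose = adaptive_exposure_map(phantom, full_dose=1.0)
saving = 1.0 - dose.mean()  # relative to uniform full-dose exposure
print(f"dose reduction: {saving:.0%}")  # sizeable saving from the radiolucent areas
```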
Bioengineering, Issue 55, Scanning digital X-ray, fluoroscopy, pediatrics, interventional cardiology, adaptive exposure, dose savings

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In both situations, our algorithm still attempts to display the most relevant videos available, which can sometimes result in matched videos with only a slight relation to the abstract.
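JoVE does not publish the details of its matching algorithm, but the general approach of ranking documents by textual similarity can be sketched with standard tools. The Python example below uses TF-IDF vectors and cosine similarity from scikit-learn; the two small corpora are invented stand-ins, and nothing here should be read as the actual JoVE Visualize implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for PubMed abstracts and JoVE video descriptions.
abstracts = [
    "Bayesian change-point analysis of ChIP-seq read density for histone modifications.",
    "Cortical source localization of high-density EEG recordings in young children.",
]
videos = [
    "Chromatin immunoprecipitation (ChIP-seq) of histone modifications in stem cells.",
    "Recording event-related potentials with high-density EEG in infants.",
    "Patch-clamp recording from neurons in acute brain slices.",
]

# Fit one vocabulary over both corpora, then rank videos per abstract by cosine similarity.
vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(abstracts + videos)
sims = cosine_similarity(matrix[: len(abstracts)], matrix[len(abstracts):])
for abstract, row in zip(abstracts, sims):
    best_score, best_video = max(zip(row, videos))
    print(f"{abstract[:50]}... -> {best_video} (score {best_score:.2f})")
```

When an abstract shares little vocabulary with any video description, every similarity score in its row is low, which is exactly the situation described above: the top-ranked video may have only a slight relation to the abstract.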