Pubmed Article
Determining frequent patterns of copy number alterations in cancer.
PLoS ONE
PUBLISHED: 04-27-2010
Cancer progression is often driven by an accumulation of genetic changes but also accompanied by increasing genomic instability. These processes lead to a complicated landscape of copy number alterations (CNAs) within individual tumors and great diversity across tumor samples. High resolution array-based comparative genomic hybridization (aCGH) is being used to profile CNAs of ever larger tumor collections, and better computational methods for processing these data sets and identifying potential driver CNAs are needed. Typical studies of aCGH data sets take a pipeline approach, starting with segmentation of profiles, calls of gains and losses, and finally determination of frequent CNAs across samples. A drawback of pipelines is that choices at each step may produce different results, and biases are propagated forward. We present a mathematically robust new method that exploits probe-level correlations in aCGH data to discover subsets of samples that display common CNAs. Our algorithm is related to recent work on maximum-margin clustering. It does not require pre-segmentation of the data and also provides grouping of recurrent CNAs into clusters. We tested our approach on a large cohort of glioblastoma aCGH samples from The Cancer Genome Atlas and recovered almost all CNAs reported in the initial study. We also found additional significant CNAs missed by the original analysis but supported by earlier studies, and we identified significant correlations between CNAs.
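The last pipeline step the abstract criticizes, calling frequent gains and losses across samples, can be sketched as follows. The log2-ratio thresholds and the probe-by-sample layout are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def cna_frequencies(log_ratios, gain_thr=0.3, loss_thr=-0.3):
    """Fraction of samples showing a gain or loss at each probe.

    log_ratios: 2-D array, rows = samples, columns = probes.
    Thresholds are illustrative, not values from the paper.
    """
    gains = (log_ratios > gain_thr).mean(axis=0)   # per-probe gain frequency
    losses = (log_ratios < loss_thr).mean(axis=0)  # per-probe loss frequency
    return gains, losses

# Toy cohort: 3 samples x 3 probes of aCGH log2 ratios.
lr = np.array([[0.5, 0.0, -0.6],
               [0.4, 0.1, -0.5],
               [0.0, 0.0, -0.7]])
g, l = cna_frequencies(lr)
# g is about [0.67, 0.0, 0.0]; l is [0.0, 0.0, 1.0]
```

A pipeline approach would binarize these frequencies per sample before aggregating, which is exactly where the paper argues that step-wise choices propagate bias.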
Authors: Helen H Won, Sasinya N Scott, A. Rose Brannon, Ronak H Shah, Michael F Berger.
Published: 10-18-2013
ABSTRACT
Efforts to detect and investigate key oncogenic mutations have proven valuable in facilitating appropriate treatment for cancer patients. The establishment of high-throughput, massively parallel "next-generation" sequencing has aided the discovery of many such mutations. To enhance the clinical and translational utility of this technology, platforms must be high-throughput, cost-effective, and compatible with formalin-fixed, paraffin-embedded (FFPE) tissue samples that may yield small amounts of degraded or damaged DNA. Here, we describe the preparation of barcoded and multiplexed DNA libraries followed by hybridization-based capture of targeted exons for the detection of cancer-associated mutations in fresh frozen and FFPE tumors by massively parallel sequencing. This method enables the identification of sequence mutations, copy number alterations, and select structural rearrangements involving all targeted genes. Targeted exon sequencing offers the benefits of high throughput, low cost, and deep sequence coverage, thus conferring high sensitivity for detecting low-frequency mutations.
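Deep coverage confers sensitivity for low-frequency mutations because the observed variant reads can be tested against the background sequencing error rate. A minimal sketch of such a test, assuming a hypothetical per-base error rate (the actual pipeline's statistical model is not described in the abstract):

```python
from math import comb

def variant_pvalue(alt_reads, depth, error_rate=0.01):
    """P(observing >= alt_reads non-reference reads | errors only),
    i.e. a one-sided binomial tail. error_rate is an illustrative
    per-base error rate, not a value from the paper."""
    return sum(comb(depth, k) * error_rate**k * (1 - error_rate)**(depth - k)
               for k in range(alt_reads, depth + 1))

# 10 variant reads at 500x depth against a 1% error rate:
p = variant_pvalue(10, 500)
# a small p suggests more variant reads than errors alone would produce
```

At shallow coverage the same 2% variant fraction (e.g. 1 read of 50) would be indistinguishable from noise, which is why depth matters for FFPE-derived DNA.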
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment.
Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrates both its versatility and its ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a valuable tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
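As a toy illustration of the change-point idea (not the authors' Bayesian model), a single change point in a read-density profile can be located by minimizing the within-segment squared error on each side of a candidate split:

```python
import numpy as np

def best_changepoint(density):
    """Return the split index minimizing total within-segment squared
    error. A deliberately simple stand-in for the infinite-state
    Bayesian change-point model described in the paper."""
    density = np.asarray(density, dtype=float)
    n = len(density)
    best_i, best_cost = None, np.inf
    for i in range(1, n):
        left, right = density[:i], density[i:]
        cost = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Background read counts followed by an enriched region:
reads = [2, 3, 2, 3, 15, 14, 16, 15]
best_changepoint(reads)  # -> 4 (enrichment starts at index 4)
```

The BCP model generalizes this to many change points at once and infers their number from the data, which is what removes the punctate-versus-diffuse dichotomy.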
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Rapid Analysis and Exploration of Fluorescence Microscopy Images
Authors: Benjamin Pavie, Satwik Rajaram, Austin Ouyang, Jason M. Altschuler, Robert J. Steininger III, Lani F. Wu, Steven J. Altschuler.
Institutions: UT Southwestern Medical Center, UT Southwestern Medical Center, Princeton University.
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine-tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify responses to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control, but may also be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
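A segmentation-free image profile in the spirit of PhenoRipper's block-based analysis might look like the following sketch; the fixed block size and the per-block mean-intensity summary are illustrative simplifications, not the software's actual feature set:

```python
import numpy as np

def block_profile(image, block=20):
    """Summarize an image by per-block mean intensity, without any cell
    segmentation. Loosely inspired by block-based profiling; block size
    and features are assumptions for illustration."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # trim to full blocks
    blocks = image[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()        # one value per block

# 40x40 image with one bright 20x20 quadrant:
img = np.zeros((40, 40))
img[:20, :20] = 1.0
profile = block_profile(img)  # -> [1.0, 0.0, 0.0, 0.0]
```

Profiles like this can be compared across wells directly, which is what enables turnaround in minutes rather than the hours a tuned segmentation pipeline requires.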
Basic Protocol, Issue 85, PhenoRipper, fluorescence microscopy, image analysis, High-content analysis, high-throughput screening, Open-source, Phenotype
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Authors: Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu.
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application – SpheroidSizer, which measures the major and minor axial length of the imaged 3D tumor spheroids automatically and accurately; calculates the volume of each individual 3D tumor spheroid; then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process.
Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
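A plausible way to compute spheroid volume from the measured major and minor axial lengths is the prolate-ellipsoid approximation; whether SpheroidSizer uses exactly this formula is an assumption made here for illustration:

```python
from math import pi

def spheroid_volume(major_um, minor_um):
    """Prolate-ellipsoid volume from major/minor axial lengths, in um^3:
    V = (pi/6) * L * W^2. A common approximation; the exact formula
    used by SpheroidSizer is an assumption here."""
    return pi / 6 * major_um * minor_um**2

spheroid_volume(400, 300)  # about 1.9e7 um^3
```

For a sphere (major = minor = diameter d) this reduces to the familiar (pi/6)d^3, a quick sanity check on the formula.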
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Quantitation and Analysis of the Formation of HO-Endonuclease Stimulated Chromosomal Translocations by Single-Strand Annealing in Saccharomyces cerevisiae
Authors: Lauren Liddell, Glenn Manthey, Nicholas Pannunzio, Adam Bailis.
Institutions: Irell & Manella Graduate School of Biological Sciences, City of Hope Comprehensive Cancer Center and Beckman Research Institute, University of Southern California, Norris Comprehensive Cancer Center.
Genetic variation is frequently mediated by genomic rearrangements that arise through interaction between dispersed repetitive elements present in every eukaryotic genome. This process is an important mechanism for generating diversity between and within organisms1-3. The human genome consists of approximately 40% repetitive sequence of retrotransposon origin, including a variety of LINEs and SINEs4. Exchange events between these repetitive elements can lead to genome rearrangements, including translocations, that can disrupt gene dosage and expression, resulting in autoimmune and cardiovascular diseases5, as well as cancer in humans6-9. Exchange between repetitive elements occurs in a variety of ways. Exchange between sequences that share perfect (or near-perfect) homology occurs by a process called homologous recombination (HR). By contrast, non-homologous end joining (NHEJ) uses little or no sequence homology for exchange10,11. The primary purpose of HR, in mitotic cells, is to repair double-strand breaks (DSBs) generated endogenously by aberrant DNA replication and oxidative lesions, or by exposure to ionizing radiation (IR) and other exogenous DNA damaging agents. In the assay described here, DSBs are simultaneously created bordering recombination substrates at two different chromosomal loci in diploid cells by a galactose-inducible HO-endonuclease (Figure 1). The repair of the broken chromosomes generates chromosomal translocations by single strand annealing (SSA), a process where homologous sequences adjacent to the chromosome ends are covalently joined subsequent to annealing. One of the substrates, his3-Δ3', contains a 3' truncated HIS3 allele and is located on one copy of chromosome XV at the native HIS3 locus. The second substrate, his3-Δ5', is located at the LEU2 locus on one copy of chromosome III, and contains a 5' truncated HIS3 allele.
Both substrates are flanked by a HO endonuclease recognition site that can be targeted for incision by HO-endonuclease. HO endonuclease recognition sites native to the MAT locus, on both copies of chromosome III, have been deleted in all strains. This prevents interactions between the recombination substrates and other broken chromosome ends from interfering with the assay. The KAN-MX-marked galactose-inducible HO endonuclease expression cassette is inserted at the TRP1 locus on chromosome IV. The substrates share 311 bp or 60 bp of the HIS3 coding sequence that can be used by the HR machinery for repair by SSA. Cells that use these substrates to repair broken chromosomes by HR form an intact HIS3 allele and a tXV::III chromosomal translocation that can be selected for by the ability to grow on medium lacking histidine (Figure 2A). Translocation frequency by HR is calculated by dividing the number of histidine prototrophic colonies that arise on selective medium by the total number of viable cells that arise after plating appropriate dilutions onto non-selective medium (Figure 2B). A variety of DNA repair mutants have been used to study the genetic control of translocation formation by SSA using this system12-14.
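The frequency calculation described above can be sketched as a small helper; the colony counts and plating dilutions in the example are hypothetical:

```python
def translocation_frequency(his_colonies, his_dilution,
                            viable_colonies, viable_dilution):
    """Translocation frequency = His+ prototrophs per viable cell,
    correcting each colony count for its plating dilution
    (illustrative helper, not code from the protocol)."""
    his_per_ml = his_colonies / his_dilution
    viable_per_ml = viable_colonies / viable_dilution
    return his_per_ml / viable_per_ml

# 40 His+ colonies from undiluted selective plating, versus
# 200 colonies from a 1e-4 dilution on non-selective medium:
translocation_frequency(40, 1.0, 200, 1e-4)  # -> 2e-05
```

Comparing this frequency across repair-mutant strains is how the genetic control of SSA-mediated translocation is dissected.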
Genetics, Issue 55, translocation formation, HO-endonuclease, Genomic Southern blot, Chromosome blot, Pulsed-field gel electrophoresis, Homologous recombination, DNA double-strand breaks, Single-strand annealing
Transferring Cognitive Tasks Between Brain Imaging Modalities: Implications for Task Design and Results Interpretation in fMRI Studies
Authors: Tracy Warbrick, Martina Reske, N. Jon Shah.
Institutions: Research Centre Jülich GmbH, Research Centre Jülich GmbH.
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here, the transfer of a paradigm with a long history of behavioral and electroencephalography (EEG) experiments, the visual oddball task, to a functional magnetic resonance imaging (fMRI) experiment is considered. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task. Determining the functions of these areas of activation is very much dependent on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration, and it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation and analysis for the effects of interest.
Behavior, Issue 91, fMRI, task design, data interpretation, cognitive neuroscience, visual oddball task, target detection
Telomere Length and Telomerase Activity; A Yin and Yang of Cell Senescence
Authors: Mary Derasmo Axelrad, Temuri Budagov, Gil Atzmon.
Institutions: Albert Einstein College of Medicine.
Telomeres are repeating DNA sequences at the ends of the chromosomes that are diverse in length and in humans can reach a length of 15,000 base pairs. The telomere serves as a bioprotective mechanism against chromosome attrition at each cell division. At a certain length, telomeres become too short to allow replication, a process that may lead to chromosome instability or cell death. Telomere length is regulated by two opposing mechanisms: attrition and elongation. Attrition occurs as each cell divides. In contrast, elongation is partially modulated by the enzyme telomerase, which adds repeating sequences to the ends of the chromosomes. In this way, telomerase can potentially reverse an aging mechanism and rejuvenate cell viability. These are crucial elements in maintaining cell life and are used to assess cellular aging. In this manuscript we describe an accurate, rapid, sophisticated, and inexpensive method to assess telomere length in multiple tissues and species. This method takes advantage of two key elements: the tandem repeat of the telomere sequence and the sensitivity of qRT-PCR in detecting differential copy numbers across tested samples. In addition, we describe a simple assay to assess telomerase activity as a complementary test to telomere length measurement.
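Relative telomere length from qRT-PCR is typically expressed as a T/S ratio (telomere repeat signal over single-copy gene signal) via the 2^-ΔΔCt method; the sketch below assumes ideal doubling per cycle and is not necessarily the authors' exact analysis:

```python
def ts_ratio(ct_telo, ct_scg, ref_ct_telo, ref_ct_scg):
    """Relative telomere length (T/S) by the 2^-ddCt method.
    ct_telo / ct_scg: Ct values for telomere and single-copy-gene
    reactions of the sample; ref_*: the same for a reference sample.
    Assumes perfect amplification efficiency (an idealization)."""
    d_sample = ct_telo - ct_scg
    d_ref = ref_ct_telo - ref_ct_scg
    return 2 ** -(d_sample - d_ref)

# Sample's telomere reaction crosses threshold one cycle earlier
# than the reference at equal single-copy-gene signal:
ts_ratio(14.0, 20.0, 15.0, 20.0)  # -> 2.0 (twice the telomere content)
```

Because telomere repeats are tandem, more template means an earlier Ct, which is exactly the copy-number sensitivity the method exploits.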
Genetics, Issue 75, Molecular Biology, Cellular Biology, Medicine, Biomedical Engineering, Genomics, Telomere length, telomerase activity, telomerase, telomeres, telomere, DNA, PCR, polymerase chain reaction, qRT-PCR, sequencing, aging, telomerase assay
Time-lapse Imaging of Primary Preneoplastic Mammary Epithelial Cells Derived from Genetically Engineered Mouse Models of Breast Cancer
Authors: Rebecca E. Nakles, Sarah L. Millman, M. Carla Cabrera, Peter Johnson, Susette Mueller, Philipp S. Hoppe, Timm Schroeder, Priscilla A. Furth.
Institutions: Georgetown University, Georgetown University, Helmholtz Zentrum München - German Research Center for Environmental Health, Georgetown University, Dankook University.
Time-lapse imaging can be used to compare behavior of cultured primary preneoplastic mammary epithelial cells derived from different genetically engineered mouse models of breast cancer. For example, time between cell divisions (cell lifetimes), apoptotic cell numbers, evolution of morphological changes, and mechanism of colony formation can be quantified and compared in cells carrying specific genetic lesions. Primary mammary epithelial cell cultures are generated from mammary glands without palpable tumor. Glands are carefully resected with clear separation from adjacent muscle, lymph nodes are removed, and single-cell suspensions of enriched mammary epithelial cells are generated by mincing mammary tissue followed by enzymatic dissociation and filtration. Single-cell suspensions are plated and placed directly under a microscope within an incubator chamber for live-cell imaging. Sixteen 650 μm x 700 μm fields in a 4x4 configuration from each well of a 6-well plate are imaged every 15 min for 5 days. Time-lapse images are examined directly to measure cellular behaviors that can include mechanism and frequency of cell colony formation within the first 24 hr of plating the cells (aggregation versus cell proliferation), incidence of apoptosis, and phasing of morphological changes. Single-cell tracking is used to generate cell fate maps for measurement of individual cell lifetimes and investigation of cell division patterns. Quantitative data are statistically analyzed to assess for significant differences in behavior correlated with specific genetic lesions.
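Given the 15-min acquisition interval above, individual cell lifetimes follow directly from the frame indices recorded during single-cell tracking; a minimal helper with hypothetical frame numbers:

```python
def cell_lifetime_hr(birth_frame, division_frame, interval_min=15):
    """Time between consecutive divisions in hours, given frame indices
    from a time-lapse acquired every interval_min minutes (15 min in
    the protocol). Frame numbers here are illustrative."""
    return (division_frame - birth_frame) * interval_min / 60

# A cell born at frame 10 that divides at frame 90:
cell_lifetime_hr(10, 90)  # -> 20.0 hours
```

Collecting such lifetimes per genotype is what allows the statistical comparison of cell-division behavior between mouse models.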
Cancer Biology, Issue 72, Medicine, Cellular Biology, Molecular Biology, Anatomy, Physiology, Oncology, Mammary Glands, Animal, Epithelial Cells, Mice, Genetically Modified, Primary Cell Culture, Time-Lapse Imaging, Early Detection of Cancer, Models, Genetic, primary cell culture, preneoplastic mammary epithelial cells, genetically engineered mice, time-lapse imaging, BRCA1, animal model
Amplifying and Quantifying HIV-1 RNA in HIV Infected Individuals with Viral Loads Below the Limit of Detection by Standard Clinical Assays
Authors: Helene Mens, Mary Kearney, Ann Wiegand, Jonathan Spindler, Frank Maldarelli, John W. Mellors, John M. Coffin.
Institutions: NCI-Frederick, University of Pittsburgh, Tufts University.
Amplifying viral genes and quantifying HIV-1 RNA in HIV-1 infected individuals with viral loads below the limit of detection by standard assays (below 50-75 copies/ml) is necessary to gain insight into viral dynamics and virus-host interactions in patients who naturally control the infection and those who are on combination antiretroviral therapy (cART). Here we describe how to amplify viral genomes by single genome sequencing (the SGS protocol)13, 19 and how to accurately quantify HIV-1 RNA in patients with low viral loads (the single-copy assay (SCA) protocol)12, 20. The single-copy assay is a real-time PCR assay whose sensitivity depends on the volume of plasma being assayed. If a single virus genome is detected in 7 ml of plasma, the RNA copy number is reported as 0.3 copies/ml. The assay has an internal control testing for the efficiency of RNA extraction, and controls for possible amplification from DNA or contamination. Patient samples are measured in triplicate. The single-genome sequencing assay (SGS), now widely used and considered to be non-labor-intensive3, 7, 12, 14, 15, is a limiting-dilution assay in which endpoint-diluted cDNA product is spread over a 96-well plate. According to the Poisson distribution, when fewer than 1/3 of the wells give product, there is an 80% chance that a given PCR product resulted from amplification of a single cDNA molecule. SGS has the advantage over cloning of not being subject to resampling and not being biased by PCR-introduced recombination19. However, the amplification success of SCA and SGS depends on primer design. Both assays were developed for HIV-1 subtype B, but can be adapted for other subtypes and other regions of the genome by changing primers, probes, and standards.
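The quoted ~80% figure follows from the Poisson model: if cDNA templates are randomly distributed across wells, the fraction of positive wells fixes the mean template count per well, and with it the chance that any positive well held exactly one template:

```python
from math import exp, log

def p_single_template(frac_positive):
    """Probability that a positive well contains exactly one cDNA
    template, assuming templates are Poisson-distributed across wells.
    frac_positive: fraction of wells yielding PCR product."""
    lam = -log(1 - frac_positive)          # mean templates per well
    # P(exactly 1) / P(at least 1):
    return lam * exp(-lam) / (1 - exp(-lam))

p_single_template(1/3)  # about 0.81, the ~80% figure quoted for SGS
```

Diluting further raises this probability but wastes wells, so 1/3 positive is the practical operating point for endpoint dilution.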
Immunology, Issue 55, single genome sequencing, SGS, real-time PCR, single-copy assay, SCA, HIV-1, ultra-sensitive, RNA extraction
DNA Extraction from Paraffin Embedded Material for Genetic and Epigenetic Analyses
Authors: Larissa A. Pikor, Katey S. S. Enfield, Heryet Cameron, Wan L. Lam.
Institutions: BC Cancer Research Centre, University of British Columbia - UBC, BC Cancer Agency, University of British Columbia - UBC.
Disease development and progression are characterized by frequent genetic and epigenetic aberrations including chromosomal rearrangements, copy number gains and losses and DNA methylation. Advances in high-throughput, genome-wide profiling technologies, such as microarrays, have significantly improved our ability to identify and detect these specific alterations. However, as technology continues to improve, sample quality and availability remain limiting factors. Furthermore, follow-up clinical information and disease outcome are often collected years after the initial specimen collection. Specimens, typically formalin-fixed and paraffin embedded (FFPE), are stored in hospital archives for years to decades. DNA can be efficiently and effectively recovered from paraffin-embedded specimens if the appropriate method of extraction is applied. High quality DNA extracted from properly preserved and stored specimens can support quantitative assays for comparisons of normal and diseased tissues and generation of genetic and epigenetic signatures 1. To extract DNA from paraffin-embedded samples, tissue cores or microdissected tissue are subjected to xylene treatment, which dissolves the paraffin from the tissue, and then rehydrated using a series of ethanol washes. Proteins and harmful enzymes such as nucleases are subsequently digested by proteinase K. The addition of lysis buffer, which contains denaturing agents such as sodium dodecyl sulfate (SDS), facilitates digestion 2. Nucleic acids are purified from the tissue lysate using buffer-saturated phenol and high speed centrifugation, which generates a biphasic solution. DNA and RNA remain in the upper aqueous phase, while proteins, lipids and polysaccharides are sequestered in the interphase and organic phase, respectively. Retention of the aqueous phase and repeated phenol extractions generates a clean sample. Following phenol extractions, RNase A is added to eliminate contaminating RNA.
Additional phenol extractions following incubation with RNase A are used to remove any remaining enzyme. The addition of sodium acetate and isopropanol precipitates DNA, and high speed centrifugation is used to pellet the DNA and facilitate isopropanol removal. Excess salts carried over from precipitation can interfere with subsequent enzymatic assays, but can be removed from the DNA by washing with 70% ethanol, followed by centrifugation to re-pellet the DNA 3. DNA is re-suspended in distilled water or the buffer of choice, quantified and stored at -20°C. Purified DNA can subsequently be used in downstream applications which include, but are not limited to, PCR, array comparative genomic hybridization 4 (array CGH), methylated DNA Immunoprecipitation (MeDIP) and sequencing, allowing for an integrative analysis of tissue/tumor samples.
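The protocol does not specify how the re-suspended DNA is quantified; one common approach is absorbance at 260 nm with the standard 50 ng/µl conversion factor for double-stranded DNA, sketched here as an assumption rather than a step from the protocol:

```python
def dsdna_conc_ng_per_ul(a260, dilution_factor=1.0):
    """Double-stranded DNA concentration from absorbance at 260 nm,
    using the standard conversion of 50 ng/ul per A260 unit.
    How the protocol's authors quantify DNA is not specified; this
    spectrophotometric approach is an illustrative assumption."""
    return a260 * 50 * dilution_factor

# A 1:10 dilution reading A260 = 0.4:
dsdna_conc_ng_per_ul(0.4, dilution_factor=10)  # -> 200.0 ng/ul
```

An A260/A280 ratio near 1.8 would additionally indicate the phenol extractions left little protein carry-over.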
Genetics, Issue 49, DNA extraction, paraffin embedded tissue, phenol:chloroform extraction, genetic analysis, epigenetic analysis
gDNA Enrichment by a Transposase-based Technology for NGS Analysis of the Whole Sequence of BRCA1, BRCA2, and 9 Genes Involved in DNA Damage Repair
Authors: Sandy Chevrier, Romain Boidot.
Institutions: Centre Georges-François Leclerc.
The widespread use of Next Generation Sequencing (NGS) has opened up new avenues for cancer research and diagnosis. NGS will bring huge amounts of new data on cancer, and especially cancer genetics. Current knowledge and future discoveries will make it necessary to study a huge number of genes that could be involved in a genetic predisposition to cancer. In this regard, we developed a Nextera design to study 11 complete genes involved in DNA damage repair. This protocol was developed to reliably study 11 genes (ATM, BARD1, BRCA1, BRCA2, BRIP1, CHEK2, PALB2, RAD50, RAD51C, RAD80, and TP53) from promoter to 3'-UTR in 24 patients simultaneously. This protocol, based on transposase technology and gDNA enrichment, offers a substantial time advantage for genetic diagnosis thanks to sample multiplexing. It can be used reliably with blood gDNA.
Genetics, Issue 92, gDNA enrichment, Nextera, NGS, DNA damage, BRCA1, BRCA2
Mapping Bacterial Functional Networks and Pathways in Escherichia coli Using Synthetic Genetic Arrays
Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili.
Institutions: University of Toronto, University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9, 10. Here, we present the key steps required to perform a quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (High frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9, 12, 13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics.
After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2 as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
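The scoring logic described above — an interaction is called when double-mutant growth deviates from what the two single mutants predict — can be sketched under a standard multiplicative fitness model. This is an illustrative calculation only, not the authors' in-house image-processing and scoring system; the function name and fitness values are hypothetical.

```python
def gi_score(w_ab, w_a, w_b):
    """Genetic interaction score under a multiplicative model.

    w_a, w_b: relative fitness of each single mutant (wild type = 1.0)
    w_ab:     observed relative fitness of the double mutant

    Returns epsilon = observed - expected. Negative scores suggest
    aggravating interactions (synthetic sickness/lethality between
    compensatory pathways); positive scores suggest alleviating
    interactions (e.g. genes in the same pathway or complex).
    """
    expected = w_a * w_b
    return w_ab - expected

# A double mutant growing far worse than its single mutants predict:
print(gi_score(0.10, 0.80, 0.75))  # about -0.5 -> aggravating

# A double mutant no sicker than either single mutant alone:
print(gi_score(0.60, 0.70, 0.70))  # positive -> alleviating
```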
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression
Microarray-based Identification of Individual HERV Loci Expression: Application to Biomarker Discovery in Prostate Cancer
Authors: Philippe Pérot, Valérie Cheynet, Myriam Decaussin-Petrucci, Guy Oriol, Nathalie Mugnier, Claire Rodriguez-Lafrasse, Alain Ruffion, François Mallet.
Institutions: Joint Unit Hospices de Lyon-bioMérieux, bioMérieux, Hospices Civils de Lyon, Lyon 1 University.
The prostate-specific antigen (PSA) is the main diagnostic biomarker for prostate cancer in clinical use, but it lacks specificity and sensitivity, particularly at low concentrations1. How best to use PSA remains an open question, both for diagnosis, since the 'gray zone' of 2.5-10 ng/ml serum concentration does not allow clear differentiation between cancer and non-cancer2, and for patient follow-up, since the analysis of post-operative PSA kinetic parameters poses considerable practical challenges3,4. Alternatively, noncoding RNAs (ncRNAs) are emerging as key molecules in human cancer, with the potential to serve as novel markers of disease, e.g. PCA3 in prostate cancer5,6, and to reveal uncharacterized aspects of tumor biology. Moreover, data from the ENCODE project published in 2012 showed that different RNA types cover about 62% of the genome, and that the number of transcriptional regulatory motifs is at least 4.5 times that found in protein-coding exons. Thus, the long terminal repeats (LTRs) of human endogenous retroviruses (HERVs) constitute a wide range of candidate transcriptional regulatory sequences, as transcriptional regulation is their primary function in infectious retroviruses. HERVs, which are spread throughout the human genome, originate from ancestral and independent infections of the germ line, followed by copy-paste propagation processes leading to multicopy families that occupy 8% of the human genome (by comparison, exons span 2%). Some HERV loci still express proteins that have been associated with several pathologies, including cancer7-10. We have designed a high-density microarray, in Affymetrix format, to optimally characterize the expression of individual HERV loci, in order to better understand whether they are active, whether they drive ncRNA transcription, and whether they modulate coding gene expression. This tool has been applied in the prostate cancer field (Figure 1).
Medicine, Issue 81, Cancer Biology, Genetics, Molecular Biology, Prostate, Retroviridae, Biomarkers, Pharmacological, Tumor Markers, Biological, Prostatectomy, Microarray Analysis, Gene Expression, Diagnosis, Human Endogenous Retroviruses, HERV, microarray, Transcriptome, prostate cancer, Affymetrix
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
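As a minimal illustration of the semi-automated end of this spectrum, the sketch below labels 4-connected components of voxels above a user-chosen global intensity threshold — the simplest building block of threshold-based segmentation. Real EM segmentation pipelines are considerably more involved; the function name, the toy 2D image, and the threshold are hypothetical.

```python
from collections import deque

def segment_by_threshold(image, threshold):
    """Label 4-connected components of pixels at or above a global threshold.

    image: 2D list of grayscale intensities (a single tomographic slice).
    Returns a 2D label map (0 = background, 1..k = segmented features),
    where the only user input is the intensity threshold.
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and labels[y][x] == 0:
                current += 1          # start a new feature
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:          # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels
```

Extending the flood fill to a third axis (the z neighbors) turns the same idea into a volumetric segmentation step.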
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice
Authors: Robert S. McNeill, Ralf S. Schmid, Ryan E. Bash, Mark Vitucci, Kristen K. White, Andrea M. Werneke, Brian H. Constance, Byron Huff, C. Ryan Miller.
Institutions: University of North Carolina School of Medicine, Emory University School of Medicine.
Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro and in vivo and may be useful in preclinical drug development for these devastating diseases.
Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
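The fractional anisotropy mentioned above is computed per voxel from the three eigenvalues of the fitted diffusion tensor; the formula below is the standard definition, not a detail specific to this group's pipeline, and the example eigenvalues are hypothetical.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA) from the three eigenvalues of the
    diffusion tensor. FA ranges from 0 (isotropic diffusion, e.g. CSF)
    to 1 (diffusion along a single axis, as in a coherent fiber tract).
    """
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean)**2 + (l2 - mean)**2 + (l3 - mean)**2)
    den = math.sqrt(l1**2 + l2**2 + l3**2)
    return math.sqrt(1.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0 -> isotropic voxel
print(fractional_anisotropy(1.7, 0.3, 0.3))  # ~0.8 -> fiber-like voxel
```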
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Authors: Melissa N. Patterson, Patrick H. Maxwell.
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, due to technical constraints, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. 
Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
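One standard way a fluctuation test converts colony counts into a mutation rate is the Luria-Delbrück p0 method, sketched below. This is an illustrative estimator, not necessarily the one used in this protocol, and the numbers in the example are hypothetical.

```python
import math

def mutation_rate_p0(cultures_without_mutants, total_cultures,
                     cells_per_culture):
    """Luria-Delbruck p0 estimate of the mutation rate.

    p0 is the fraction of parallel cultures yielding no mutant colonies
    on the selective plate; m = -ln(p0) estimates the mean number of
    mutation events per culture, and dividing by the final cell count
    gives mutations per cell per division.
    """
    p0 = cultures_without_mutants / total_cultures
    m = -math.log(p0)
    return m / cells_per_culture

# Hypothetical experiment: 37 of 100 cultures with no mutants,
# ~1e7 cells per culture at plating.
print(mutation_rate_p0(37, 100, 1e7))  # ~1e-7 mutations/cell/division
```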
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Technical Demonstration of Whole Genome Array Comparative Genomic Hybridization
Authors: Jennifer Y. Kennett, Spencer K. Watson, Heather Saprunoff, Cameron Heryet, Wan L. Lam.
Institutions: BC Cancer Research Centre, BC Cancer Agency.
Array comparative genomic hybridization (array CGH) is a method for detecting gains and losses of DNA segments or gene dosage in the genome 1. Recent advances in this technology have enabled high resolution comparison of whole genomes for the identification of genetic alterations in cancer and other genetic diseases 2. The Sub-Megabase Resolution Tiling-set array (or SMRT) array is comprised of a set of approximately thirty thousand overlapping bacterial artificial chromosome (BAC) clones that span the human genome in ~100 kilobase pair (kb) segments 2. These BAC targets are individually synthesized and spotted in duplicate on a single glass slide 2-4. Array CGH is based on the principle of competitive hybridization. Sample and reference DNA are differentially labeled with Cyanine-3 and Cyanine-5 fluorescent dyes, and co-hybridized to the array. After an incubation period the unbound samples are washed from the slide and the array is imaged. A freely available custom software package called SeeGH (www.flintbox.ca) is used to process the large volume of data collected - a single experiment generates 53,892 data points. SeeGH visualizes the log2 signal intensity ratio between the two samples at each BAC target, which is vertically aligned with chromosomal position 5,6. The SMRT array can detect alterations as small as 50 kb in size 7. The SMRT array can detect a variety of DNA rearrangement events including DNA gains, losses, amplifications and homozygous deletions. A unique advantage of the SMRT array is that one can use DNA isolated from formalin fixed paraffin embedded samples. When combined with the low input requirements of unamplified DNA (25-100 ng) this allows profiling of precious samples such as those produced by microdissection 7,8. This is attributed to the large size of each BAC hybridization target that allows the binding of sufficient labeled sample to produce signals for detection. 
Another advantage of this platform is the tolerance of tissue heterogeneity, decreasing the need for tedious tissue microdissection 8. This video protocol is a step-by-step tutorial from labeling the input DNA through to signal acquisition for the whole genome tiling path SMRT array.
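The log2 signal-intensity ratio that SeeGH plots per BAC target is a simple per-probe computation, sketched below; the intensity values and function name are hypothetical, and real pipelines normalize the two channels before taking the ratio.

```python
import math

def log2_ratios(sample_intensities, reference_intensities):
    """Per-probe log2(sample/reference) ratios for a two-color array.

    A ratio near 0 indicates equal copy number at that BAC target;
    sustained positive or negative runs along a chromosome suggest
    copy number gains or losses, respectively.
    """
    return [math.log2(s / r)
            for s, r in zip(sample_intensities, reference_intensities)]

# Hypothetical Cy3 (sample) vs Cy5 (reference) intensities for 3 probes:
print(log2_ratios([200.0, 400.0, 100.0], [200.0, 200.0, 200.0]))
# [0.0, 1.0, -1.0] -> neutral, single-copy gain, single-copy loss
```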
Cellular Biology, Issue 18, Genomics, array comparative genomic hybridization, aCGH, microarray, DNA profile, genetic signature
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
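The ~10-30 nm localization precision quoted above is commonly approximated with the Thompson-Larson-Webb formula, which depends on the photon count, PSF width, pixel size, and background noise. The sketch below applies that published approximation; the parameter values are hypothetical, not taken from this protocol.

```python
import math

def localization_precision(s, a, b, n_photons):
    """Thompson-Larson-Webb estimate of 2D localization precision.

    s: PSF standard deviation (nm), a: effective pixel size (nm),
    b: background noise standard deviation (photons),
    n_photons: detected photons from the single molecule.
    Returns the estimated localization uncertainty in nm.
    """
    var = (s**2 + a**2 / 12.0) / n_photons \
        + 8.0 * math.pi * s**4 * b**2 / (a**2 * n_photons**2)
    return math.sqrt(var)

# Hypothetical values: 100 nm PSF width, 100 nm pixels, low background,
# 1,000 detected photons -> precision of a few nanometers.
print(localization_precision(100.0, 100.0, 0.0, 1000))
```

Note the 1/sqrt(N) scaling in the low-background limit: quadrupling the photon count roughly halves the localization uncertainty.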
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
In vivo Bioluminescent Imaging of Mammary Tumors Using IVIS Spectrum
Authors: Ed Lim, Kshitij D Modi, JaeBeom Kim.
Institutions: Caliper Life Sciences.
4T1 mouse mammary tumor cells can be implanted subcutaneously in nu/nu mice to form palpable tumors in 15 to 20 days. This xenograft tumor model system is valuable for the pre-clinical in vivo evaluation of putative antitumor compounds. The 4T1 cell line has been engineered to constitutively express the firefly luciferase gene (luc2). When mice carrying 4T1-luc2 tumors are injected with luciferin the tumors emit a visual light signal that can be monitored using a sensitive optical imaging system like the IVIS Spectrum. The photon flux from the tumor is proportional to the number of light-emitting cells and the signal can be measured to monitor tumor growth and development. IVIS is calibrated to enable absolute quantitation of the bioluminescent signal and longitudinal studies can be performed over many months and over several orders of signal magnitude without compromising the quantitative result. Tumor growth can be monitored for several days by bioluminescence before the tumor size becomes palpable or measurable by traditional physical means. This rapid monitoring can provide insight into early events in tumor development or lead to shorter experimental procedures. Tumor cell death and necrosis due to hypoxia or drug treatment is indicated early by a reduction in the bioluminescent signal. This cell death might not be accompanied by a reduction in tumor size as measured by physical means. The ability to see early events in tumor necrosis has significant impact on the selection and development of therapeutic agents. Quantitative imaging of tumor growth using IVIS provides precise quantitation and accelerates the experimental process to generate results.
Cellular Biology, Issue 26, tumor, mammary, mouse, bioluminescence, in vivo, imaging, IVIS, luciferase, luciferin
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional ICP-related features are extracted, such as texture information and blood amount from the CT scans, and other recorded features, such as age and injury severity score, are also incorporated into the ICP estimation. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Monitoring Tumor Metastases and Osteolytic Lesions with Bioluminescence and Micro CT Imaging
Authors: Ed Lim, Kshitij Modi, Anna Christensen, Jeff Meganck, Stephen Oldfield, Ning Zhang.
Institutions: Caliper Life Sciences.
Following intracardiac delivery of MDA-MB-231-luc-D3H2LN cells to Nu/Nu mice, systemic metastases developed in the injected animals. Bioluminescence imaging using the IVIS Spectrum was employed to monitor the distribution and development of the tumor cells following the delivery procedure, including DLIT reconstruction to measure the tumor signal and its location. Metastatic lesions in bone tissue trigger osteolytic activity, and lesions of the tibia and femur were evaluated longitudinally using micro CT. Imaging was performed using a Quantum FX micro CT system with fast imaging and low X-ray dose. The low radiation dose allows multiple imaging sessions to be performed with a cumulative X-ray dosage far below LD50. A mouse imaging shuttle device was used to sequentially image the mice with both IVIS Spectrum and Quantum FX, achieving accurate animal positioning in both the bioluminescence and CT images. The optical and CT data sets were co-registered in three dimensions using the Living Image 4.1 software. This multi-mode approach allows close monitoring of tumor growth and development simultaneously with osteolytic activity.
Medicine, Issue 50, osteolytic lesions, micro CT, tumor, bioluminescence, in vivo, imaging, IVIS, luciferase, low dose, co-registration, 3D reconstruction
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very user-friendly interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences. 
These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
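The over-representation scoring that underlies SCOPE can be illustrated with a simple binomial tail test: how surprising is it to see a motif this often, given its background probability? This sketch is a generic illustration, not SCOPE's actual scoring function, and the counts in the example are hypothetical.

```python
from math import comb

def overrepresentation_pvalue(hits, trials, p_background):
    """Binomial tail probability P(X >= hits) for a motif observed
    `hits` times across `trials` candidate positions, given its
    background occurrence probability. A small p-value flags the
    motif as over-represented in the gene set.
    """
    return sum(comb(trials, k)
               * p_background**k
               * (1 - p_background)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical: a motif with 1% background probability seen in
# 8 of 200 promoter positions -> strongly over-represented.
print(overrepresentation_pvalue(8, 200, 0.01))
```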
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X
