JoVE Visualize
PubMed Article
SNP set association analysis for genome-wide association studies.
PLoS ONE
PUBLISHED: 01-01-2013
Genome-wide association study (GWAS) is a promising approach for identifying common genetic variants underlying diseases on the basis of millions of single nucleotide polymorphisms (SNPs). To avoid the low power caused by excessive multiple-comparison correction in single-locus association studies, several methods group SNPs into a SNP set based on genomic features and then test the joint effect of the set. We compare the performance of principal component analysis (PCA), supervised principal component analysis (SPCA), kernel principal component analysis (KPCA), and sliced inverse regression (SIR). Simulated SNP sets are generated under scenarios with 0, 1, and ≥2 causal SNPs. Our simulation results show that all of these methods can control the type I error at the nominal significance level. SPCA is consistently more powerful than the other methods across different linkage disequilibrium structures and minor allele frequencies of the simulated datasets. We also apply these four methods to a real GWAS of non-small cell lung cancer (NSCLC) in a Han Chinese population.
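The grouping-then-testing idea is straightforward to sketch. The Python snippet below is a minimal illustration of the plain-PCA variant (not the authors' implementation): the SNP set is summarized by its leading principal components, whose joint effect on case/control status is then tested with a likelihood-ratio test; the number of components and the logistic model are illustrative choices.

```python
# Minimal sketch of a PCA-based SNP-set association test (illustrative,
# not the published implementation): summarize a SNP set by its leading
# principal components, then test their joint effect on case/control
# status with a likelihood-ratio test.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def snp_set_pca_test(G, y, n_components=3):
    """G: (samples x SNPs) genotypes coded 0/1/2; y: 0/1 phenotype."""
    Gc = G - G.mean(axis=0)                        # center each SNP
    U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
    pcs = U[:, :n_components] * s[:n_components]   # PC scores
    null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0)
    full = sm.Logit(y, sm.add_constant(pcs)).fit(disp=0)
    lrt = 2 * (full.llf - null.llf)                # likelihood-ratio statistic
    return chi2.sf(lrt, df=n_components)           # asymptotic p-value

# toy run on simulated null data; the p-value should be roughly uniform
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(200, 20)).astype(float)
y = rng.integers(0, 2, size=200)
print(snp_set_pca_test(G, y))
```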
Authors: Adam J. Adler, Graham B. Wiley, Patrick M. Gaffney.
Published: 11-19-2013
ABSTRACT
Genotyping variants in the human genome has proven to be an efficient method to identify genetic associations with phenotypes. The distribution of variants within families or populations can facilitate identification of the genetic factors of disease. Illumina's panel of genotyping BeadChips allows investigators to genotype thousands or millions of single nucleotide polymorphisms (SNPs) or to analyze other genomic variants, such as copy number, across a large number of DNA samples. These SNPs can be spread throughout the genome or targeted in specific regions in order to maximize potential discovery. The Infinium assay has been optimized to yield high-quality, accurate results quickly. With proper setup, a single technician can process from a few hundred to over a thousand DNA samples per week, depending on the type of array. This protocol guides users through every step, starting with genomic DNA and ending with the scanning of the array. Using proprietary reagents, samples are amplified, fragmented, precipitated, resuspended, hybridized to the chip, extended by a single base, stained, and scanned on either an iScan or HiScan high-resolution optical imaging system. One overnight step is required to amplify the DNA. The DNA is denatured and isothermally amplified by whole-genome amplification; therefore, no PCR is required. Samples are hybridized to the arrays during a second overnight step. By the third day, the samples are ready to be scanned and analyzed. Amplified DNA may be stockpiled in large quantities, allowing bead arrays to be processed every day of the week, thereby maximizing throughput.
25 Related JoVE Articles!
Measuring Ascending Aortic Stiffness In Vivo in Mice Using Ultrasound
Authors: Maggie M. Kuo, Viachaslau Barodka, Theodore P. Abraham, Jochen Steppan, Artin A. Shoukas, Mark Butlin, Alberto Avolio, Dan E. Berkowitz, Lakshmi Santhanam.
Institutions: Johns Hopkins University, Macquarie University.
We present a protocol for measuring in vivo aortic stiffness in mice using high-resolution ultrasound imaging. Aortic diameter is measured by ultrasound and aortic blood pressure is measured invasively with a solid-state pressure catheter. Blood pressure is raised then lowered incrementally by intravenous infusion of the vasoactive drugs phenylephrine and sodium nitroprusside. Aortic diameter is measured for each pressure step to characterize the pressure-diameter relationship of the ascending aorta. Stiffness indices can then be calculated from the measured pressure-diameter relationship; calculation of arterial compliance is described in this protocol. This technique can be used to investigate mechanisms underlying increased aortic stiffness associated with cardiovascular disease and aging. The technique produces a physiologically relevant measure of stiffness compared to ex vivo approaches because physiological influences on aortic stiffness are incorporated in the measurement. The primary limitation of this technique is the measurement error introduced by the movement of the aorta during the cardiac cycle. This motion can be compensated for by adjusting the probe location to follow the aortic movement, by making multiple measurements of the aortic pressure-diameter relationship, and by expanding the experimental group size.
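For readers unfamiliar with the compliance calculation referred to above, a minimal sketch follows: compliance is commonly taken as the slope of the pressure-area (or pressure-diameter) relationship. The numbers below are invented, and the protocol's exact formula may differ.

```python
# Hedged sketch: estimate arterial compliance as the slope of the
# pressure-area relationship, with lumen area computed from the measured
# diameter. All values are hypothetical.
import numpy as np

pressure_mmHg = np.array([60.0, 80.0, 100.0, 120.0, 140.0])  # pressure steps
diameter_mm = np.array([1.20, 1.28, 1.34, 1.38, 1.41])       # aortic diameter

area_mm2 = np.pi * (diameter_mm / 2.0) ** 2   # cross-sectional lumen area
# linear fit of area on pressure; the slope is the compliance (mm^2/mmHg)
compliance, _intercept = np.polyfit(pressure_mmHg, area_mm2, 1)
print(f"compliance ~ {compliance:.4f} mm^2/mmHg")
```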
Medicine, Issue 94, Aortic stiffness, ultrasound, in vivo, aortic compliance, elastic modulus, mouse model, cardiovascular disease
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed at assisting insects that are beneficial for agriculture. This understanding can also be used to discover solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Detection of the Genome and Transcripts of a Persistent DNA Virus in Neuronal Tissues by Fluorescent In situ Hybridization Combined with Immunostaining
Authors: Frédéric Catez, Antoine Rousseau, Marc Labetoulle, Patrick Lomonte.
Institutions: CNRS UMR 5534, Université de Lyon 1, LabEX DEVweCAN, CNRS UPR 3296, CNRS UMR 5286.
Single cell codetection of a gene, its RNA product and cellular regulatory proteins is critical to study gene expression regulation. This is a challenge in the field of virology, in particular for nuclear-replicating persistent DNA viruses that involve animal models for their study. Herpes simplex virus type 1 (HSV-1) establishes a life-long latent infection in peripheral neurons. Latent virus serves as a reservoir, from which it reactivates and induces a new herpetic episode. The cell biology of HSV-1 latency remains poorly understood, in part due to the lack of methods to detect HSV-1 genomes in situ in animal models. We describe a DNA-fluorescent in situ hybridization (FISH) approach that efficiently detects low-copy viral genomes within sections of neuronal tissues from infected animal models. The method relies on heat-based antigen unmasking and directly labeled homemade or commercially available DNA probes. We developed a triple staining approach, combining DNA-FISH with RNA-FISH and immunofluorescence, using peroxidase-based signal amplification to accommodate each staining requirement. A major improvement is the ability to obtain, within 10 µm tissue sections, low-background signals that can be imaged at high resolution by confocal microscopy and wide-field conventional epifluorescence. Additionally, the triple staining worked with a wide range of antibodies directed against cellular and viral proteins. The complete protocol takes 2.5 days to accommodate antibody and probe penetration within the tissue.
Neuroscience, Issue 83, Life Sciences (General), Virology, Herpes Simplex Virus (HSV), Latency, In situ hybridization, Nuclear organization, Gene expression, Microscopy
Rapid and Efficient Zebrafish Genotyping Using PCR with High-resolution Melt Analysis
Authors: Lingyan Xing, Tyler S. Quist, Tamara J. Stevenson, Timothy J. Dahlem, Joshua L. Bonkowsky.
Institutions: University of Utah School of Medicine.
Zebrafish is a powerful vertebrate model system for studying development, modeling disease, and performing drug screening. Recently a variety of genetic tools have been introduced, including multiple strategies for inducing mutations and generating transgenic lines. However, large-scale screening is limited by traditional genotyping methods, which are time-consuming and labor-intensive. Here we describe a technique to analyze zebrafish genotypes by PCR combined with high-resolution melting analysis (HRMA). This approach is rapid, sensitive, and inexpensive, with lower risk of contamination artifacts. Genotyping by PCR with HRMA can be used for embryos or adult fish, including in high-throughput screening protocols.
Basic Protocol, Issue 84, genotyping, high-resolution melting analysis (HRMA), PCR, zebrafish, mutation, transgenes
Isolation of mRNAs Associated with Yeast Mitochondria to Study Mechanisms of Localized Translation
Authors: Chen Lesnik, Yoav Arava.
Institutions: Technion - Israel Institute of Technology.
Most mitochondrial proteins are encoded in the nucleus and need to be imported into the organelle. Import may occur while the protein is synthesized near the mitochondria. Support for this possibility is derived from recent studies, in which many mRNAs encoding mitochondrial proteins were shown to be localized to the vicinity of mitochondria. Together with earlier demonstrations of ribosomes’ association with the outer membrane, these results suggest a localized translation process. Such localized translation may improve import efficiency, provide unique regulation sites and minimize cases of ectopic expression. Diverse methods have been used to characterize the factors and elements that mediate localized translation. Standard among these is subcellular fractionation by differential centrifugation. This protocol has the advantage of isolating mRNAs, ribosomes and proteins in a single procedure. These can then be characterized by various molecular and biochemical methods. Furthermore, transcriptomics and proteomics methods can be applied to the resulting material, thereby allowing genome-wide insights. The utilization of yeast as a model organism for such studies has the advantages of speed, cost and simplicity. Furthermore, the advanced genetic tools and available deletion strains facilitate verification of candidate factors.
Biochemistry, Issue 85, mitochondria, mRNA localization, Yeast, S. cerevisiae, microarray, localized translation, biochemical fractionation
Polysome Fractionation and Analysis of Mammalian Translatomes on a Genome-wide Scale
Authors: Valentina Gandin, Kristina Sikström, Tommy Alain, Masahiro Morita, Shannon McLaughlan, Ola Larsson, Ivan Topisirovic.
Institutions: McGill University, Karolinska Institutet.
mRNA translation plays a central role in the regulation of gene expression and represents the most energy consuming process in mammalian cells. Accordingly, dysregulation of mRNA translation is considered to play a major role in a variety of pathological states including cancer. Ribosomes also host chaperones, which facilitate folding of nascent polypeptides, thereby modulating function and stability of newly synthesized polypeptides. In addition, emerging data indicate that ribosomes serve as a platform for a repertoire of signaling molecules, which are implicated in a variety of post-translational modifications of newly synthesized polypeptides as they emerge from the ribosome, and/or components of translational machinery. Herein, a well-established method of ribosome fractionation using sucrose density gradient centrifugation is described. In conjunction with the in-house developed “anota” algorithm this method allows direct determination of differential translation of individual mRNAs on a genome-wide scale. Moreover, this versatile protocol can be used for a variety of biochemical studies aiming to dissect the function of ribosome-associated protein complexes, including those that play a central role in folding and degradation of newly synthesized polypeptides.
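The anota algorithm itself is distributed as an R/Bioconductor package; its core idea, per-gene analysis of partial variance, can be approximated as a regression of polysome-associated mRNA on total mRNA with a test for a group effect that is independent of transcription. The Python toy below illustrates that idea only and is not the anota implementation.

```python
# Conceptual sketch of translatome analysis for a single gene: regress
# polysome-associated mRNA on total mRNA and ask whether group membership
# shifts the relationship, i.e. changes translation independently of
# transcription. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
total = rng.normal(10, 1, 12)                 # log2 total mRNA, 12 samples
group = np.repeat([0, 1], 6)                  # control vs treatment
polysome = 0.8 * total + 0.5 * group + rng.normal(0, 0.2, 12)

X = sm.add_constant(np.column_stack([total, group]))
fit = sm.OLS(polysome, X).fit()
print(fit.params)      # [intercept, slope on total mRNA, group effect]
print(fit.pvalues[2])  # p-value for a translation-specific group effect
```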
Biochemistry, Issue 87, Cells, Eukaryota, Nutritional and Metabolic Diseases, Neoplasms, Metabolic Phenomena, Cell Physiological Phenomena, mRNA translation, ribosomes, protein synthesis, genome-wide analysis, translatome, mTOR, eIF4E, 4E-BP1
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
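As a rough illustration of how replication capacity can be quantified (the exact readout in the protocol may differ), one common approach is to fit the slope of log-transformed virus output during exponential growth and normalize it to the parental MJ4 virus. All values below are hypothetical.

```python
# Hedged sketch: replication capacity as the slope of log10 virus output
# (e.g. p24 antigen) over time, normalized to the parental MJ4 virus.
# The measurements are invented for illustration.
import numpy as np

days = np.array([2.0, 4.0, 6.0, 8.0])
p24_chimera = np.array([55.0, 410.0, 3100.0, 21000.0])  # hypothetical pg/ml
p24_mj4 = np.array([60.0, 520.0, 4700.0, 39000.0])

slope_chimera = np.polyfit(days, np.log10(p24_chimera), 1)[0]
slope_mj4 = np.polyfit(days, np.log10(p24_mj4), 1)[0]
print(f"normalized replication capacity ~ {slope_chimera / slope_mj4:.2f}")
```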
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Analysis of Nephron Composition and Function in the Adult Zebrafish Kidney
Authors: Kristen K. McCampbell, Kristin N. Springer, Rebecca A. Wingert.
Institutions: University of Notre Dame.
The zebrafish model has emerged as a relevant system to study kidney development, regeneration and disease. Both the embryonic and adult zebrafish kidneys are composed of functional units known as nephrons, which are highly conserved with other vertebrates, including mammals. Research in zebrafish has recently demonstrated that two distinctive phenomena transpire after adult nephrons incur damage: first, there is robust regeneration within existing nephrons that replaces the destroyed tubule epithelial cells; second, entirely new nephrons are produced from renal progenitors in a process known as neonephrogenesis. In contrast, humans and other mammals seem to have only a limited ability for nephron epithelial regeneration. To date, the mechanisms responsible for these kidney regeneration phenomena remain poorly understood. Since adult zebrafish kidneys undergo both nephron epithelial regeneration and neonephrogenesis, they provide an outstanding experimental paradigm to study these events. Further, there is a wide range of genetic and pharmacological tools available in the zebrafish model that can be used to delineate the cellular and molecular mechanisms that regulate renal regeneration. One essential aspect of such research is the evaluation of nephron structure and function. This protocol describes a set of labeling techniques that can be used to gauge renal composition and test nephron functionality in the adult zebrafish kidney. Thus, these methods are widely applicable to the future phenotypic characterization of adult zebrafish kidney injury paradigms, which include but are not limited to, nephrotoxicant exposure regimes or genetic methods of targeted cell death such as the nitroreductase mediated cell ablation technique. Further, these methods could be used to study genetic perturbations in adult kidney formation and could also be applied to assess renal status during chronic disease modeling.
Cellular Biology, Issue 90, zebrafish; kidney; nephron; nephrology; renal; regeneration; proximal tubule; distal tubule; segment; mesonephros; physiology; acute kidney injury (AKI)
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
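As a concrete, deliberately simple example of the semi-automated end of this spectrum, the sketch below thresholds a synthetic volume and labels connected components in 3D. Real EM volumes typically require denoising and much more careful threshold selection; this is a sketch of the principle, not a production pipeline.

```python
# Minimal semi-automated 3D segmentation step: global threshold followed
# by connected-component labeling. The volume is synthetic.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
volume = rng.normal(0.0, 1.0, (64, 64, 64))  # stand-in for an EM volume
volume[20:40, 20:40, 20:40] += 4.0           # one bright "organelle"

mask = volume > 2.0                          # global intensity threshold
labels, n = ndimage.label(mask)              # 3D connected components
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
print(n, "components; largest =", int(sizes.max()), "voxels")
```

Filtering components by size is one simple way to separate the feature of interest from threshold noise before surface rendering.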
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
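The minimum-norm estimation mentioned in the keywords has a compact closed form, s_hat = L^T (L L^T + lambda*I)^(-1) x, where L is the leadfield matrix derived from the head model and lambda a regularization parameter. The sketch below uses a random leadfield and an ad hoc lambda purely for illustration; in practice L comes from the individual or age-specific MRI-based head model.

```python
# Sketch of a regularized minimum-norm inverse solution:
#   s_hat = L^T (L L^T + lam * I)^(-1) x
# L is random here; in real use it is computed from the head model.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_sources = 128, 5000
L = rng.normal(size=(n_channels, n_sources))  # leadfield (channels x sources)
x = rng.normal(size=n_channels)               # one EEG sample across channels

lam = 0.1 * np.trace(L @ L.T) / n_channels    # ad hoc regularization strength
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), x)
print(s_hat.shape)                            # (5000,) source amplitude estimates
```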
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Quantification of Orofacial Phenotypes in Xenopus
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Institutions: Virginia Commonwealth University.
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development will allow for more rigorous analysis of orofacial phenotypes upon perturbation with substances that can genetically or molecularly manipulate gene expression or protein function. Using two dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where significant landmark positions have changed. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware software, and thus will be a valuable research and teaching resource.
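The landmark-based part of the analysis can be sketched in a few lines: after a crude size-and-position normalization (a stand-in for the full Procrustes alignment that morphometrics software performs), PCA of the flattened landmark coordinates yields the principal components used to discriminate treatment groups. The data below are simulated.

```python
# Toy geometric-morphometrics sketch: normalize each specimen's landmark
# configuration for position and size, then run PCA on the coordinates.
import numpy as np

rng = np.random.default_rng(4)
n_specimens, n_landmarks = 30, 8
shapes = rng.normal(size=(n_specimens, n_landmarks, 2))  # x,y landmarks

shapes -= shapes.mean(axis=1, keepdims=True)             # remove position
shapes /= np.linalg.norm(shapes, axis=(1, 2), keepdims=True)  # remove size
flat = shapes.reshape(n_specimens, -1)

flat -= flat.mean(axis=0)                    # center variables for PCA
U, s, Vt = np.linalg.svd(flat, full_matrices=False)
pc_scores = U * s                            # specimen scores on each PC
print(pc_scores[:, :2])                      # first two PCs for a scatter plot
```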
Developmental Biology, Issue 93, Orofacial quantification, geometric morphometrics, Xenopus, orofacial development, orofacial defects, shape changes, facial dimensions
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
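The sequence-selection stage is, at heart, an optimization over sequence space. The toy below runs a Metropolis-style random search against a deliberately trivial energy function (mismatches to an arbitrary target sequence); it sketches only the accept/reject logic of such a search, not the rigorous optimization models used by Protein WISDOM.

```python
# Toy Metropolis search over sequence space with a trivial stand-in
# "energy" (mismatches to an arbitrary target). Illustrative only.
import math
import random

AA = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"   # arbitrary "ideal" sequence for the toy energy
random.seed(0)

def energy(seq):
    return sum(a != b for a, b in zip(seq, TARGET))

seq = [random.choice(AA) for _ in range(len(TARGET))]
e = energy(seq)
for _ in range(5000):
    i = random.randrange(len(seq))
    old = seq[i]
    seq[i] = random.choice(AA)            # propose a point mutation
    e_new = energy(seq)
    if e_new <= e or random.random() < math.exp(e - e_new):
        e = e_new                         # accept the mutation
    else:
        seq[i] = old                      # reject and revert
print("".join(seq), e)
```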
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by applying a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
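Fractional anisotropy, the central voxelwise metric here, has a standard closed form in terms of the three diffusion tensor eigenvalues: FA = sqrt(3/2) * sqrt(sum_i (lambda_i - mean_lambda)^2) / sqrt(sum_i lambda_i^2), where mean_lambda is the mean diffusivity. A short check with white-matter-like values:

```python
# Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues,
# using the standard definition. Example eigenvalues (in mm^2/s) are
# chosen to be roughly white-matter-like.
import numpy as np

def fractional_anisotropy(eigvals):
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()  # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

print(fractional_anisotropy([1.6e-3, 0.4e-3, 0.3e-3]))  # ~0.75
```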
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
An Allele-specific Gene Expression Assay to Test the Functional Basis of Genetic Associations
Authors: Silvia Paracchini, Anthony P. Monaco, Julian C. Knight.
Institutions: University of Oxford.
The number of significant genetic associations with common complex traits is constantly increasing. However, most of these associations have not been understood at the molecular level. One of the mechanisms mediating the effect of DNA variants on phenotypes is gene expression, which has been shown to be particularly relevant for complex traits1. This method tests, in a cellular context, the effect of specific DNA sequences on gene expression. The principle is to measure the relative abundance of transcripts arising from the two alleles of a gene, analyzing cells which carry one copy of the DNA sequences associated with disease (the risk variants)2,3. Therefore, the cells used for this method should meet two fundamental genotypic requirements: they have to be heterozygous both for the DNA risk variants and for DNA markers, typically coding polymorphisms, which can distinguish transcripts based on their chromosomal origin (Figure 1). The DNA risk variants and DNA markers do not need to have the same allele frequency, but the phase (haplotypic) relationship of the genetic markers needs to be understood. It is also important to choose cell types which express the gene of interest. This protocol refers specifically to the procedure adopted to extract nucleic acids from fibroblasts, but the method is equally applicable to other cell types including primary cells. DNA and RNA are extracted from the selected cell lines and cDNA is generated. DNA and cDNA are analyzed with a primer extension assay, designed to target the coding DNA markers4. The primer extension assay is carried out using the MassARRAY (Sequenom)5 platform according to the manufacturer's specifications. Primer extension products are then analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/MS). Because the selected markers are heterozygous, they will generate two peaks on the MS profiles. The area of each peak is proportional to the transcript abundance and can be measured with a function of the MassARRAY Typer software to generate an allelic ratio (allele 1:allele 2) calculation. The allelic ratio obtained for cDNA is normalized using that measured from genomic DNA, where the allelic ratio is expected to be 1:1, to correct for technical artifacts. Markers with a normalized allelic ratio significantly different from 1 indicate that the amount of transcript generated from the two chromosomes in the same cell is different, suggesting that the DNA variants associated with the phenotype have an effect on gene expression. Experimental controls should be used to confirm the results.
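The normalization step described above reduces to a single division: the cDNA allelic ratio is divided by the genomic DNA ratio (expected 1:1) to correct for technical artifacts. With hypothetical MALDI-TOF peak areas:

```python
# Allele-specific expression readout with hypothetical peak areas: the cDNA
# allelic ratio is normalized by the genomic DNA ratio to correct for
# technical artifacts in peak quantification.
cdna_allele1, cdna_allele2 = 8200.0, 4100.0  # cDNA peak areas
gdna_allele1, gdna_allele2 = 5100.0, 4900.0  # genomic DNA peak areas

raw_ratio = cdna_allele1 / cdna_allele2
correction = gdna_allele1 / gdna_allele2
normalized_ratio = raw_ratio / correction
print(f"normalized allelic ratio ~ {normalized_ratio:.2f}")  # ~1.92 here
```

A normalized ratio significantly different from 1, as in this made-up example, would suggest an allele-specific effect on expression.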
Cellular Biology, Issue 45, Gene expression, regulatory variant, haplotype, association study, primer extension, MALDI-TOF mass spectrometry, single nucleotide polymorphism, allele-specific
Isolation of Fidelity Variants of RNA Viruses and Characterization of Virus Mutation Frequency
Authors: Stéphanie Beaucourt, Antonio V. Bordería, Lark L. Coffey, Nina F. Gnädig, Marta Sanz-Ramos, Yasnee Beeharry, Marco Vignuzzi.
Institutions: Institut Pasteur .
RNA viruses use RNA-dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measurement of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity-altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity-altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
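A common way to summarize the sequencing readout is a mutation frequency expressed per 10,000 nucleotides sequenced, compared between a candidate fidelity variant and wild type (the exact denominator varies by study; the counts below are invented).

```python
# Mutation frequency as mutations per 10,000 nucleotides sequenced,
# compared between wild type and a candidate high-fidelity variant.
# All counts are hypothetical.
def mutation_frequency(n_mutations, n_nucleotides):
    return 1e4 * n_mutations / n_nucleotides

wt = mutation_frequency(n_mutations=42, n_nucleotides=150_000)
variant = mutation_frequency(n_mutations=18, n_nucleotides=150_000)
print(f"WT: {wt:.2f} vs variant: {variant:.2f} mutations per 10 kb")
```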
Immunology, Issue 52, Polymerase fidelity, RNA virus, mutation frequency, mutagen, RNA polymerase, viral evolution
Genotyping of Plant and Animal Samples without Prior DNA Purification
Authors: Pak Y. Chum, Josh D. Haimes, Chas P. André, Pia K. Kuusisto, Melissa L. Kelley.
Institutions: Thermo Fisher Scientific.
The Direct PCR approach facilitates PCR amplification directly from small amounts of unpurified samples, and is demonstrated here for several plant and animal tissues (Figure 1). Direct PCR is based on specially engineered Thermo Scientific Phusion and Phire DNA Polymerases, which include a double-stranded DNA binding domain that gives them unique properties such as high tolerance of inhibitors. PCR-based target DNA detection has numerous applications in plant research, including plant genotype analysis and verification of transgenes. PCR from plant tissues traditionally involves an initial DNA isolation step, which may require expensive or toxic reagents. The process is time consuming and increases the risk of cross contamination1, 2. Conversely, by using the Thermo Scientific Phire Plant Direct PCR Kit, the target DNA can be easily detected without prior DNA extraction. In the model demonstrated here, an example of derived cleaved amplified polymorphic sequence (dCAPS) analysis3,4 is performed directly on Arabidopsis plant leaves. dCAPS genotyping assays can be used to identify single nucleotide polymorphisms (SNPs) by SNP allele-specific restriction endonuclease digestion3. Some plant samples tend to be more challenging when using Direct PCR methods as they contain components that interfere with PCR, such as phenolic compounds. In these cases, an additional step to remove the compounds is traditionally required2,5. Here, this problem is overcome by using a quick and easy dilution protocol followed by Direct PCR amplification (Figure 1). Leaves from a fifteen-year-old oak are used as a model for challenging plants, as the specimen contains high amounts of phenolic compounds including tannins. Gene transfer into mice is broadly used to study the roles of genes in development, physiology and human disease. The use of these animals requires screening for the presence of the transgene, usually with PCR. Traditionally, this involves a time consuming DNA isolation step, during which DNA for PCR analysis is purified from ear, tail or toe tissues6,7. However, with the Thermo Scientific Phire Animal Tissue Direct PCR Kit, transgenic mice can be genotyped without prior DNA purification. In this protocol, transgenic mouse genotyping is achieved directly from mouse ear tissues, as demonstrated here for a challenging example where only one primer set is used for amplification of two fragments differing greatly in size.
Genetics, Issue 67, Molecular Biology, Plant Biology, Medicine, Direct PCR, DNA amplification, DNA purification, dCAPS, PCR-based target DNA detection, genotyping, Arabidopsis, oak, mouse tissues
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Authors: Francesco Vallania, Enrique Ramos, Sharon Cresci, Robi D. Mitra, Todd E. Druley.
Institutions: Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis, suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remain prohibitive for many investigators. To address this need, we have developed a pooled sequencing approach1,9 and a novel software package1 for highly accurate rare variant detection from the resulting data. The ability to pool genomes from entire populations of affected individuals and survey the degree of genetic variation at multiple targeted regions in a single sequencing library provides excellent cost and time savings over traditional single-sample sequencing methodology. With a mean sequencing coverage per allele of 25-fold, our custom algorithm, SPLINTER, uses an internal variant calling control strategy to call insertions, deletions and substitutions up to four base pairs in length with high sensitivity and specificity from pools containing variants as rare as 1 mutant allele in 500 individuals. Here we describe the method for preparing the pooled sequencing library, followed by step-by-step instructions on how to use the SPLINTER package for pooled sequencing analysis (http://www.ibridgenetwork.org/wustl/splinter). We show a comparison between pooled sequencing of 947 individuals, all of whom also underwent genome-wide array genotyping, at over 20 kb of sequencing per person. Concordance between array genotyping and the tagged and novel variants called in the pooled sample was excellent. This method can be easily scaled up to any number of genomic loci and any number of individuals. By incorporating internal positive and negative amplicon controls at ratios that mimic the population under study, the algorithm can be calibrated for optimal performance. This strategy can also be modified for use with hybridization capture or individual-specific barcodes, and can be applied to the sequencing of naturally heterogeneous samples, such as tumor DNA.
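A quick back-of-envelope check of the stated detection limit (the calculation is ours, not part of SPLINTER): one mutant allele in a pool of 500 diploid individuals has expected frequency 1/(2 x 500) = 0.001, so 25-fold per-allele coverage implies roughly 25 reads supporting the variant.

```python
# Back-of-envelope for pooled sequencing with a binomial read-count model;
# parameters mirror the text (pool of 500 individuals, 25-fold coverage
# per allele). This is an illustration, not the SPLINTER algorithm.
from scipy.stats import binom

n_individuals = 500
allele_fraction = 1 / (2 * n_individuals)  # one mutant allele -> 0.001
total_reads = 25 * 2 * n_individuals       # 25-fold coverage per allele

expected = total_reads * allele_fraction
p_under_10 = binom.cdf(9, total_reads, allele_fraction)
print(f"expected supporting reads: {expected:.0f}")   # ~25
print(f"P(fewer than 10 supporting reads) = {p_under_10:.2e}")
```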
Genetics, Issue 64, Genomics, Cancer Biology, Bioinformatics, Pooled DNA sequencing, SPLINTER, rare genetic variants, genetic screening, phenotype, high throughput, computational analysis, DNA, PCR, primers
A Practical and Novel Method to Extract Genomic DNA from Blood Collection Kits for Plasma Protein Preservation
Authors: Jon Waters, Vishal Dhere, Adam Benjamin, Arvind Sekar, Archana Kumar, Sampath Prahalad, David T. Okou, Subra Kugathasan.
Institutions: Emory University School of Medicine and Children's Health Care of Atlanta.
Laboratory tests can be done on the cellular or fluid portions of the blood. The choice of blood collection tube determines the portion of the blood that can be analyzed (whole blood, plasma or serum). Laboratories involved in studying the genetic basis of human disorders rely on anticoagulated whole blood collected in EDTA-containing vacutainers as the source of DNA for genetic / genomic analysis. Because most clinical laboratories perform biochemical, serologic and viral testing as a first step in phenotypic outcome investigation, anticoagulated blood is also collected in heparin-containing tubes (plasma tubes). Therefore, when DNA and plasma are needed for simultaneous and parallel analyses of both genomic and proteomic data, it is customary to collect blood in both EDTA and heparin tubes. If blood could be collected in a single tube and serve as a source for both plasma and DNA, that method would be an advance over existing methods. The use of the compacted blood after plasma extraction represents an alternative source of genomic DNA, thus minimizing the amount of blood samples processed and reducing the number of samples required from each patient. This ultimately saves time and resources. The BD P100 blood collection system for plasma protein preservation was created as an improvement over previous plasma or serum collection tubes1, to stabilize the protein content of blood, enabling better protein biomarker discovery and proteomics experimentation from human blood. The BD P100 tubes contain 15.8 ml of spray-dried K2EDTA and a lyophilized proprietary broad-spectrum cocktail of protease inhibitors to prevent coagulation and stabilize the plasma proteins. They also include a mechanical separator, which provides a physical barrier between plasma and cell pellets after centrifugation. Few methods have been devised to extract DNA from clotted blood samples collected in older plasma tubes2-4. Challenges with these methods were mainly associated with the type of separator inside the tubes (gel separator) and included difficulty in recovering the clotted blood, the inconvenience of fragmenting or dispersing the clot, and obstruction of clot extraction by the separation gel. We present the first method that extracts and purifies genomic DNA from blood drawn into the new BD P100 tubes. We compare the quality of the DNA samples from P100 tubes to those from EDTA tubes. Our approach is simple and efficient. It involves four major steps: 1) the use of a BD P100 plasma tube (BD Diagnostics, Sparks, MD, USA) with mechanical separator for blood collection, 2) the removal of the mechanical separator using a combination of sucrose and a sterile metallic paperclip hook, 3) the separation of the buffy coat layer containing the white cells, and 4) the isolation of genomic DNA from the buffy coat using a regular commercial DNA extraction kit or a similar standard protocol.
Genetics, Issue 75, Molecular Biology, Cellular Biology, Medicine, Biochemistry, Hematology, Proteins, Genomics, genomic DNA, blood collection, P100 tubes, DNA extraction, buffy coat isolation, genotyping assays, red blood, whole blood, plasma, DNA, assay, genotyping
Combined Immunofluorescence and DNA FISH on 3D-preserved Interphase Nuclei to Study Changes in 3D Nuclear Organization
Authors: Julie Chaumeil, Mariann Micsinai, Jane A. Skok.
Institutions: New York University School of Medicine, New York University Center for Health Informatics and Bioinformatics, NYU Cancer Institute, Yale University School of Medicine .
Fluorescent in situ hybridization using DNA probes on 3-dimensionally preserved nuclei followed by 3D confocal microscopy (3D DNA FISH) represents the most direct way to visualize the location of gene loci, chromosomal sub-regions or entire territories in individual cells. This type of analysis provides insight into the global architecture of the nucleus as well as the behavior of specific genomic loci and regions within the nuclear space. Immunofluorescence, on the other hand, permits the detection of nuclear proteins (modified histones, histone variants and modifiers, transcription machinery and factors, nuclear sub-compartments, etc.). The major challenge in combining immunofluorescence and 3D DNA FISH is, on the one hand, to preserve the epitope detected by the antibody as well as the 3D architecture of the nucleus, and on the other hand, to allow the penetration of the DNA probe to detect gene loci or chromosome territories 1-5. Here we provide a protocol that combines visualization of chromatin modifications with genomic loci in 3D preserved nuclei.
Genetics, Issue 72, Molecular Biology, Bioinformatics, Cancer Biology, Pathology, Biomedical Engineering, Immunology, Intranuclear Space, Nuclear Matrix, Fluorescence in situ Hybridization, FISH, 3D DNA FISH, DNA, immunofluorescence, immuno-FISH, 3D microscopy, Nuclear organization, interphase nuclei, chromatin modifications
Generation of High Quality Chromatin Immunoprecipitation DNA Template for High-throughput Sequencing (ChIP-seq)
Authors: Sandra Deliard, Jianhua Zhao, Qianghua Xia, Struan F.A. Grant.
Institutions: Children's Hospital of Philadelphia Research Institute, University of Pennsylvania .
ChIP-sequencing (ChIP-seq) combines chromatin immunoprecipitation (ChIP) with massively parallel sequencing to identify the repertoire of mammalian DNA sequences bound by transcription factors in vivo. "Next-generation" sequencing technologies provide a 1-2 order-of-magnitude increase in the amount of sequence that can be cost-effectively generated compared with older technologies, allowing ChIP-seq methods to directly provide whole-genome coverage for effective profiling of mammalian protein-DNA interactions. For successful ChIP-seq approaches, one must generate high-quality ChIP DNA template to obtain the best sequencing outcomes. The description here is based on experience with the protein product of the gene most strongly implicated in the pathogenesis of type 2 diabetes, namely transcription factor 7-like 2 (TCF7L2). This factor has also been implicated in various cancers. Outlined is how to generate high-quality ChIP DNA template derived from the colorectal carcinoma cell line HCT116 in order to build a high-resolution map through sequencing to determine the genes bound by TCF7L2, giving further insight into its key role in the pathogenesis of complex traits.
Molecular Biology, Issue 74, Genetics, Biochemistry, Microbiology, Medicine, Proteins, DNA-Binding Proteins, Transcription Factors, Chromatin Immunoprecipitation, Genes, chromatin, immunoprecipitation, ChIP, DNA, PCR, sequencing, antibody, cross-link, cell culture, assay
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Authors: Phoebe Spetsieris, Yilong Ma, Shichun Peng, Ji Hyun Ko, Vijay Dhawan, Chris C. Tang, David Eidelberg.
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
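The preprocessing and decomposition steps described above (log transform, removal of subject and voxel means, PCA) can be sketched compactly. The code below is a bare-bones illustration of that pipeline on simulated data, not the in-house software.

```python
# Skeleton of SSM-style preprocessing and PCA: log-transform, double
# mean centering, then SVD; each subject's score on a component is its
# expression of that covariance pattern. Data are simulated.
import numpy as np

rng = np.random.default_rng(5)
data = rng.lognormal(mean=2.0, sigma=0.3, size=(40, 10_000))  # subjects x voxels

logd = np.log(data)
srp = logd - logd.mean(axis=1, keepdims=True)  # remove subject means
srp -= srp.mean(axis=0, keepdims=True)         # remove voxel (group) means

U, s, Vt = np.linalg.svd(srp, full_matrices=False)
gis_pattern = Vt[0]                 # first covariance pattern (voxel weights)
subject_scores = U[:, 0] * s[0]     # pattern expression score per subject
print(subject_scores.round(2))
```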
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
In Vivo Modeling of the Morbid Human Genome using Danio rerio
Authors: Adrienne R. Niederriter, Erica E. Davis, Christelle Golzio, Edwin C. Oh, I-Chun Tsai, Nicholas Katsanis.
Institutions: Duke University Medical Center, Duke University.
Here, we present methods for the development of assays to query potentially clinically significant nonsynonymous changes using in vivo complementation in zebrafish. Zebrafish (Danio rerio) are a useful animal system due to their experimental tractability; embryos are transparent to enable facile viewing, undergo rapid development ex vivo, and can be genetically manipulated.1 These aspects have allowed for significant advances in the analysis of embryogenesis, molecular processes, and morphogenetic signaling. Taken together, the advantages of this vertebrate model make zebrafish highly amenable to modeling the developmental defects in pediatric disease, and in some cases, adult-onset disorders. Because the zebrafish genome is highly conserved with that of humans (~70% orthologous), it is possible to recapitulate human disease states in zebrafish. This is accomplished either through the injection of mutant human mRNA to induce dominant negative or gain of function alleles, or utilization of morpholino (MO) antisense oligonucleotides to suppress genes to mimic loss of function variants. Through complementation of MO-induced phenotypes with capped human mRNA, our approach enables the interpretation of the deleterious effect of mutations on human protein sequence based on the ability of mutant mRNA to rescue a measurable, physiologically relevant phenotype. Modeling of the human disease alleles occurs through microinjection of zebrafish embryos with MO and/or human mRNA at the 1-4 cell stage, and phenotyping up to seven days post fertilization (dpf). This general strategy can be extended to a wide range of disease phenotypes, as demonstrated in the following protocol. We present our established models for morphogenetic signaling, craniofacial, cardiac, vascular integrity, renal function, and skeletal muscle disorder phenotypes, as well as others.
Molecular Biology, Issue 78, Genetics, Biomedical Engineering, Medicine, Developmental Biology, Biochemistry, Anatomy, Physiology, Bioengineering, Genomics, Medical, zebrafish, in vivo, morpholino, human disease modeling, transcription, PCR, mRNA, DNA, Danio rerio, animal model
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Authors: Julie Gauthier, Fadi F. Hamdan, Guy A. Rouleau.
Institutions: Université de Montréal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1 and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a relatively high and constant prevalence in the worldwide population across a wide range of cultures and countries, despite a strong negative selection against them2. This is not what one would predict in diseases with reduced reproductive fitness, unless there was a high new mutation rate. Finally, the effects of paternal age: there is a significantly increased risk of the disease with increasing paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia3. The male-to-female ratio of mutation rate is estimated at about 4–6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males4. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why disease associations have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process of determining whether the genetic basis of a disease may result in part from de novo mutations, and the molecular approach used to establish this link, will be illustrated using autism and schizophrenia as examples.
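At its simplest, the targeted molecular approach reduces to a trio comparison: variants called in the affected child but absent from both parents are de novo candidates (real pipelines add quality filters, Sanger validation, and paternity checks). A toy sketch with made-up genotype calls:

```python
# Toy trio filter: report child variants carrying an allele seen in
# neither parent. Positions missing from a parent dict mean no variant
# was called there. All genotypes are invented.
child  = {101: "A/T", 250: "G/G", 612: "C/G"}
mother = {101: "A/A", 250: "G/G"}
father = {101: "A/A", 612: "C/C"}

def candidate_de_novo(child, mother, father):
    candidates = []
    for pos, gt in child.items():
        alleles = set(gt.split("/"))
        inherited = set(mother.get(pos, "").split("/")) | \
                    set(father.get(pos, "").split("/"))
        if alleles - inherited:  # an allele seen in neither parent
            candidates.append((pos, gt))
    return candidates

print(candidate_de_novo(child, mother, father))
# -> [(101, 'A/T'), (612, 'C/G')]
```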
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed with a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Pyrosequencing: A Simple Method for Accurate Genotyping
Authors: Cristi King, Tiffany Scott-Horton.
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing® is one of the most thorough yet simple methods to date for analyzing polymorphisms. It can also identify tri-allelic polymorphisms, indels, and short-repeat polymorphisms, and determine allele percentages for methylation or pooled-sample assessment. In addition, a standardized control sequence provides internal quality control. This method has enabled rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
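The matching algorithm is not disclosed on this page; a common baseline for this kind of abstract-to-video text matching is TF-IDF weighting scored by cosine similarity. The sketch below is purely illustrative of that baseline, not a description of JoVE's system.

```python
# Illustrative text-matching baseline: TF-IDF vectors scored by cosine
# similarity. Not JoVE's actual algorithm; titles are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = ["genotyping single nucleotide polymorphisms with bead arrays"]
videos = [
    "high-throughput SNP genotyping using a BeadChip assay",
    "measuring aortic stiffness in vivo in mice using ultrasound",
]

vec = TfidfVectorizer().fit(abstracts + videos)
scores = cosine_similarity(vec.transform(abstracts), vec.transform(videos))
print(scores)  # higher score = more closely related video
```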

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.