Pubmed Article
Statistical inference of selection and divergence from a time-dependent Poisson random field model.
We apply a recently developed time-dependent Poisson random field model to aligned DNA sequences from two related biological species to estimate selection coefficients and divergence time. We use Markov chain Monte Carlo methods to estimate species divergence time and selection coefficients for each locus. The model assumes that the selective effects of non-synonymous mutations are normally distributed across genetic loci but constant within loci, and that synonymous mutations are selectively neutral. In contrast with previous models, we do not assume that the individual species are at population equilibrium after divergence. Using a data set of 91 genes in two Drosophila species, D. melanogaster and D. simulans, we estimate the species divergence time t_div = 2.16 N_e (or 1.68 million years, assuming a haploid effective population size N_e = 6.45 x 10^5) and a mean selection coefficient per generation γ̄ = 1.98/N_e. Although the average selection coefficient is positive, the magnitude of the selection is quite small. Results from numerical simulations are also presented as an accuracy check for the time-dependent model.
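The MCMC estimation step can be illustrated with a minimal random-walk Metropolis sampler. This is a toy sketch under a deliberately simplified likelihood (a plain Poisson model for per-locus mutation counts, with a hypothetical rate as a function of the selection coefficient gamma), not the authors' actual time-dependent Poisson random field likelihood:

```python
import math
import random

def log_likelihood(gamma, counts, mu=5.0):
    """Toy Poisson log-likelihood: the expected count scales with a
    hypothetical function of the selection coefficient gamma.
    (Illustrative stand-in for the full time-dependent PRF model.)"""
    lam = mu * math.exp(gamma)
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def metropolis(counts, n_iter=20000, step=0.1, seed=1):
    """Random-walk Metropolis sampler for gamma under a flat prior."""
    rng = random.Random(seed)
    gamma = 0.0
    ll = log_likelihood(gamma, counts)
    samples = []
    for _ in range(n_iter):
        prop = gamma + rng.gauss(0.0, step)
        ll_prop = log_likelihood(prop, counts)
        if math.log(rng.random()) < ll_prop - ll:   # accept/reject
            gamma, ll = prop, ll_prop
        samples.append(gamma)
    return samples

counts = [7, 5, 9, 6, 8]                 # synthetic per-locus counts
post = metropolis(counts)[5000:]         # drop burn-in
est = sum(post) / len(post)              # posterior mean of gamma
```

With a flat prior, the samples concentrate near the maximum-likelihood value of gamma; the real analysis replaces the toy likelihood with the time-dependent PRF sampling distribution over all loci.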
Authors: Stéphanie Beaucourt, Antonio V. Bordería, Lark L. Coffey, Nina F. Gnädig, Marta Sanz-Ramos, Yasnee Beeharry, Marco Vignuzzi.
Published: 06-16-2011
RNA viruses use RNA dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measure of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Easy Measurement of Diffusion Coefficients of EGFP-tagged Plasma Membrane Proteins Using k-Space Image Correlation Spectroscopy
Authors: Eva C. Arnspang, Jennifer S. Koffman, Saw Marlar, Paul W. Wiseman, Lene N. Nejsum.
Institutions: Aarhus University, McGill University.
Lateral diffusion and compartmentalization of plasma membrane proteins are tightly regulated in cells and thus, studying these processes will reveal new insights to plasma membrane protein function and regulation. Recently, k-Space Image Correlation Spectroscopy (kICS)1 was developed to enable routine measurements of diffusion coefficients directly from images of fluorescently tagged plasma membrane proteins, that avoided systematic biases introduced by probe photophysics. Although the theoretical basis for the analysis is complex, the method can be implemented by nonexperts using a freely available code to measure diffusion coefficients of proteins. kICS calculates a time correlation function from a fluorescence microscopy image stack after Fourier transformation of each image to reciprocal (k-) space. Subsequently, circular averaging, natural logarithm transform and linear fits to the correlation function yields the diffusion coefficient. This paper provides a step-by-step guide to the image analysis and measurement of diffusion coefficients via kICS. First, a high frame rate image sequence of a fluorescently labeled plasma membrane protein is acquired using a fluorescence microscope. Then, a region of interest (ROI) avoiding intracellular organelles, moving vesicles or protruding membrane regions is selected. The ROI stack is imported into a freely available code and several defined parameters (see Method section) are set for kICS analysis. The program then generates a "slope of slopes" plot from the k-space time correlation functions, and the diffusion coefficient is calculated from the slope of the plot. Below is a step-by-step kICS procedure to measure the diffusion coefficient of a membrane protein using the renal water channel aquaporin-3 tagged with EGFP as a canonical example.
Biophysics, Issue 87, Amino Acids, Peptides and Proteins, Computer Programming and Software, Diffusion coefficient, Aquaporin-3, k-Space Image Correlation Spectroscopy, Analysis
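The kICS fitting steps described above (Fourier transform, circular averaging, natural logarithm transform, linear fits) reduce to two rounds of straight-line fitting. The sketch below assumes an idealized, noise-free correlation function that decays as exp(-k²Dτ); the real analysis computes this correlation from the microscopy image stack:

```python
import numpy as np

D_true = 0.25                      # hypothetical diffusion coeff. (um^2/s)
k2 = np.linspace(0.5, 5.0, 10)     # circularly averaged |k|^2 values
tau = np.arange(1, 21) * 0.05      # lag times (s)

# Idealized kICS time correlation: r(k^2, tau) = exp(-k^2 * D * tau)
corr = np.exp(-np.outer(k2, tau) * D_true)

# Step 1: for each k^2, the slope of ln(corr) vs tau equals -k^2 * D
slopes = np.array([np.polyfit(tau, np.log(corr[i]), 1)[0]
                   for i in range(len(k2))])

# Step 2: the "slope of slopes" -- slope of those slopes vs k^2 -- is -D
D_est = -np.polyfit(k2, slopes, 1)[0]
```

On real data the two fits are performed over the k and tau ranges where the correlation is well above the noise floor; here the recovery is exact because the input is noiseless.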
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorizing the expected read density profiles as either punctate or diffuse and then applying the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a recent advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment.
Our analysis revealed that our Bayesian Change Point (BCP) algorithm had reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters, illustrating both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a useful tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
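The idea of defining enrichment segments via change points in read density can be illustrated with a toy single-change-point search under a two-segment Poisson model. This is a minimal stand-in for BCP's full Bayesian multi-change-point machinery, using hypothetical counts:

```python
import math

def poisson_loglik(counts):
    """Maximized log-likelihood of counts under one Poisson rate
    (terms constant across candidate splits are dropped)."""
    n, s = len(counts), sum(counts)
    if s == 0:
        return 0.0
    lam = s / n
    return s * math.log(lam) - n * lam

def best_changepoint(density):
    """Find the single split that best explains the read density as
    two Poisson segments; returns the split index, or None if no
    split improves on a single segment."""
    best, best_gain = None, 0.0
    base = poisson_loglik(density)
    for t in range(1, len(density)):
        gain = poisson_loglik(density[:t]) + poisson_loglik(density[t:]) - base
        if gain > best_gain:
            best, best_gain = t, gain
    return best

# Synthetic profile: background reads, then an enriched island
density = [2, 1, 3, 2, 1, 9, 11, 10, 12, 9]
cp = best_changepoint(density)      # split at the start of the island
```

BCP generalizes this idea to an unknown number of change points with posterior inference over segment boundaries and levels, rather than a single maximum-likelihood split.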
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is not by any means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that could be incorporated into a classroom or teaching lab.
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
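As a minimal, self-contained illustration of tree building, the sketch below clusters toy aligned sequences by p-distance using UPGMA, one of the simpler distance-based alternatives to the maximum likelihood and Bayesian methods covered in the protocol:

```python
def p_distance(a, b):
    """Proportion of mismatched sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(seqs):
    """Tiny UPGMA (average-linkage) clustering on p-distances;
    returns the tree as nested tuples of sequence names."""
    clusters = {name: [name] for name in seqs}
    trees = {name: name for name in seqs}
    dist = {(i, j): p_distance(seqs[i], seqs[j])
            for i in seqs for j in seqs if i < j}
    while len(clusters) > 1:
        i, j = min(dist, key=dist.get)          # closest pair of clusters
        new = i + "+" + j
        trees[new] = (trees[i], trees[j])
        members = clusters[i] + clusters[j]
        # average-linkage distance from the merged cluster to the rest
        for k in list(clusters):
            if k in (i, j):
                continue
            pair_dists = [p_distance(seqs[a], seqs[b])
                          for a in members for b in clusters[k]]
            dist[tuple(sorted((new, k)))] = sum(pair_dists) / len(pair_dists)
        for gone in (i, j):
            del clusters[gone]
        dist = {p: d for p, d in dist.items() if i not in p and j not in p}
        clusters[new] = members
    return trees[new]

seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "TCGAACGT", "D": "TCGAACGA"}
tree = upgma(seqs)        # groups the two similar pairs together
```

Real analyses use model-based distances and the dedicated software described in the protocol; this sketch only conveys the shape of the computation.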
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
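The first analysis step, estimating local tissue orientation with a bank of Gabor filters, can be sketched as follows. The kernel parameters and the synthetic test patch are illustrative choices, not the protocol's tuned values:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    """Complex (quadrature) Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def dominant_orientation(patch, n_angles=8):
    """Angle whose Gabor response magnitude is largest (phase-invariant)."""
    angles = [k * np.pi / n_angles for k in range(n_angles)]
    responses = [abs(np.sum(patch * gabor_kernel(patch.shape[0], a)))
                 for a in angles]
    return angles[int(np.argmax(responses))]

# Synthetic patch with vertical stripes (intensity varies along x only),
# matching the kernel's 4-pixel wavelength
size = 15
x = np.arange(size)
patch = np.tile(np.cos(2 * np.pi * x / 4.0), (size, 1))
theta = dominant_orientation(patch)
```

In the full method, the per-pixel orientation field produced this way feeds the phase-portrait analysis that flags node-like radiating patterns.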
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides us a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex onto the M/EEG sensors. 
Next, we will step through the processes required to denoise the sensor data of environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable us to make scientific inferences across a subject population (i.e., perform group-level analysis) based on a common cortical coordinate space.
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
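The minimum-norm inverse operator that links sensor data to distributed cortical sources has a closed form, M = R Gᵀ(G R Gᵀ + λ²C)⁻¹. The sketch below applies it to random toy matrices; in practice G comes from the MRI-derived forward model and C from the measured noise covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, n_sources = 8, 20
G = rng.standard_normal((n_sensors, n_sources))   # forward (gain) matrix
R = np.eye(n_sources)                             # source covariance prior
C = np.eye(n_sensors)                             # noise covariance
lam2 = 0.1                                        # regularization (lambda^2)

# Minimum-norm inverse operator: M = R G^T (G R G^T + lambda^2 C)^(-1)
M = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam2 * C)

# Apply it to a simulated measurement from one active dipole
x_true = np.zeros(n_sources)
x_true[3] = 1.0
y = G @ x_true + 0.01 * rng.standard_normal(n_sensors)
x_hat = M @ y                                     # distributed source estimate
```

Because there are far more sources than sensors, the estimate spreads activity over correlated source locations; this is why the protocol emphasizes anatomical constraints and noise-covariance estimation before applying the operator.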
Assessing Differences in Sperm Competitive Ability in Drosophila
Authors: Shu-Dan Yeh, Carolus Chan, José M. Ranz.
Institutions: University of California, Irvine.
Competition among conspecific males for fertilizing the ova is one of the mechanisms of sexual selection, i.e. selection that operates on maximizing the number of successful mating events rather than on maximizing survival and viability 1. Sperm competition represents the competition between males after copulating with the same female 2, in which their sperm are coincidental in time and space. This phenomenon has been reported in multiple species of plants and animals 3. For example, wild-caught D. melanogaster females usually contain sperm from 2-3 males 4. The sperm are stored in specialized organs with limited storage capacity, which might lead to the direct competition of the sperm from different males 2,5. Comparing sperm competitive ability of different males of interest (experimental male types) has been performed through controlled double-mating experiments in the laboratory 6,7. Briefly, a single female is exposed to two different males consecutively, one experimental male and one cross-mating reference male. The same mating scheme is then followed using other experimental male types thus facilitating the indirect comparison of the competitive ability of their sperm through a common reference. The fraction of individuals fathered by the experimental and reference males is identified using markers, which allows one to estimate sperm competitive ability using simple mathematical expressions 7,8. In addition, sperm competitive ability can be estimated in two different scenarios depending on whether the experimental male is second or first to mate (offense and defense assay, respectively) 9, which is assumed to be reflective of different competence attributes. Here, we describe an approach that helps to interrogate the role of different genetic factors that putatively underlie the phenomenon of sperm competitive ability in D. melanogaster.
Developmental Biology, Issue 78, Molecular Biology, Cellular Biology, Genetics, Biochemistry, Spermatozoa, Drosophila melanogaster, Biological Evolution, Phenotype, genetics (animal and plant), animal biology, double-mating experiment, sperm competitive ability, male fertility, Drosophila, fruit fly, animal model
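Scoring the double-mating assay reduces to computing the experimental male's paternity share from marker-typed progeny counts. A minimal sketch with hypothetical counts (the published analyses use more elaborate estimators7,8):

```python
def paternity_share(n_experimental, n_reference):
    """Fraction of progeny sired by the experimental male, scored via a
    visible marker distinguishing the reference male's offspring."""
    total = n_experimental + n_reference
    if total == 0:
        raise ValueError("no progeny scored")
    return n_experimental / total

# Offense assay: the experimental male mates second; its share is P2.
p2 = paternity_share(n_experimental=84, n_reference=16)

# Defense assay: the experimental male mates first; its share is P1.
p1 = paternity_share(n_experimental=22, n_reference=78)
```

Comparing P1 and P2 across experimental male types, all against the same reference male, allows the indirect comparison of sperm competitive ability described above.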
Experimental Manipulation of Body Size to Estimate Morphological Scaling Relationships in Drosophila
Authors: R. Craig Stillwell, Ian Dworkin, Alexander W. Shingleton, W. Anthony Frankino.
Institutions: University of Houston, Michigan State University.
The scaling of body parts is a central feature of animal morphology1-7. Within species, morphological traits need to be correctly proportioned to the body for the organism to function; larger individuals typically have larger body parts and smaller individuals generally have smaller body parts, such that overall body shape is maintained across a range of adult body sizes. The requirement for correct proportions means that individuals within species usually exhibit low variation in relative trait size. In contrast, relative trait size can vary dramatically among species and is a primary mechanism by which morphological diversity is produced. Over a century of comparative work has established these intra- and interspecific patterns3,4. Perhaps the most widely used approach to describe this variation is to calculate the scaling relationship between the size of two morphological traits using the allometric equation y=bxα, where x and y are the size of the two traits, such as organ and body size8,9. This equation describes the within-group (e.g., species, population) scaling relationship between two traits as both vary in size. Log-transformation of this equation produces a simple linear equation, log(y) = log(b) + αlog(x) and log-log plots of the size of different traits among individuals of the same species typically reveal linear scaling with an intercept of log(b) and a slope of α, called the 'allometric coefficient'9,10. Morphological variation among groups is described by differences in scaling relationship intercepts or slopes for a given trait pair. Consequently, variation in the parameters of the allometric equation (b and α) elegantly describes the shape variation captured in the relationship between organ and body size within and among biological groups (see 11,12). 
Not all traits scale linearly with each other or with body size (e.g., 13,14). Hence, morphological scaling relationships are most informative when the data are taken from the full range of trait sizes. Here we describe how simple experimental manipulation of diet can be used to produce the full range of body size in insects. This permits an estimation of the full scaling relationship for any given pair of traits, allowing a complete description of how shape covaries with size and a robust comparison of scaling relationship parameters among biological groups. Although we focus on Drosophila, our methodology should be applicable to nearly any fully metamorphic insect.
Developmental Biology, Issue 56, Drosophila, allometry, morphology, body size, scaling, insect
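Estimating the allometric coefficient from trait data amounts to an ordinary least-squares fit on log-transformed sizes, following log(y) = log(b) + αlog(x). A minimal sketch with simulated trait pairs (note that some allometry studies prefer reduced major axis regression; OLS is used here for simplicity):

```python
import math
import random

random.seed(42)

# Simulate trait pairs obeying y = b * x^alpha with multiplicative noise
b_true, alpha_true = 0.5, 1.2
xs = [random.uniform(1.0, 10.0) for _ in range(200)]    # body size
ys = [b_true * x**alpha_true * math.exp(random.gauss(0, 0.05)) for x in xs]

# Log-transform, then ordinary least squares: the slope is the
# allometric coefficient alpha, the intercept is log(b)
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
alpha = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx)**2 for u in lx))
b = math.exp(my - alpha * mx)
```

Because the simulated x values span the full 1-10 range, the fit recovers α and b accurately; restricting x to a narrow band, as happens without the dietary manipulation described above, would inflate the uncertainty of both parameters.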
Mapping Bacterial Functional Networks and Pathways in Escherichia coli using Synthetic Genetic Arrays
Authors: Alla Gagarinova, Mohan Babu, Jack Greenblatt, Andrew Emili.
Institutions: University of Toronto, University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GI)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9, 10. Here, we present the key steps required to perform the quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format. Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (high frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9, 12, 13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and the ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics.
After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2 as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression
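GI scoring against the multiplicative expectation can be sketched in a few lines; the fitness values below are hypothetical normalized colony sizes, not data from the screen:

```python
def gi_score(w_double, w_a, w_b):
    """Multiplicative genetic-interaction score:
    epsilon = W_ab - W_a * W_b, with fitness normalized to wild type = 1.
    Negative epsilon -> aggravating (synthetic sick/lethal);
    positive epsilon -> alleviating (same pathway or complex)."""
    return w_double - w_a * w_b

# Hypothetical normalized colony sizes for one gene pair
eps_aggravating = gi_score(w_double=0.05, w_a=0.9, w_b=0.8)  # far below 0
eps_neutral     = gi_score(w_double=0.72, w_a=0.9, w_b=0.8)  # about 0
eps_alleviating = gi_score(w_double=0.90, w_a=0.9, w_b=0.8)  # above 0
```

In the screen itself, the single- and double-mutant fitness values come from the automated colony-size quantification, and the scores are assessed for statistical significance across replicates before a GI call is made.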
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
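The DoE workflow begins by enumerating candidate combinations of factor levels, from which software then selects an optimal subset of runs. A toy sketch of the enumeration step only; the factor names and levels here are hypothetical, not those used in the study:

```python
from itertools import product

# Hypothetical two-level factors for a transient-expression experiment.
factors = {
    "promoter": ["P1", "P2"],
    "incubation_temp_C": [22, 25],
    "plant_age_days": [35, 42],
}

# Full factorial design: every combination of factor levels. DoE software
# would typically select a fraction of these runs and augment stepwise.
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

With three two-level factors this yields 2^3 = 8 runs; fractional designs and step-wise augmentation, as described in the abstract, reduce the experimental burden further.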
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Play Button
Monitoring Intraspecies Competition in a Bacterial Cell Population by Cocultivation of Fluorescently Labelled Strains
Authors: Lorena Stannek, Richard Egelkamp, Katrin Gunka, Fabian M. Commichau.
Institutions: Georg-August University.
Many microorganisms, such as bacteria, proliferate extremely fast, and their populations can reach high cell densities. A small fraction of cells in a population will always have accumulated mutations that are either detrimental or beneficial for the cell. If a mutation provides the subpopulation with a strong selective growth advantage, the individuals of this subpopulation may rapidly outcompete and even completely eliminate the rest of the population. Thus, small genetic changes and the selection-driven accumulation of cells that have acquired beneficial mutations can lead to a complete shift in the genotype of a cell population. Here we present a procedure to monitor the rapid clonal expansion and elimination of beneficial and detrimental mutations, respectively, in a bacterial cell population over time by cocultivating fluorescently labeled individuals of the Gram-positive model bacterium Bacillus subtilis. The method is easy to perform and well suited to visualizing intraspecies competition among the individuals in a bacterial cell population.
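Fitness differences in such a cocultivation are often summarized as a per-generation selection coefficient computed from the change in the ratio of the two labelled strains. A minimal sketch using the standard competition-experiment estimate (not necessarily the authors' exact analysis):

```python
import math

def selection_coefficient(ratio_start, ratio_end, generations):
    """Per-generation selection coefficient of strain A relative to strain B,
    from the cell-count ratio R = N_A / N_B (measured here by fluorescence)
    at the start and end of cocultivation: s = ln(R_end / R_start) / generations."""
    return math.log(ratio_end / ratio_start) / generations
```

A positive s means the first strain is outcompeting the second; with a strongly beneficial mutation, the labelled subpopulation's ratio grows exponentially until it dominates the culture.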
Cellular Biology, Issue 83, Bacillus subtilis, evolution, adaptation, selective pressure, beneficial mutation, intraspecies competition, fluorophore-labelling, Fluorescence Microscopy
Play Button
Magnetic Tweezers for the Measurement of Twist and Torque
Authors: Jan Lipfert, Mina Lee, Orkide Ordu, Jacob W. J. Kerssemakers, Nynke H. Dekker.
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
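In the magnetic torque tweezers mode, torque is typically inferred from the bead's angular fluctuations: equipartition gives the angular trap stiffness, and the shift of the mean rotation angle gives the stored molecular torque. A sketch of that standard analysis; the thermal-energy constant and inputs are illustrative:

```python
KBT_PN_NM = 4.1  # thermal energy at room temperature, in pN*nm

def stored_torque(angle_variance_rad2, mean_angle_shift_rad):
    """Angular trap stiffness from equipartition, k = kBT / var(theta),
    multiplied by the shift in the mean rotation angle, gives the torque
    (in pN*nm) stored in the tethered molecule."""
    k = KBT_PN_NM / angle_variance_rad2
    return k * mean_angle_shift_rad
```

A weak transverse field (small stiffness, large angular fluctuations) gives high torque resolution, which is why the field configuration is tuned between the conventional and freely-orbiting regimes.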
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Play Button
Transgenic Rodent Assay for Quantifying Male Germ Cell Mutant Frequency
Authors: Jason M. O'Brien, Marc A. Beal, John D. Gingerich, Lynda Soper, George R. Douglas, Carole L. Yauk, Francesco Marchetti.
Institutions: Environmental Health Centre.
De novo mutations arise mostly in the male germline and may contribute to adverse health outcomes in subsequent generations. Traditional methods for assessing the induction of germ cell mutations require large numbers of animals, making them impractical. As such, germ cell mutagenicity is rarely assessed during chemical testing and risk assessment. Herein, we describe an in vivo male germ cell mutation assay using a transgenic rodent model that is based on a recently approved Organisation for Economic Co-operation and Development (OECD) test guideline. This method uses an in vitro positive selection assay to measure in vivo mutations induced in a transgenic λgt10 vector bearing a reporter gene directly in the germ cells of exposed males. We further describe how the detection of mutations in the transgene recovered from germ cells can be used to characterize the stage-specific sensitivity of the various spermatogenic cell types to mutagen exposure by controlling three experimental parameters: the duration of exposure (administration time), the time between exposure and sample collection (sampling time), and the cell population collected for analysis. Because a large number of germ cells can be assayed from a single male, this method has superior sensitivity compared with traditional methods and requires fewer animals, and therefore far less time and fewer resources.
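The assay's readout reduces to a simple ratio: mutant plaques recovered from the positive selection over the total plaque-forming units screened, compared between treated and control males. A minimal sketch of that bookkeeping (variable names are illustrative):

```python
def mutant_frequency(mutant_pfu, total_pfu):
    """Mutant frequency: mutant plaques recovered from the positive
    selection divided by the total plaque-forming units screened."""
    if total_pfu <= 0:
        raise ValueError("total_pfu must be positive")
    return mutant_pfu / total_pfu

def fold_induction(treated_mf, control_mf):
    """Fold increase of the treated-group frequency over the spontaneous
    (control) frequency for a given sampling time and cell population."""
    return treated_mf / control_mf
```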
Genetics, Issue 90, sperm, spermatogonia, male germ cells, spermatogenesis, de novo mutation, OECD TG 488, transgenic rodent mutation assay, N-ethyl-N-nitrosourea, genetic toxicology
Play Button
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Play Button
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Authors: Melissa N. Patterson, Patrick H. Maxwell.
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
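Fluctuation-test mutation rates can be estimated in several ways; the simplest is the Luria-Delbrück p0 method, sketched below. The authors may use a different estimator (e.g. Lea-Coulson), so this is illustrative of the principle rather than the paper's exact analysis:

```python
import math

def mutation_rate_p0(n_cultures, n_without_mutants, final_cells_per_culture):
    """Luria-Delbruck p0 method: the expected number of mutations per
    culture is m = -ln(p0), where p0 is the fraction of parallel cultures
    yielding no mutant colonies on selective plates; dividing m by the
    final cell count gives the rate per cell per division."""
    p0 = n_without_mutants / n_cultures
    if not 0 < p0 < 1:
        raise ValueError("p0 method needs some, but not all, cultures mutant-free")
    return -math.log(p0) / final_cells_per_culture
```

Multiplying the young-cell rate by the number of divisions a sorted mother cell has undergone (counted from bud scars) gives the predicted frequency against which the observed mother-cell frequency is compared.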
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
Play Button
Genetic Manipulation in Δku80 Strains for Functional Genomic Analysis of Toxoplasma gondii
Authors: Leah M. Rommereim, Miryam A. Hortua Triana, Alejandra Falla, Kiah L. Sanders, Rebekah B. Guevara, David J. Bzik, Barbara A. Fox.
Institutions: The Geisel School of Medicine at Dartmouth.
Targeted genetic manipulation using homologous recombination is the method of choice for functional genomic analysis to obtain a detailed view of gene function and phenotype(s). The development of mutant strains with targeted gene deletions, targeted mutations, complemented gene function, and/or tagged genes provides powerful strategies to address gene function, particularly if these genetic manipulations can be efficiently targeted to the gene locus of interest using integration mediated by double cross over homologous recombination. Due to very high rates of nonhomologous recombination, functional genomic analysis of Toxoplasma gondii has been previously limited by the absence of efficient methods for targeting gene deletions and gene replacements to specific genetic loci. Recently, we abolished the major pathway of nonhomologous recombination in type I and type II strains of T. gondii by deleting the gene encoding the KU80 protein1,2. The Δku80 strains behave normally during tachyzoite (acute) and bradyzoite (chronic) stages in vitro and in vivo and exhibit essentially a 100% frequency of homologous recombination. The Δku80 strains make functional genomic studies feasible on the single gene as well as on the genome scale1-4. Here, we report methods for using type I and type II Δku80Δhxgprt strains to advance gene targeting approaches in T. gondii. We outline efficient methods for generating gene deletions, gene replacements, and tagged genes by targeted insertion or deletion of the hypoxanthine-xanthine-guanine phosphoribosyltransferase (HXGPRT) selectable marker. The described gene targeting protocol can be used in a variety of ways in Δku80 strains to advance functional analysis of the parasite genome and to develop single strains that carry multiple targeted genetic manipulations. The application of this genetic method and subsequent phenotypic assays will reveal fundamental and unique aspects of the biology of T. gondii and related significant human pathogens that cause malaria (Plasmodium sp.) and cryptosporidiosis (Cryptosporidium).
Infectious Diseases, Issue 77, Genetics, Microbiology, Infection, Medicine, Immunology, Molecular Biology, Cellular Biology, Biomedical Engineering, Bioengineering, Genomics, Parasitology, Pathology, Apicomplexa, Coccidia, Toxoplasma, Genetic Techniques, Gene Targeting, Eukaryota, Toxoplasma gondii, genetic manipulation, gene targeting, gene deletion, gene replacement, gene tagging, homologous recombination, DNA, sequencing
Play Button
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
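The quoted ~10-30 nm precision follows, to leading order, from photon statistics: the localization error of a single emitter scales as the point-spread-function width divided by the square root of the number of detected photons. A minimal sketch of that relation, neglecting background and pixelation corrections:

```python
import math

def localization_precision_nm(psf_sd_nm, photons_detected):
    """Leading-order localization precision for a single fluorophore:
    sigma ~ s / sqrt(N), where s is the PSF standard deviation and N is
    the number of detected photons (background and pixel noise neglected)."""
    return psf_sd_nm / math.sqrt(photons_detected)
```

For a ~100 nm PSF standard deviation, a few hundred detected photons per molecule already brings the precision into the ~10 nm regime quoted above.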
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Play Button
Electroporation of Mycobacteria
Authors: Renan Goude, Tanya Parish.
Institutions: Barts and the London School of Medicine and Dentistry, Barts and the London School of Medicine and Dentistry.
High efficiency transformation is a major limitation in the study of mycobacteria. The genus Mycobacterium can be difficult to transform; this is mainly caused by the thick and waxy cell wall, but is compounded by the fact that most molecular techniques have been developed for distantly-related species such as Escherichia coli and Bacillus subtilis. In spite of these obstacles, mycobacterial plasmids have been identified, and DNA transformation of many mycobacterial species has now been described. The most successful method for introducing DNA into mycobacteria is electroporation. Many parameters contribute to successful transformation; these include the species/strain, the nature of the transforming DNA, the selectable marker used, the growth medium, and the conditions for the electroporation pulse. Optimized methods for the transformation of both slow- and fast-growing species are detailed here. Transformation efficiencies for different mycobacterial species and with various selectable markers are reported.
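Transformation efficiencies are conventionally reported as transformants per microgram of DNA, corrected for any dilution and for the fraction of the recovery volume actually plated. A minimal sketch of that bookkeeping (parameter names are illustrative):

```python
def transformation_efficiency(colonies, dilution_factor,
                              volume_plated_ml, total_volume_ml, dna_ug):
    """Transformants per microgram of DNA: colony count scaled by the
    dilution and by the fraction of the recovery volume that was plated."""
    cfu_total = colonies * dilution_factor * (total_volume_ml / volume_plated_ml)
    return cfu_total / dna_ug
```

For example, 50 colonies from a 100-fold dilution, plating 0.1 ml of a 1 ml recovery, with 0.5 µg of DNA, corresponds to 1 x 10^5 transformants/µg.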
Microbiology, Issue 15, Springer Protocols, Mycobacteria, Electroporation, Bacterial Transformation, Transformation Efficiency, Bacteria, Tuberculosis, M. Smegmatis, Springer Protocols
Play Button
A Rapid Technique for the Visualization of Live Immobilized Yeast Cells
Authors: Karl Zawadzki, James Broach.
Institutions: Princeton University.
We present here a simple, rapid, and extremely flexible technique for the immobilization and visualization of growing yeast cells by epifluorescence microscopy. The technique is equally suited to visualization of static yeast populations or time course experiments up to ten hours in length. My microscopy investigates epigenetic inheritance at the silent mating loci in S. cerevisiae. There are two silent mating loci, HML and HMR, which are normally not expressed because they are packaged in heterochromatin. In the sir1 mutant background, silencing is weakened such that each locus can be in either the expressed or the silenced epigenetic state, so the population as a whole contains a mix of cells of different epigenetic states for both HML and HMR. My microscopy demonstrated that there is no relationship between the epigenetic state of HML and HMR in an individual cell. sir1 cells stochastically switch epigenetic states, establishing silencing at a previously expressed locus or expressing a previously silenced locus. My time course microscopy tracked individual sir1 cells and their offspring to score the frequency of each of the four possible epigenetic switches, and thus the stability of each of the epigenetic states in sir1 cells. See also Xu et al., Mol. Cell 2006.
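Scoring the time-course data amounts to counting, across tracked cell divisions, how often each switch type occurs. A toy sketch of that tallying step, with hypothetical event labels (the actual scoring categories are described in the paper):

```python
from collections import Counter

def switch_frequencies(events):
    """Fraction of scored cell divisions in each category, where an event
    label names the locus and direction of an epigenetic switch
    (e.g. 'HML_silenced', 'HMR_expressed') or 'none' for no switch."""
    counts = Counter(events)
    total = len(events)
    return {event: n / total for event, n in counts.items()}
```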
Microbiology, Issue 1, yeast, HML, HMR, epigenetic, loci, silencing, cerevisiae
Play Button
Quantitative Comparison of cis-Regulatory Element (CRE) Activities in Transgenic Drosophila melanogaster
Authors: William A. Rogers, Thomas M. Williams.
Institutions: University of Dayton, University of Dayton.
Gene expression patterns are specified by cis-regulatory element (CRE) sequences, which are also called enhancers or cis-regulatory modules. A typical CRE possesses an arrangement of binding sites for several transcription factor proteins that confer a regulatory logic specifying when, where, and at what level the regulated gene(s) is expressed. The full set of CREs within an animal genome encodes the organism's program for development1, and empirical as well as theoretical studies indicate that mutations in CREs played a prominent role in morphological evolution2-4. Moreover, human genome-wide association studies indicate that genetic variation in CREs contributes substantially to phenotypic variation5,6. Thus, understanding regulatory logic and how mutations affect such logic is a central goal of genetics. Reporter transgenes provide a powerful method to study the in vivo function of CREs. Here, a known or suspected CRE sequence is coupled to a heterologous promoter and the coding sequence for a reporter gene encoding an easily observable protein product. When a reporter transgene is inserted into a host organism, the CRE's activity becomes visible in the form of the encoded reporter protein. P-element-mediated transgenesis in the fruit fly species Drosophila (D.) melanogaster7 has been used for decades to introduce reporter transgenes into this model organism, though the genomic placement of transgenes is random. Hence, reporter gene activity is strongly influenced by the local chromatin and gene environment, limiting CRE comparisons to being qualitative. In recent years, the phiC31-based integration system was adapted for use in D. melanogaster to insert transgenes into specific genomic landing sites8-10. This capability has made the quantitative measurement of gene and, relevant here, CRE activity11-13 feasible.
The production of transgenic fruit flies can be outsourced, including phiC31-based integration, eliminating the need to purchase expensive equipment and/or have proficiency at specialized transgene injection protocols. Here, we present a general protocol to quantitatively evaluate a CRE′s activity, and show how this approach can be used to measure the effects of an introduced mutation on a CRE′s activity and to compare the activities of orthologous CREs. Although the examples given are for a CRE active during fruit fly metamorphosis, the approach can be applied to other developmental stages, fruit fly species, or model organisms. Ultimately, a more widespread use of this approach to study CREs should advance an understanding of regulatory logic and how logic can vary and evolve.
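Once transgenes land at the same genomic site, CRE activities can be compared as ratios of mean reporter fluorescence measured under identical confocal settings. A minimal sketch of that comparison logic (not the authors' exact analysis pipeline):

```python
from statistics import mean

def relative_cre_activity(test_intensities, reference_intensities):
    """Mean reporter fluorescence of a test CRE line (e.g. a mutated or
    orthologous CRE) relative to a reference (e.g. wild-type) line, with
    both transgenes integrated at the same phiC31 landing site and imaged
    under identical confocal settings."""
    return mean(test_intensities) / mean(reference_intensities)
```

A ratio below 1.0 indicates that the introduced mutation (or the orthologous CRE) drives weaker reporter expression than the reference; with random P-element insertion, this kind of quantitative comparison would be confounded by position effects.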
Developmental Biology, Issue 58, Cis-regulatory element, CRE, cis-regulatory module, enhancer, site-specific integration, reporter transgenes, confocal microscopy, regulatory logic, transcription factors, binding sites, Drosophila melanogaster, Drosophila
Play Button
Characterizing Herbivore Resistance Mechanisms: Spittlebugs on Brachiaria spp. as an Example
Authors: Soroush Parsa, Guillermo Sotelo, Cesar Cardona.
Institutions: CIAT.
Plants can resist herbivore damage through three broad mechanisms: antixenosis, antibiosis and tolerance1. Antixenosis is the degree to which the plant is avoided when the herbivore is able to select other plants2. Antibiosis is the degree to which the plant affects the fitness of the herbivore feeding on it1. Tolerance is the degree to which the plant can withstand or repair damage caused by the herbivore, without compromising the herbivore's growth and reproduction1. The durability of herbivore resistance in an agricultural setting depends to a great extent on the resistance mechanism favored during crop breeding efforts3. We demonstrate a no-choice experiment designed to estimate the relative contributions of antibiosis and tolerance to spittlebug resistance in Brachiaria spp. Several species of African grasses of the genus Brachiaria are valuable forage and pasture plants in the Neotropics, but they can be severely challenged by several native species of spittlebugs (Hemiptera: Cercopidae)4. To assess their resistance to spittlebugs, plants are vegetatively propagated by stem cuttings and allowed to grow for approximately one month, allowing the growth of superficial roots on which spittlebugs can feed. At that point, each test plant is individually challenged with six spittlebug eggs near hatching. Infestations are allowed to progress for one month before evaluating plant damage and insect survival. Scoring plant damage provides an estimate of tolerance, while scoring insect survival provides an estimate of antibiosis. This protocol has facilitated our plant breeding objective to enhance spittlebug resistance in commercial brachiariagrasses5.
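With six eggs per plant, antibiosis can be summarized as the mean proportion of spittlebugs surviving across replicate plants, and tolerance as a mean damage rating. A minimal sketch; the 0-5 damage scale is a hypothetical example, not necessarily the scale used in the protocol:

```python
from statistics import mean

def antibiosis_index(survivors_per_plant, eggs_per_plant=6):
    """Mean proportion of spittlebugs surviving per plant; lower values
    indicate stronger antibiosis. Six eggs per plant, as in the protocol."""
    return mean(n / eggs_per_plant for n in survivors_per_plant)

def tolerance_index(damage_scores, max_score=5):
    """Mean visual damage rating rescaled to 0-1 (scale is hypothetical);
    lower values indicate stronger tolerance."""
    return mean(s / max_score for s in damage_scores)
```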
Plant Biology, Issue 52, host plant resistance, antibiosis, antixenosis, tolerance, Brachiaria, spittlebugs
Play Button
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Authors: Gauthier Julie, Fadi F. Hamdan, Guy A. Rouleau.
Institutions: Universite de Montreal, Universite de Montreal, Universite de Montreal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1 and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant relatively high prevalence in the worldwide population across a wide range of cultures and countries despite a strong negative selection against them2. 
This is not what one would predict for diseases with reduced reproductive fitness, unless there were a high rate of new mutations. Finally, there are the effects of paternal age: there is a significantly increased risk of disease with increasing paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia3. The male-to-female ratio of the mutation rate is estimated at about 4–6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males4. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why disease associations have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining whether the genetic basis of a disease may result in part from de novo mutations, and the molecular approach to establish this link, will be illustrated using autism and schizophrenia as examples.
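The mutational-load argument above is the classic mutation-selection balance: at equilibrium, the frequency of a deleterious allele reflects the ratio of the de novo mutation rate to the strength of selection against it. A sketch of the textbook approximation for a dominant deleterious allele (the numbers below are illustrative, not disease-specific estimates):

```python
def equilibrium_frequency(mu, s):
    """Mutation-selection balance for a dominant deleterious allele:
    q_hat ~= mu / s, where mu is the per-generation de novo mutation
    rate at the locus and s is the selection coefficient against carriers."""
    if not 0 < s <= 1:
        raise ValueError("selection coefficient must be in (0, 1]")
    return mu / s
```

This balance is why a disorder can persist at a stable prevalence despite strong negative selection: the alleles removed by reduced reproductive fitness are continually replaced by new mutations.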
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
Play Button
Enrichment of NK Cells from Human Blood with the RosetteSep Kit from StemCell Technologies
Authors: Christine Beeton, K. George Chandy.
Institutions: University of California, Irvine (UCI).
Natural killer (NK) cells are large granular cytotoxic lymphocytes that belong to the innate immune system and play major roles in fighting cancer and infections, but are also implicated in the early stages of pregnancy and transplant rejection. These cells are present in peripheral blood, from which they can be isolated. Cells can be isolated using either positive or negative selection. For positive selection, we use antibodies directed to a surface marker present only on the cells of interest, whereas for negative selection we use cocktails of antibodies targeted to surface markers present on all cells but the cells of interest. The latter technique has the advantage of leaving the cells of interest free of antibodies, thereby reducing the risk of unwanted cell activation or differentiation. In this video protocol we demonstrate how to separate NK cells from human blood by negative selection, using the RosetteSep kit from StemCell Technologies. The procedure involves obtaining human peripheral blood (under an institutional review board-approved protocol to protect the human subjects) and mixing it with a cocktail of antibodies that will bind to markers absent on NK cells but present on all other mononuclear cells in peripheral blood (e.g., T lymphocytes, monocytes). The antibodies in the cocktail are conjugated to antibodies directed to glycophorin A on erythrocytes. All unwanted cells and red blood cells are therefore trapped in complexes. The mix of blood and antibody cocktail is then diluted, overlaid on a Histopaque gradient, and centrifuged. NK cells (>80% pure) can be collected at the interface between the Histopaque and the diluted plasma. Similar cocktails are available for the enrichment of other cell populations, such as human T lymphocytes.
Immunology, issue 8, blood, cell isolation, natural killer, lymphocyte, primary cells, negative selection, PBMC, Ficoll gradient, cell separation
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X
