JoVE Visualize
 
Pubmed Article
Robustness analysis of stochastic biochemical systems.
PLoS ONE
PUBLISHED: 01-01-2014
We propose a new framework for rigorous robustness analysis of stochastic biochemical systems that is based on probabilistic model checking techniques. We adapt the general definition of robustness introduced by Kitano to the class of stochastic systems modelled as continuous-time Markov chains in order to extensively analyse and compare the robustness of biological models with uncertain parameters. The framework utilises novel computational methods that enable effective evaluation of the robustness of models with respect to quantitative temporal properties and parameters such as reaction rate constants and initial conditions. We have applied the framework to gene regulation as an example of a central biological mechanism where intrinsic and extrinsic stochasticity plays a crucial role due to low numbers of DNA and RNA molecules. Using our methods we have obtained a comprehensive and precise analysis of stochastic dynamics under parameter uncertainty. Furthermore, we apply our framework to compare several variants of two-component signalling networks from the perspective of robustness with respect to intrinsic noise caused by low populations of signalling components. We have successfully extended previous studies performed on deterministic (ODE) models and showed that stochasticity may significantly affect the resulting predictions. Our case studies demonstrate that the framework can provide deeper insight into the role of key parameters in maintaining system functionality, and thus it contributes significantly to formal methods in computational systems biology.
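To make this kind of analysis concrete, the following Python sketch estimates a simple quantitative temporal property of a stochastic gene-expression model by Gillespie simulation across a range of production rates, approximating a robustness profile over an uncertain parameter. It is only an illustration of the idea, not the authors' CTMC-based probabilistic model checking framework; the model, rate values, property and threshold are all invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def gillespie_mrna(k_prod, k_deg, t_end):
        """Simulate a birth-death mRNA model (0 -> mRNA at k_prod, mRNA -> 0 at k_deg*n)."""
        t, n, n_max = 0.0, 0, 0
        while t < t_end:
            rates = np.array([k_prod, k_deg * n])
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)
            if t >= t_end:
                break
            n += 1 if rng.random() < rates[0] / total else -1
            n_max = max(n_max, n)
        return n_max

    def property_probability(k_prod, k_deg=0.1, t_end=50.0, threshold=25, runs=200):
        """Estimate P(max mRNA count stays below threshold within [0, t_end])."""
        hits = sum(gillespie_mrna(k_prod, k_deg, t_end) < threshold for _ in range(runs))
        return hits / runs

    # Robustness profile: property satisfaction across a perturbed production rate.
    for k in np.linspace(0.5, 3.0, 6):
        print(f"k_prod = {k:.2f}  P(property) ~ {property_probability(k):.2f}")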
Related JoVE Video
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Published: 07-25-2013
ABSTRACT
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
26 Related JoVE Articles!
Measuring the Kinetics of mRNA Transcription in Single Living Cells
Authors: Yehuda Brody, Yaron Shav-Tal.
Institutions: Bar-Ilan University.
The transcriptional activity of RNA polymerase II (Pol II) is a dynamic process and therefore measuring the kinetics of the transcriptional process in vivo is of importance. Pol II kinetics have been measured using biochemical or molecular methods.1-3 In recent years, with the development of new visualization methods, it has become possible to follow transcription as it occurs in real time in single living cells.4 Herein we describe how to perform analysis of Pol II elongation kinetics on a specific gene in living cells.5, 6 Using a cell line in which a specific gene locus (DNA), its mRNA product, and the final protein product can be fluorescently labeled and visualized in vivo, it is possible to detect the actual transcription of mRNAs on the gene of interest.7, 8 The mRNA is fluorescently tagged using the MS2 system for tagging mRNAs in vivo, where the 3'UTR of the mRNA transcripts contain 24 MS2 stem-loop repeats, which provide highly specific binding sites for the YFP-MS2 coat protein that labels the mRNA as it is transcribed.9 To monitor the kinetics of transcription we use the Fluorescence Recovery After Photobleaching (FRAP) method. By photobleaching the YFP-MS2-tagged nascent transcripts at the site of transcription and then following the recovery of this signal over time, we obtain the synthesis rate of the newly made mRNAs.5 In other words, YFP-MS2 fluorescence recovery reflects the generation of new MS2 stem-loops in the nascent transcripts and their binding by fluorescent free YFP-MS2 molecules entering from the surrounding nucleoplasm. The FRAP recovery curves are then analyzed using mathematical mechanistic models formalized by a series of differential equations, in order to retrieve the kinetic time parameters of transcription.
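As a minimal illustration of the final analysis step only (the protocol uses mechanistic models formalized as systems of differential equations, not the single-exponential form used here), the sketch below fits a simple recovery function to a FRAP curve with scipy; the data are synthetic and all parameter values are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def frap_recovery(t, plateau, k):
        """Single-exponential recovery of YFP-MS2 signal after photobleaching."""
        return plateau * (1.0 - np.exp(-k * t))

    # Synthetic recovery curve (seconds vs. normalized fluorescence) for illustration.
    t = np.linspace(0, 300, 60)
    rng = np.random.default_rng(1)
    signal = frap_recovery(t, plateau=0.85, k=0.02) + rng.normal(0, 0.03, t.size)

    (plateau_fit, k_fit), _ = curve_fit(frap_recovery, t, signal, p0=[1.0, 0.01])
    print(f"mobile fraction ~ {plateau_fit:.2f}, recovery rate ~ {k_fit:.3f} 1/s")
    print(f"half-time of recovery ~ {np.log(2) / k_fit:.0f} s")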
Cell Biology, Issue 54, mRNA transcription, nucleus, live-cell imaging, cellular dynamics, FRAP
A Comparative Approach to Characterize the Landscape of Host-Pathogen Protein-Protein Interactions
Authors: Mandy Muller, Patricia Cassonnet, Michel Favre, Yves Jacob, Caroline Demeret.
Institutions: Institut Pasteur, Université Sorbonne Paris Cité, Dana Farber Cancer Institute.
Significant efforts have been made to generate large-scale, comprehensive protein-protein interaction network maps. This is instrumental for understanding pathogen-host relationships and has essentially been performed by genetic screening in yeast two-hybrid systems. The recent improvement of protein-protein interaction detection by a Gaussia luciferase-based fragment complementation assay now offers the opportunity to develop integrative comparative interactomic approaches, which are necessary to rigorously compare the interaction profiles of proteins from different pathogen strain variants against a common set of cellular factors. This paper specifically focuses on the utility of combining two orthogonal methods to generate protein-protein interaction datasets: yeast two-hybrid (Y2H) and a new assay, the high-throughput Gaussia princeps protein complementation assay (HT-GPCA), performed in mammalian cells. A large-scale identification of cellular partners of a pathogen protein is performed by mating-based yeast two-hybrid screening of cDNA libraries using multiple pathogen strain variants. A subset of interacting partners selected on the basis of a high-confidence statistical score is further validated in mammalian cells for pairwise interactions with the whole set of pathogen variant proteins using HT-GPCA. This combination of two complementary methods improves the robustness of the interaction dataset and allows a stringent comparative interaction analysis. Such comparative interactomics constitutes a reliable and powerful strategy for deciphering pathogen-host interplay.
Immunology, Issue 77, Genetics, Microbiology, Biochemistry, Molecular Biology, Cellular Biology, Biomedical Engineering, Infection, Cancer Biology, Virology, Medicine, Host-Pathogen Interactions, Protein-protein interaction, High-throughput screening, Luminescence, Yeast two-hybrid, HT-GPCA, Network, protein, yeast, cell, culture
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Authors: Amir Karniel, Guy Avraham, Bat-Chen Peles, Shelly Levy-Tzedek, Ilana Nisky.
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it the "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG: (i) by calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) by comparing two weighted sums of human and model handshakes, we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) by comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
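A minimal sketch of the psychometric-curve step in methods (ii) and (iii): the proportion of "more human-like" answers is fitted with a logistic function of the human weight in the stimulus, and the PSE is the weight at which that proportion crosses 0.5. The choice proportions below are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(w, pse, slope):
        """Logistic psychometric function: P('more human-like') vs. human weight w."""
        return 1.0 / (1.0 + np.exp(-slope * (w - pse)))

    # Human weight in the weighted-sum stimulus and observed choice proportions (illustrative).
    weights = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    p_human = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])

    (pse, slope), _ = curve_fit(psychometric, weights, p_human, p0=[0.5, 5.0])
    print(f"PSE (weight judged equally human-like) ~ {pse:.2f}")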
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
Wideband Optical Detector of Ultrasound for Medical Imaging Applications
Authors: Amir Rosenthal, Stephan Kellnberger, Murad Omar, Daniel Razansky, Vasilis Ntziachristos.
Institutions: Technical University of Munich and Helmholtz Center Munich.
Optical sensors of ultrasound are a promising alternative to piezoelectric techniques, as has been recently demonstrated in the field of optoacoustic imaging. In medical applications, one of the major limitations of optical sensing technology is its susceptibility to environmental conditions, e.g. changes in pressure and temperature, which may saturate the detection. Additionally, the clinical environment often imposes stringent limits on the size and robustness of the sensor. In this work, the combination of pulse interferometry and fiber-based optical sensing is demonstrated for ultrasound detection. Pulse interferometry enables robust performance of the readout system in the presence of rapid variations in the environmental conditions, whereas the use of all-fiber technology leads to a mechanically flexible sensing element compatible with highly demanding medical applications such as intravascular imaging. In order to achieve a short sensor length, a pi-phase-shifted fiber Bragg grating is used, which acts as a resonator trapping light over an effective length of 350 µm. To enable high bandwidth, the sensor is used for sideway detection of ultrasound, which is highly beneficial in circumferential imaging geometries such as intravascular imaging. An optoacoustic imaging setup is used to determine the response of the sensor for acoustic point sources at different positions.
Bioengineering, Issue 87, Ultrasound, optical sensors, interferometry, pulse interferometry, optical fibers, fiber Bragg gratings, optoacoustic imaging, photoacoustic imaging
Isolation and Culture of Neonatal Mouse Cardiomyocytes
Authors: Elisabeth Ehler, Thomas Moore-Morris, Stephan Lange.
Institutions: King’s College London, University of California San Diego.
Cultured neonatal cardiomyocytes have long been used to study myofibrillogenesis and myofibrillar functions. Cultured cardiomyocytes allow for easy investigation and manipulation of biochemical pathways, and their effect on the biomechanical properties of spontaneously beating cardiomyocytes. The following 2-day protocol describes the isolation and culture of neonatal mouse cardiomyocytes. We show how to easily dissect hearts from neonates, dissociate the cardiac tissue and enrich cardiomyocytes from the cardiac cell-population. We discuss the usage of different enzyme mixes for cell-dissociation, and their effects on cell-viability. The isolated cardiomyocytes can be subsequently used for a variety of morphological, electrophysiological, biochemical, cell-biological or biomechanical assays. We optimized the protocol for robustness and reproducibility, by using only commercially available solutions and enzyme mixes that show little lot-to-lot variability. We also address common problems associated with the isolation and culture of cardiomyocytes, and offer a variety of options for the optimization of isolation and culture conditions.
Cellular Biology, Issue 79, Biomedical Engineering, Bioengineering, Molecular Biology, Cell Culture Techniques, Primary Cell Culture, Disease Models, Animal, Models, Cardiovascular, Cell Biology, neonatal mouse, cardiomyocytes, isolation, culture, primary cells, NMC, heart cells, animal model
Single Cell Transcriptional Profiling of Adult Mouse Cardiomyocytes
Authors: James M. Flynn, Luis F. Santana, Simon Melov.
Institutions: Buck Institute for Research on Aging, University of Washington.
While numerous studies have examined gene expression changes from homogenates of heart tissue, this approach prevents studying the inherent stochastic variation between cells within a tissue. Isolation of pure cardiomyocyte populations through a collagenase perfusion of mouse hearts facilitates the generation of single cell microarrays for whole transcriptome gene expression, or qPCR of specific targets using nanofluidic arrays. We describe here a procedure to examine single cell gene expression profiles of cardiomyocytes isolated from the heart. This paradigm allows for the evaluation of metrics that are not reliant on the mean, for example the variance between cells within a tissue, which is not possible when using conventional whole-tissue workflows for the evaluation of gene expression (Figure 1). We have achieved robust amplification of the single cell transcriptome, yielding micrograms of double stranded cDNA that facilitates the use of microarrays on individual cells. In the procedure we describe the use of NimbleGen arrays, which were selected for their ease of use and ability to customize their design. Alternatively, a reverse transcription-specific target amplification (RT-STA) reaction allows for qPCR of hundreds of targets by nanofluidic PCR. Using either of these approaches, it is possible to examine the variability of expression between cells, as well as the expression profiles of rare cell types from within a tissue. Overall, the single cell gene expression approach allows for the generation of data that can potentially identify idiosyncratic expression profiles that are typically averaged out when examining the expression of millions of cells from typical homogenates generated from whole tissues.
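The sketch below illustrates the kind of per-gene statistic this approach makes accessible: from a cells-by-genes expression matrix, the mean, variance and coefficient of variation are computed per gene, so that cell-to-cell variability can be examined rather than only the population mean. The matrix and gene names are placeholders.

    import numpy as np

    rng = np.random.default_rng(2)
    # Illustrative expression matrix: 50 cardiomyocytes x 4 genes (e.g. qPCR-derived values).
    expr = rng.lognormal(mean=[2.0, 2.0, 1.0, 3.0], sigma=[0.2, 0.8, 0.5, 0.3], size=(50, 4))
    genes = ["geneA", "geneB", "geneC", "geneD"]

    means = expr.mean(axis=0)
    variances = expr.var(axis=0, ddof=1)
    cv = expr.std(axis=0, ddof=1) / means   # cell-to-cell variability per gene

    for g, m, v, c in zip(genes, means, variances, cv):
        print(f"{g}: mean={m:.1f}  variance={v:.1f}  CV={c:.2f}")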
Molecular Biology, Issue 58, Single cell analysis, Microarray, Gene expression, Cardiomyocyte, Mouse heart perfusion, mice, qPCR
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building it up molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Regioselective Biolistic Targeting in Organotypic Brain Slices Using a Modified Gene Gun
Authors: Jason Arsenault, Andras Nagy, Jeffrey T. Henderson, John A. O'Brien.
Institutions: University of Toronto, MRC-Laboratory of Molecular Biology, Cambridge, UK.
Transfection of DNA has been invaluable for the biological sciences, and with recent advances to organotypic brain slice preparations, the effects of various heterologous genes can be investigated easily while maintaining many aspects of in vivo biology. There has been increasing interest in transfecting terminally differentiated neurons, for which conventional transfection methods have been fraught with difficulties such as low yields and significant losses in viability. Biolistic transfection can circumvent many of these difficulties, yet only recently has this technique been modified so that it is amenable for use in mammalian tissues. New modifications to the accelerator chamber have enhanced the gene gun's firing accuracy and increased its depth of penetration, while also allowing the use of lower gas pressure (50 psi) without loss of transfection efficiency and permitting a focused regioselective spread of the particles to within 3 mm. In addition, this technique is straightforward and faster to perform than tedious microinjections. Both transient and stable expression are possible with nanoparticle bombardment, where episomal expression can be detected within 24 hr, and cell survival was shown to be better than, or at least equal to, that obtained with conventional methods. This technique has, however, one crucial advantage: it permits the transfection to be localized within a single restrained radius, thus enabling the user to anatomically isolate the heterologous gene's effects. Here we present an in-depth protocol to prepare viable adult organotypic slices and submit them to regioselective transfection using an improved gene gun.
Neuroscience, Issue 92, Biolistics, gene gun, organotypic brain slices, Diolistic, gene delivery, staining
Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice
Authors: Robert S. McNeill, Ralf S. Schmid, Ryan E. Bash, Mark Vitucci, Kristen K. White, Andrea M. Werneke, Brian H. Constance, Byron Huff, C. Ryan Miller.
Institutions: University of North Carolina School of Medicine, Emory University School of Medicine.
Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro and in vivo and may be useful in preclinical drug development for these devastating diseases.
Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft
Stress-induced Antibiotic Susceptibility Testing on a Chip
Authors: Maxim Kalashnikov, Jennifer Campbell, Jean C. Lee, Andre Sharon, Alexis F. Sauer-Budge.
Institutions: Fraunhofer USA Center for Manufacturing Innovation, Harvard Medical School, Boston University, Boston University.
We have developed a rapid microfluidic method for antibiotic susceptibility testing in a stress-based environment. Fluid is passed at high speeds over bacteria immobilized on the bottom of a microfluidic channel. In the presence of stress and antibiotic, susceptible strains of bacteria die rapidly. However, resistant bacteria survive these stressful conditions. The hypothesis behind this method is new: stress activation of biochemical pathways, which are targets of antibiotics, can accelerate antibiotic susceptibility testing. As compared to standard antibiotic susceptibility testing methods, the rate-limiting step - bacterial growth - is omitted during antibiotic application. The technical implementation of the method is in a combination of standard techniques and innovative approaches. The standard parts of the method include bacterial culture protocols, defining microfluidic channels in polydimethylsiloxane (PDMS), cell viability monitoring with fluorescence, and batch image processing for bacteria counting. Innovative parts of the method are in the use of culture media flow for mechanical stress application, use of enzymes to damage but not kill the bacteria, and use of microarray substrates for bacterial attachment. The developed platform can be used in antibiotic and nonantibiotic related drug development and testing. As compared to the standard bacterial suspension experiments, the effect of the drug can be turned on and off repeatedly over controlled time periods. Repetitive observation of the same bacterial population is possible over the course of the same experiment.
Bioengineering, Issue 83, antibiotic, susceptibility, resistance, microfluidics, microscopy, rapid, testing, stress, bacteria, fluorescence
Monitoring Cell-autonomous Circadian Clock Rhythms of Gene Expression Using Luciferase Bioluminescence Reporters
Authors: Chidambaram Ramanathan, Sanjoy K. Khan, Nimish D. Kathale, Haiyan Xu, Andrew C. Liu.
Institutions: The University of Memphis.
In mammals, many aspects of behavior and physiology such as sleep-wake cycles and liver metabolism are regulated by endogenous circadian clocks (reviewed1,2). The circadian time-keeping system is a hierarchical multi-oscillator network, with the central clock located in the suprachiasmatic nucleus (SCN) synchronizing and coordinating extra-SCN and peripheral clocks elsewhere1,2. Individual cells are the functional units for generation and maintenance of circadian rhythms3,4, and these oscillators of different tissue types in the organism share a remarkably similar biochemical negative feedback mechanism. However, due to interactions at the neuronal network level in the SCN and through rhythmic, systemic cues at the organismal level, circadian rhythms at the organismal level are not necessarily cell-autonomous5-7. Compared to traditional studies of locomotor activity in vivo and SCN explants ex vivo, cell-based in vitro assays allow for discovery of cell-autonomous circadian defects5,8. Strategically, cell-based models are more experimentally tractable for phenotypic characterization and rapid discovery of basic clock mechanisms5,8-13. Because circadian rhythms are dynamic, longitudinal measurements with high temporal resolution are needed to assess clock function. In recent years, real-time bioluminescence recording using firefly luciferase as a reporter has become a common technique for studying circadian rhythms in mammals14,15, as it allows for examination of the persistence and dynamics of molecular rhythms. To monitor cell-autonomous circadian rhythms of gene expression, luciferase reporters can be introduced into cells via transient transfection13,16,17 or stable transduction5,10,18,19. Here we describe a stable transduction protocol using lentivirus-mediated gene delivery. The lentiviral vector system is superior to traditional methods such as transient transfection and germline transmission because of its efficiency and versatility: it permits efficient delivery and stable integration into the host genome of both dividing and non-dividing cells20. Once a reporter cell line is established, the dynamics of clock function can be examined through bioluminescence recording. We first describe the generation of P(Per2)-dLuc reporter lines, and then present data from this and other circadian reporters. In these assays, 3T3 mouse fibroblasts and U2OS human osteosarcoma cells are used as cellular models. We also discuss various ways of using these clock models in circadian studies. Methods described here can be applied to a great variety of cell types to study the cellular and molecular basis of circadian clocks, and may prove useful in tackling problems in other biological systems.
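As one simple way to summarize such recordings (not the specific analysis pipeline of the protocol), the sketch below fits a damped cosine to a synthetic bioluminescence trace to recover period, damping and amplitude; all values are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def damped_cosine(t, baseline, amp, damping, period, phase):
        """Damped cosine commonly used to summarize circadian bioluminescence rhythms."""
        return baseline + amp * np.exp(-damping * t) * np.cos(2 * np.pi * t / period + phase)

    # Illustrative time course: hours vs. detrended counts/sec from a reporter cell line.
    t = np.arange(0, 120, 0.5)
    rng = np.random.default_rng(3)
    y = damped_cosine(t, 0.0, 100.0, 0.01, 23.5, 0.3) + rng.normal(0, 5, t.size)

    p0 = [0.0, 80.0, 0.02, 24.0, 0.0]
    params, _ = curve_fit(damped_cosine, t, y, p0=p0)
    print(f"estimated period ~ {params[3]:.1f} h, damping rate ~ {params[2]:.3f} 1/h")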
Genetics, Issue 67, Molecular Biology, Cellular Biology, Chemical Biology, Circadian clock, firefly luciferase, real-time bioluminescence technology, cell-autonomous model, lentiviral vector, RNA interference (RNAi), high-throughput screening (HTS)
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
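A compact sketch of the MSE computation: the signal is coarse-grained at successive timescales (non-overlapping means) and sample entropy is computed at each scale. The embedding dimension and tolerance follow common conventions (m = 2, r = 0.15 x SD of the original signal), and the input here is random noise purely for illustration.

    import numpy as np

    def sample_entropy(x, m, r):
        """Sample entropy: -ln(A/B), where B counts matches of length m and A of length m+1."""
        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - length)])
            count = 0
            for i in range(len(templates) - 1):
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return np.inf if a == 0 or b == 0 else -np.log(a / b)

    def multiscale_entropy(x, max_scale=5, m=2):
        """Coarse-grain the signal at each scale, then compute sample entropy per scale."""
        x = np.asarray(x, dtype=float)
        r = 0.15 * x.std()                      # tolerance fixed from the original signal
        mse = []
        for scale in range(1, max_scale + 1):
            n = len(x) // scale
            coarse = x[:n * scale].reshape(n, scale).mean(axis=1)
            mse.append(sample_entropy(coarse, m, r))
        return mse

    rng = np.random.default_rng(4)
    print(multiscale_entropy(rng.normal(size=1000)))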
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
Longitudinal Measurement of Extracellular Matrix Rigidity in 3D Tumor Models Using Particle-tracking Microrheology
Authors: Dustin P. Jones, William Hanna, Hamid El-Hamidi, Jonathan P. Celli.
Institutions: University of Massachusetts Boston.
The mechanical microenvironment has been shown to act as a crucial regulator of tumor growth behavior and signaling, which is itself remodeled and modified as part of a set of complex, two-way mechanosensitive interactions. While the development of biologically-relevant 3D tumor models has facilitated mechanistic studies on the impact of matrix rheology on tumor growth, the inverse problem of mapping changes in the mechanical environment induced by tumors remains challenging. Here, we describe the implementation of particle-tracking microrheology (PTM) in conjunction with 3D models of pancreatic cancer as part of a robust and viable approach for longitudinally monitoring physical changes in the tumor microenvironment, in situ. The methodology described here integrates a system of preparing in vitro 3D models embedded in a model extracellular matrix (ECM) scaffold of Type I collagen with fluorescently labeled probes uniformly distributed for position- and time-dependent microrheology measurements throughout the specimen. In vitro tumors are plated and probed in parallel conditions using multiwell imaging plates. Drawing on established methods, videos of tracer probe movements are transformed via the Generalized Stokes Einstein Relation (GSER) to report the complex frequency-dependent viscoelastic shear modulus, G*(ω). Because this approach is imaging-based, mechanical characterization is also mapped onto large transmitted-light spatial fields to simultaneously report qualitative changes in 3D tumor size and phenotype. Representative results showing contrasting mechanical responses in sub-regions associated with localized invasion-induced matrix degradation are presented, together with system calibration and validation data. Undesirable outcomes from common experimental errors and troubleshooting of these issues are also presented. The 96-well 3D culture plating format implemented in this protocol is conducive to correlation of microrheology measurements with therapeutic screening assays or molecular imaging to gain new insights into the impact of treatments or biochemical stimuli on the mechanical microenvironment.
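A minimal sketch of the first analysis step, assuming probe trajectories have already been tracked: the ensemble-averaged mean squared displacement (MSD) is computed and its power-law exponent estimated. Conversion of the MSD to G*(ω) via the GSER (e.g. Mason's approximation) would follow and is only indicated in a comment; the trajectories and frame interval below are invented.

    import numpy as np

    def ensemble_msd(tracks, max_lag):
        """Ensemble/time-averaged MSD from a list of (N_i x 2) probe trajectories (um)."""
        msd = np.zeros(max_lag)
        for lag in range(1, max_lag + 1):
            disp2 = []
            for xy in tracks:
                d = xy[lag:] - xy[:-lag]
                disp2.append((d ** 2).sum(axis=1))
            msd[lag - 1] = np.concatenate(disp2).mean()
        return msd

    # Illustrative trajectories: random walks standing in for tracer probes in the ECM.
    rng = np.random.default_rng(5)
    tracks = [rng.normal(0, 0.05, size=(200, 2)).cumsum(axis=0) for _ in range(20)]

    dt = 0.1                                   # assumed frame interval in seconds
    lags = np.arange(1, 21) * dt
    msd = ensemble_msd(tracks, 20)
    alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # local power-law exponent
    print(f"MSD ~ t^{alpha:.2f}  (alpha ~ 1: viscous; alpha -> 0: elastic)")
    # G*(w) would then be obtained from msd(1/w) and alpha via the GSER (Mason approximation).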
Bioengineering, Issue 88, viscoelasticity, mechanobiology, extracellular matrix (ECM), matrix remodeling, 3D tumor models, tumor microenvironment, stroma, matrix metalloprotease (MMP), epithelial-mesenchymal transition (EMT)
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in a variate fashion, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveal information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
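For reference, the voxelwise quantity compared in such analyses, fractional anisotropy, can be computed from the eigenvalues of the diffusion tensor with the standard formula, as in the short sketch below; the example eigenvalues are invented.

    import numpy as np

    def fractional_anisotropy(eigenvalues):
        """FA from the eigenvalues of a diffusion tensor (standard definition)."""
        lam = np.asarray(eigenvalues, dtype=float)
        md = lam.mean()                                # mean diffusivity
        return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

    # Illustrative white-matter-like tensor eigenvalues (units: 10^-3 mm^2/s).
    print(f"FA = {fractional_anisotropy([1.6, 0.4, 0.3]):.2f}")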
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
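As a minimal sketch of the starting point of a DoE workflow (not the software-guided optimal and augmented designs used in the study), the snippet below enumerates a full factorial design over a few placeholder factors; real designs would typically be reduced to fractional or D-optimal subsets.

    from itertools import product

    # Placeholder factors and levels for a transient-expression experiment.
    factors = {
        "utr_element": ["UTR-A", "UTR-B"],
        "incubation_temp_C": [22, 25, 28],
        "plant_age_weeks": [5, 6, 7],
    }

    # Full factorial design: every combination of factor levels, one run per row.
    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    for i, run in enumerate(runs, 1):
        print(i, run)
    print(f"{len(runs)} runs in the full factorial; fractional/D-optimal designs reduce this.")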
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
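A minimal sketch of one simple, empirical way to estimate an apparent dissociation constant from DSF data: the observed melting-temperature shift is fitted against ligand concentration with a single-site saturation model. This is not the specific set of models presented in the protocol, and the data are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def tm_shift(conc, dtm_max, kd_app):
        """Single-site saturation model for the Tm shift vs. ligand concentration."""
        return dtm_max * conc / (kd_app + conc)

    # Illustrative data: ligand concentration (uM) and observed Tm shift (deg C).
    conc = np.array([0, 5, 10, 25, 50, 100, 250, 500], dtype=float)
    dtm = np.array([0.0, 0.9, 1.6, 2.9, 3.8, 4.5, 5.1, 5.3])

    (dtm_max, kd_app), _ = curve_fit(tm_shift, conc, dtm, p0=[5.0, 50.0])
    print(f"apparent Kd ~ {kd_app:.0f} uM (empirical fit, not a rigorous thermodynamic Kd)")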
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
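A minimal sketch of the binarize-and-skeletonize step using scikit-image, assuming a 2D grayscale image of the membrane staining has already been loaded; here a random image stands in for real data and the thresholding choice is simplified.

    import numpy as np
    from skimage.filters import threshold_otsu, gaussian
    from skimage.morphology import skeletonize

    rng = np.random.default_rng(6)
    # Placeholder image standing in for a confocal frame of TATS membrane staining.
    image = gaussian(rng.random((256, 256)), sigma=2)

    binary = image > threshold_otsu(image)       # binarized membrane mask
    skeleton = skeletonize(binary)               # one-pixel-wide network skeleton
    print(f"skeleton pixels: {int(skeleton.sum())} (input for network-component analysis)")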
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky.
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1 with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of brain endothelial monolayers, indicating the tightness of TJs, reached 300 ohm·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 x 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
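A minimal sketch of how a permeability coefficient of the kind reported above is commonly derived from a lucifer yellow transport experiment: cleared volume is computed at each time point, the clearance slope of the insert with cells is corrected for the coated filter alone, and the result is normalized to the filter area. All numbers below are illustrative, not values from the protocol.

    import numpy as np

    def clearance_slope(times_min, abluminal_conc, luminal_conc, abluminal_volume_ml):
        """Slope of cleared volume (ml) vs. time: the PS product for one condition."""
        cleared = abluminal_conc * abluminal_volume_ml / luminal_conc
        return np.polyfit(times_min, cleared, 1)[0]

    # Illustrative lucifer yellow measurements (arbitrary concentration units).
    times = np.array([20.0, 40.0, 60.0])
    ps_total = clearance_slope(times, np.array([1.0, 2.1, 3.0]), 100.0, 1.5)     # cells + filter
    ps_filter = clearance_slope(times, np.array([6.0, 12.5, 18.0]), 100.0, 1.5)  # coated filter only

    ps_endothelium = 1.0 / (1.0 / ps_total - 1.0 / ps_filter)   # remove the filter contribution
    filter_area_cm2 = 1.12
    pe = ps_endothelium / filter_area_cm2
    print(f"Pe ~ {pe * 1000:.2f} x 10^-3 cm/min")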
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Authors: Daniel E. Bradford, Katherine P. Magruder, Rachel A. Korhumel, John J. Curtin.
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of negative affective and distinct emotional states (fear, anxiety) for use in research on psychopathology, substance use/abuse, and affective science broadly. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion.
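A minimal sketch of the potentiation arithmetic described above, assuming startle magnitudes have already been scored per trial: potentiation is the mean startle during threat cues minus the mean during matched no-threat cues, computed separately for the certain and uncertain conditions. The EMG values are invented.

    import numpy as np

    # Illustrative per-trial orbicularis oculi startle magnitudes (arbitrary EMG units).
    trials = {
        "certain_threat":      np.array([52, 61, 58, 66, 60]),
        "certain_no_threat":   np.array([30, 34, 28, 33, 31]),
        "uncertain_threat":    np.array([44, 47, 41, 49, 45]),
        "uncertain_no_threat": np.array([31, 29, 33, 30, 32]),
    }

    fear = trials["certain_threat"].mean() - trials["certain_no_threat"].mean()
    anxiety = trials["uncertain_threat"].mean() - trials["uncertain_no_threat"].mean()
    print(f"fear (certain-threat potentiation) ~ {fear:.1f}")
    print(f"anxiety (uncertain-threat potentiation) ~ {anxiety:.1f}")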
Behavior, Issue 91, Startle; electromyography; shock; addiction; uncertainty; fear; anxiety; humans; psychophysiology; translational
Comprehensive Analysis of Transcription Dynamics from Brain Samples Following Behavioral Experience
Authors: Hagit Turm, Diptendu Mukherjee, Doron Haritan, Maayan Tahor, Ami Citri.
Institutions: The Hebrew University of Jerusalem.
The encoding of experiences in the brain and the consolidation of long-term memories depend on gene transcription. Identifying the function of specific genes in encoding experience is one of the main objectives of molecular neuroscience. Furthermore, the functional association of defined genes with specific behaviors has implications for understanding the basis of neuropsychiatric disorders. Induction of robust transcription programs has been observed in the brains of mice following various behavioral manipulations. While some genetic elements are utilized recurrently following different behavioral manipulations and in different brain nuclei, transcriptional programs are overall unique to the inducing stimuli and the structure in which they are studied1,2. In this publication, a protocol is described for robust and comprehensive transcriptional profiling from brain nuclei of mice in response to behavioral manipulation. The protocol is demonstrated in the context of analysis of gene expression dynamics in the nucleus accumbens following acute cocaine experience. Subsequent to a defined in vivo experience, the target neural tissue is dissected, followed by RNA purification, reverse transcription, and the use of microfluidic arrays for comprehensive qPCR analysis of multiple target genes. This protocol is geared towards comprehensive analysis (addressing 50-500 genes) of limiting quantities of starting material, such as small brain samples or even single cells. The protocol is most advantageous for parallel analysis of multiple samples (e.g. single cells, dynamic analysis following pharmaceutical, viral or behavioral perturbations). However, the protocol could also serve for the characterization and quality assurance of samples prior to whole-genome studies by microarrays or RNAseq, as well as for validation of data obtained from whole-genome studies.
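As a small illustration of the relative-quantification arithmetic often applied to such qPCR output (the ΔΔCt method, assuming roughly 100% amplification efficiency; the protocol itself does not prescribe this exact analysis), with invented gene names and Ct values:

    # ddCt relative expression: target normalized to a reference gene, then to a control sample.
    ct = {
        "cocaine": {"Fos": 22.1, "Gapdh": 18.0},   # illustrative Ct values
        "saline":  {"Fos": 25.3, "Gapdh": 18.2},
    }

    d_ct_treated = ct["cocaine"]["Fos"] - ct["cocaine"]["Gapdh"]
    d_ct_control = ct["saline"]["Fos"] - ct["saline"]["Gapdh"]
    fold_change = 2 ** -(d_ct_treated - d_ct_control)
    print(f"Fos fold change (cocaine vs. saline) ~ {fold_change:.1f}x")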
Behavior, Issue 90, Brain, behavior, RNA, transcription, nucleus accumbens, cocaine, high-throughput qPCR, experience-dependent plasticity, gene regulatory networks, microdissection
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by using next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed how our Bayesian Change Point (BCP) algorithm had a reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
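The sketch below is not the BCP algorithm; it is a deliberately simple single change-point estimate on a binned read-density profile (choosing the split that minimizes the within-segment squared error), included only to make the segmentation idea concrete. The profile is simulated.

    import numpy as np

    def best_single_changepoint(density):
        """Index that best splits a read-density profile into two constant-mean segments."""
        density = np.asarray(density, dtype=float)
        best_i, best_cost = None, np.inf
        for i in range(1, len(density) - 1):
            left, right = density[:i], density[i:]
            cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if cost < best_cost:
                best_i, best_cost = i, cost
        return best_i

    # Illustrative binned density: background followed by a broad enriched island.
    rng = np.random.default_rng(7)
    profile = np.concatenate([rng.poisson(2, 100), rng.poisson(12, 60)])
    print(f"estimated change point near bin {best_single_changepoint(profile)} (true boundary: 100)")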
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
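A minimal sketch of the minimum-norm inverse underlying such source analysis, assuming a forward lead-field matrix (channels x sources) has already been computed from the head model: sources are estimated as s = Lᵀ(LLᵀ + λI)⁻¹x. The matrices below are random placeholders, not real data.

    import numpy as np

    def minimum_norm_inverse(leadfield, data, reg=1e-2):
        """L2 minimum-norm estimate: s = L^T (L L^T + lambda*I)^-1 x."""
        n_channels = leadfield.shape[0]
        gram = leadfield @ leadfield.T + reg * np.eye(n_channels)
        return leadfield.T @ np.linalg.solve(gram, data)

    rng = np.random.default_rng(8)
    leadfield = rng.normal(size=(128, 5000))   # placeholder: 128 channels x 5000 cortical sources
    eeg = rng.normal(size=(128, 1))            # placeholder: one time sample of channel data
    sources = minimum_norm_inverse(leadfield, eeg)
    print(sources.shape)                       # (5000, 1) estimated source amplitudes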
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP, such as texture information and blood amount from the CT scans, are extracted and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
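A minimal sketch of the pre-screening component using scikit-learn instead of RapidMiner, with random placeholder features standing in for the texture, blood-amount and demographic features: feature selection followed by an SVM classifier, evaluated by cross-validation.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    # Placeholder feature matrix: texture, blood amount, midline shift, age, injury score, ...
    X = rng.normal(size=(120, 12))
    y = rng.integers(0, 2, size=120)    # 0 = normal ICP, 1 = elevated ICP (placeholder labels)

    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=6), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"cross-validated accuracy ~ {scores.mean():.2f} (random data, so near chance)")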
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Authors: John Marshall, Koji Morikawa, Nicholas Manoukis, Charles Taylor.
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
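As a minimal sketch of the kind of model discussed (not the authors' specific model of transposable-element spread in A. gambiae), the snippet below iterates the frequency of a drive element with a transmission bias in heterozygotes and a fitness cost over discrete generations; all parameters are illustrative.

    def next_frequency(p, tau=0.8, s=0.1, h=0.5):
        """One generation of a drive allele: transmission bias tau in heterozygotes, fitness cost s."""
        q = 1.0 - p
        w_dd, w_het, w_wt = 1.0 - s, 1.0 - h * s, 1.0
        mean_w = p * p * w_dd + 2 * p * q * w_het + q * q * w_wt
        return (p * p * w_dd + 2 * p * q * w_het * tau) / mean_w

    p = 0.05   # release frequency of the transgenic element (illustrative)
    for gen in range(1, 31):
        p = next_frequency(p)
        if gen % 5 == 0:
            print(f"generation {gen}: element frequency ~ {p:.2f}")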
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
Reaggregate Thymus Cultures
Authors: Andrea White, Eric Jenkinson, Graham Anderson.
Institutions: University of Birmingham.
Stromal cells within lymphoid tissues are organized into three-dimensional structures that provide a scaffold that is thought to control the migration and development of haemopoietic cells. Importantly, the maintenance of this three-dimensional organization appears to be critical for normal stromal cell function, with two-dimensional monolayer cultures often being shown to be capable of supporting only individual fragments of lymphoid tissue function. In the thymus, complex networks of cortical and medullary epithelial cells act as a framework that controls the recruitment, proliferation, differentiation and survival of lymphoid progenitors as they undergo the multi-stage process of intrathymic T-cell development. Understanding the functional role of individual stromal compartments in the thymus is essential in determining how the thymus imposes self/non-self discrimination. Here we describe a technique in which we exploit the plasticity of fetal tissues to re-associate into intact three-dimensional structures in vitro, following their enzymatic disaggregation. The dissociation of fetal thymus lobes into heterogeneous cellular mixtures, followed by their separation into individual cellular components, is then combined with the in vitro re-association of these desired cell types into three-dimensional reaggregate structures at defined ratios, thereby providing an opportunity to investigate particular aspects of T-cell development under defined cellular conditions. (This article is based on work first reported in Methods in Molecular Biology 2007, Vol. 380, pages 185-196.)
Immunology, Issue 18, Springer Protocols, Thymus, 2-dGuo, Thymus Organ Cultures, Immune Tolerance, Positive and Negative Selection, Lymphoid Development

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms display the most relevant videos they can find, which can sometimes result in matched videos that are only loosely related.