JoVE Visualize
PubMed Article
A Short Guide to the Climatic Variables of the Last Glacial Maximum for Biogeographers.
PUBLISHED: 06-13-2015
Ecological niche models are widely used for mapping the distribution of species during the last glacial maximum (LGM). Although the selection of the variables and General Circulation Models (GCMs) used for constructing those maps determines the model predictions, we still lack a discussion about which variables and which GCMs should be included in the analysis and why. Here, we analyzed the climatic predictions of nine different GCMs for the LGM in order to help biogeographers select GCMs and climatic layers for mapping species ranges in the LGM. We 1) map the discrepancies between the climatic predictions of the nine GCMs available for the LGM, 2) analyze the similarities and differences between the GCMs and group them to help researchers choose appropriate GCMs for calibrating and projecting their ecological niche models (ENMs) for the LGM, and 3) quantify the agreement of the predictions for each bioclimatic variable to help researchers avoid environmental variables with poor consensus between models. Our results indicate that, in absolute values, GCMs disagree strongly in their temperature predictions for temperate areas, while the largest uncertainties for the precipitation variables lie in the tropics. In spite of the discrepancies between model predictions, temperature variables (BIO1-BIO11) are highly correlated between models. Precipitation variables (BIO12-BIO19) show no correlation between models; in particular, BIO14 (precipitation of the driest month) and BIO15 (precipitation seasonality, coefficient of variation) show the highest level of discrepancy between GCMs. Following our results, we strongly recommend the use of different GCMs for constructing or projecting ENMs, particularly when predicting the distribution of species that inhabit the tropics and the temperate areas of the Northern and Southern Hemispheres, because climatic predictions for those areas vary greatly among GCMs. We also recommend the exclusion of BIO14 and BIO15 from ENMs because those variables show a high level of discrepancy between GCMs; excluding them decreases the uncertainty of the predictions. All the climatic layers produced for this paper are freely available online.
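To make the agreement analysis above concrete, here is a minimal sketch of how one could quantify between-GCM consensus for a single bioclimatic variable by correlating grid-cell values across model pairs. All values and model names are invented stand-ins, not the study's actual layers; a variable with a low mean between-model correlation, as the abstract reports for BIO14 and BIO15, would be a candidate for exclusion.

```python
# Toy sketch: pairwise agreement between GCMs for one bioclimatic variable.
# Assumes each GCM layer has been resampled to a common grid and flattened
# into a 1D array of cell values; real layers would come from LGM rasters.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000                                 # grid cells on a common raster
base = rng.gamma(2.0, 20.0, n_cells)           # stand-in "true" precipitation field

# Simulate nine GCM predictions as noisy versions of the base field.
gcms = {f"GCM{i}": base + rng.normal(0.0, 15.0, n_cells) for i in range(1, 10)}

names = list(gcms)
corr = np.ones((len(names), len(names)))
for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            r = np.corrcoef(gcms[a], gcms[b])[0, 1]   # Pearson r across cells
            corr[i, j] = corr[j, i] = r

# A low mean off-diagonal correlation flags a variable with poor consensus.
off_diag = corr[np.triu_indices(len(names), k=1)]
print(f"mean between-GCM r = {off_diag.mean():.2f}")
```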
Related JoVE Video
Authors: Thomas Z. Thompson, Farres Obeidin, Alisa A. Davidoff, Cody L. Hightower, Christopher Z. Johnson, Sonya L. Rice, Rebecca-Lyn Sokolove, Brandon K. Taylor, John M. Tuck, William G. Pearson, Jr.
Published: 05-06-2014
Characterizing hyolaryngeal movement is important to dysphagia research. Prior methods require multiple measurements to obtain one kinematic measurement, whereas coordinate mapping of hyolaryngeal mechanics using the Modified Barium Swallow (MBS) uses one set of coordinates to calculate multiple variables of interest. For demonstration purposes, ten kinematic measurements were generated from one set of coordinates to determine differences in swallowing two different bolus types. Calculations of hyoid excursion against the vertebrae and the mandible are correlated to determine the importance of the axes of reference. To demonstrate the coordinate mapping methodology, 40 MBS studies were randomly selected from a dataset of healthy normal subjects with no known swallowing impairment. A 5 ml thin-liquid swallow and a 5 ml pudding swallow were measured from each subject. Nine coordinates, mapping the cranial base, mandible, vertebrae, and elements of the hyolaryngeal complex, were recorded at the frames of minimum and maximum hyolaryngeal excursion. Coordinates were mathematically converted into ten variables of hyolaryngeal mechanics. Inter-rater reliability was evaluated by intraclass correlation coefficients (ICC). Two-tailed t-tests were used to evaluate differences in kinematics by bolus viscosity. Hyoid excursion measurements against different axes of reference were correlated. Inter-rater reliability among six raters for the 18 coordinates ranged from ICC = 0.90 - 0.97. A slate of ten kinematic measurements was compared by subject between the six raters. One outlier was rejected, and the mean of the remaining reliability scores was ICC = 0.91 (95% CI: 0.84 - 0.96). Two-tailed t-tests with Bonferroni corrections comparing ten kinematic variables (5 ml thin-liquid vs. 5 ml pudding swallows) showed statistically significant differences in hyoid excursion, superior laryngeal movement, and pharyngeal shortening (p < 0.005). Pearson correlations of hyoid excursion measurements from two different axes of reference were: r = 0.62, r² = 0.38 (thin-liquid); r = 0.52, r² = 0.27 (pudding). Obtaining landmark coordinates is a reliable method for generating multiple kinematic variables from videofluoroscopic images, useful in dysphagia research.
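As a rough illustration of the coordinate-mapping idea, the sketch below derives two kinematic variables (total hyoid excursion and excursion along a chosen reference axis) from a single pair of landmark coordinates. The landmarks and axes are invented, not the authors' nine-coordinate scheme.

```python
# Toy sketch: deriving multiple kinematic variables from one set of landmark
# coordinates. Coordinates here are illustrative pixel positions only.
import numpy as np

def excursion(hyoid_min, hyoid_max, origin, axis_point):
    """Total hyoid excursion and its component along a reference axis.

    The axis is defined by two anatomical landmarks (e.g., two vertebral
    points, or a mandible-cranial base line), so the same displacement can
    be expressed against different frames of reference.
    """
    axis = np.asarray(axis_point, float) - np.asarray(origin, float)
    axis /= np.linalg.norm(axis)                  # unit vector of reference axis
    disp = np.asarray(hyoid_max, float) - np.asarray(hyoid_min, float)
    along = float(disp @ axis)                    # movement along the axis
    total = float(np.linalg.norm(disp))           # total Euclidean excursion
    return total, along

# One coordinate set yields several variables against different axes.
total, along_spine = excursion((10, 40), (18, 28), origin=(0, 0), axis_point=(0, -50))
_, along_mandible = excursion((10, 40), (18, 28), origin=(0, 0), axis_point=(30, -10))
print(f"total {total:.1f}, vs vertebrae {along_spine:.1f}, vs mandible {along_mandible:.1f}")
```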
24 Related JoVE Articles
Light/dark Transition Test for Mice
Authors: Keizo Takao, Tsuyoshi Miyakawa.
Institutions: Graduate School of Medicine, Kyoto University.
Although the whole mouse genome has been sequenced, we do not yet know the functions of most of its genes. Gene-targeting techniques, however, can be used to delete or manipulate a specific gene in mice. The influence of a given gene on a specific behavior can then be determined by conducting behavioral analyses of the mutant mice. For behavioral phenotyping of mutant mice, the light/dark transition test is one of the most widely used tests to measure anxiety-like behavior. The test is based on the natural aversion of mice to brightly illuminated areas and on their spontaneous exploratory behavior in novel environments, and it is sensitive to anxiolytic drug treatment. The apparatus consists of a dark chamber and a brightly illuminated chamber. Mice are allowed to move freely between the two chambers. The number of entries into the bright chamber and the duration of time spent there are indices of bright-space anxiety in mice. To obtain phenotyping results for a strain of mutant mice that can be readily reproduced and compared with those of other mutants, the behavioral test methods should be as identical as possible between laboratories. The procedural differences that exist between laboratories, however, make it difficult to replicate or compare results among laboratories. Here, we present our protocol for the light/dark transition test as a movie so that the details of the protocol can be demonstrated. In our laboratory, we have assessed more than 60 strains of mutant mice using the protocol shown in the movie. Those data will be disclosed as part of a public database that we are now constructing. Visualization of the protocol will facilitate understanding of the details of the entire experimental procedure, allowing for standardization of the protocols used across laboratories and comparisons of the behavioral phenotypes of various strains of mutant mice assessed using this test.
Neuroscience, Issue 1, knockout mice, transgenic mice, behavioral test, phenotyping
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli.
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging, since many proteins are expressed in an insoluble form. When working with difficult or multiple targets, it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project, our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of the disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein, soluble disulfide-rich proteins are likely to be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity
Authors: Angela J. Brandt, Gaston A. del Pino, Jean H. Burns.
Institutions: Case Western Reserve University.
Coexistence theory has often treated environmental heterogeneity as being independent of the community composition; however, biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly necessitates understanding the role of heterogeneity in PSF, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of conspecific and heterospecific plant species in the field. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community, along with additional empirical work, is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition.
Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch
Measuring the Osmotic Water Permeability Coefficient (Pf) of Spherical Cells: Isolated Plant Protoplasts as an Example
Authors: Arava Shatil-Cohen, Hadas Sibony, Xavier Draye, François Chaumont, Nava Moran, Menachem Moshelion.
Institutions: The Hebrew University of Jerusalem, Université catholique de Louvain, Université catholique de Louvain.
Studying AQP regulation mechanisms is crucial for understanding water relations at both the cellular and the whole-plant levels. Presented here is a simple and very efficient method for determining the osmotic water permeability coefficient (Pf) in plant protoplasts, applicable in principle also to other spherical cells such as frog oocytes. The first step of the assay is the isolation of protoplasts from the plant tissue of interest by enzymatic digestion into a chamber with an appropriate isotonic solution. The second step consists of an osmotic challenge assay: protoplasts immobilized on the bottom of the chamber are subjected to constant perfusion, starting with an isotonic solution and followed by a hypotonic solution. The cell swelling is video recorded. In the third step, the images are processed offline to yield volume changes, and the time course of the volume changes is correlated with the time course of the change in osmolarity of the chamber perfusion medium, using a curve-fitting procedure written in Matlab (the ‘PfFit’) to yield Pf.
Plant Biology, Issue 92, Osmotic water permeability coefficient, aquaporins, protoplasts, curve fitting, non-instantaneous osmolarity change, volume change time course
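For readers who want a feel for the final fitting step, here is a minimal sketch assuming the swelling obeys dV/dt = Pf · A(V) · Vw · (Cin(V) − Cout), with the protoplast treated as a perfect spherical osmometer. It is a simplified stand-in for the authors' Matlab ‘PfFit’ routine, which additionally models the non-instantaneous bath exchange; all constants below are illustrative.

```python
# Hedged sketch: fit Pf from a relative-volume time course.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

VW = 18.0e-6     # molar volume of water, m^3/mol
C0 = 500.0       # initial internal osmolarity, mol/m^3 (matches isotonic bath)
C_OUT = 400.0    # hypotonic bath osmolarity, mol/m^3
V0 = 4.2e-15     # initial protoplast volume, m^3 (radius ~10 um)

def swell(t, pf):
    """Relative volume V(t)/V0 predicted for a candidate Pf (m/s)."""
    def rhs(_, v):
        r = (3.0 * v[0] / (4.0 * np.pi)) ** (1.0 / 3.0)      # radius from volume
        area = 4.0 * np.pi * r ** 2
        return [pf * area * VW * (C0 * V0 / v[0] - C_OUT)]   # osmotic influx, m^3/s
    sol = solve_ivp(rhs, (t[0], t[-1]), [V0], t_eval=t, rtol=1e-8)
    return sol.y[0] / V0

t = np.linspace(0.0, 60.0, 30)                               # seconds
# Synthetic "recorded" swelling with a known Pf of 2e-6 m/s plus noise.
v_obs = swell(t, 2.0e-6) + np.random.default_rng(1).normal(0, 0.005, t.size)
pf_fit, _ = curve_fit(swell, t, v_obs, p0=[1.0e-6], bounds=(1e-8, 1e-4))
print(f"fitted Pf = {pf_fit[0]:.2e} m/s")
```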
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
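As an illustration of the kind of fit described above, the sketch below estimates a dissociation constant from melting temperatures using a simple single-site saturation model, Tm([L]) = Tm0 + ΔTm,max · [L] / (Kd + [L]). This ignores protein concentration and linked unfolding equilibria, so it is a didactic stand-in rather than the authors' exact models, and the data points are invented.

```python
# Hedged sketch: Kd estimate from DSF melting temperatures.
import numpy as np
from scipy.optimize import curve_fit

def tm_model(lig, tm0, d_tmax, kd):
    """Single-site saturation model for the ligand-induced Tm shift."""
    return tm0 + d_tmax * lig / (kd + lig)

lig = np.array([0, 5, 10, 25, 50, 100, 250, 500, 1000.0])              # ligand, uM
tm = np.array([52.1, 52.9, 53.6, 55.0, 56.2, 57.3, 58.4, 58.9, 59.2])  # Tm, deg C

popt, pcov = curve_fit(tm_model, lig, tm, p0=[52.0, 7.0, 50.0])
tm0, d_tmax, kd = popt
kd_err = np.sqrt(np.diag(pcov))[2]
print(f"Kd ~ {kd:.0f} +/- {kd_err:.0f} uM (max Tm shift = {d_tmax:.1f} C)")
```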
Measuring Neural and Behavioral Activity During Ongoing Computerized Social Interactions: An Examination of Event-Related Brain Potentials
Authors: Jason R. Themanson.
Institutions: Illinois Wesleyan University.
Social exclusion is a complex social phenomenon with powerful negative consequences. Given the impact of social exclusion on mental and emotional health, an understanding of how perceptions of social exclusion develop over the course of a social interaction is important for advancing treatments aimed at lessening the harmful costs of being excluded. To date, most scientific examinations of social exclusion have looked at exclusion after a social interaction has been completed. While this has been very helpful in developing an understanding of what happens to a person following exclusion, it has not helped to clarify the moment-to-moment dynamics of the process of social exclusion. Accordingly, the current protocol was developed to obtain an improved understanding of social exclusion by examining the patterns of event-related brain activation that are present during social interactions. This protocol allows greater precision and sensitivity in detailing the social processes that lead people to feel as though they have been excluded from a social interaction. Importantly, the current protocol can be adapted to include research projects that vary the nature of exclusionary social interactions by altering how frequently participants are included, how long the periods of exclusion will last in each interaction, and when exclusion will take place during the social interactions. Further, the current protocol can be used to examine variables and constructs beyond those related to social exclusion. This capability to address a variety of applications across psychology by obtaining both neural and behavioral data during ongoing social interactions suggests the present protocol could be at the core of a developing area of scientific inquiry related to social interactions.
Behavior, Issue 93, Event-related brain potentials (ERPs), Social Exclusion, Neuroscience, N2, P3, Cognitive Control
Quantification of Orofacial Phenotypes in Xenopus
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Institutions: Virginia Commonwealth University.
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development will allow more rigorous analysis of orofacial phenotypes upon perturbation with substances that genetically or molecularly manipulate gene expression or protein function. Using two-dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height, and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two-dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce the landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where landmark positions have changed significantly. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware, and thus will be a valuable research and teaching resource.
Developmental Biology, Issue 93, Orofacial quantification, geometric morphometrics, Xenopus, orofacial development, orofacial defects, shape changes, facial dimensions
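The following sketch illustrates two computational steps named above, landmark superimposition and principal component analysis, on invented landmark sets; a real analysis would use digitized orofacial landmarks and dedicated morphometrics software.

```python
# Illustrative sketch: Procrustes superimposition removes position, scale,
# and rotation before shapes are compared or fed to a PCA.
import numpy as np
from scipy.spatial import procrustes

# Five 2D landmarks around a mouth opening for two embryos (invented).
control = np.array([[0, 0], [2, 1], [4, 0], [2, -1], [2, 0]], float)
treated = np.array([[0, 0], [2, 2], [4, 0], [2, -2], [2, 0]], float) * 1.3 + 5

aligned_c, aligned_t, disparity = procrustes(control, treated)
print(f"Procrustes disparity (shape difference): {disparity:.3f}")

# PCA of aligned shapes: flatten each (landmarks x 2) configuration into a
# row of a shape matrix, then take principal components via SVD.
shapes = np.stack([aligned_c.ravel(), aligned_t.ravel()])
centered = shapes - shapes.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T            # PC scores plotted in the scatter plot
print("PC1 scores:", np.round(scores[:, 0], 3))
```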
Measurement of the Pressure-volume Curve in Mouse Lungs
Authors: Nathachit Limjunyawong, Jonathan Fallica, Maureen R. Horton, Wayne Mitzner.
Institutions: Johns Hopkins University.
In recent decades the mouse has become the primary animal model of a variety of lung diseases. In models of emphysema or fibrosis, the essential phenotypic changes are best assessed by measurement of the changes in lung elasticity. To best understand specific mechanisms underlying such pathologies in mice, it is essential to make functional measurements that can reflect the developing pathology. Although there are many ways to measure elasticity, the classical method is that of the total lung pressure-volume (PV) curve done over the whole range of lung volumes. This measurement has been made on adult lungs from nearly all mammalian species dating back almost 100 years, and such PV curves also played a major role in the discovery and understanding of the function of pulmonary surfactant in fetal lung development. Unfortunately, such total PV curves have not been widely reported in the mouse, despite the fact that they can provide useful information on the macroscopic effects of structural changes in the lung. Although partial PV curves measuring just the changes in lung volume are sometimes reported, without a measure of absolute volume, the nonlinear nature of the total PV curve makes these partial ones very difficult to interpret. In the present study, we describe a standardized way to measure the total PV curve. We have then tested the ability of these curves to detect changes in mouse lung structure in two common lung pathologies, emphysema and fibrosis. Results showed significant changes in several variables consistent with expected structural changes with these pathologies. This measurement of the lung PV curve in mice thus provides a straightforward means to monitor the progression of the pathophysiologic changes over time and the potential effect of therapeutic procedures.
Medicine, Issue 95, Lung compliance, Lung hysteresis, Pulmonary surfactant, Lung elasticity, Quasistatic compliance, Fibrosis, Emphysema
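Two summary variables commonly read off such curves are the quasistatic compliance of the deflation limb and the inflation-deflation hysteresis area. The toy sketch below computes both from synthetic sigmoidal limbs; it is illustrative only, not the authors' analysis.

```python
# Toy sketch: compliance and hysteresis from a synthetic total PV loop.
import numpy as np

p_inf = np.linspace(0, 30, 16)                   # inflation pressures, cmH2O
v_inf = 1.2 / (1 + np.exp(-(p_inf - 12) / 4))    # inflation limb volume, ml
p_def = p_inf[::-1]                              # deflation pressures, descending
v_def = 1.2 / (1 + np.exp(-(p_def - 7) / 4))     # deflation limb sits higher

# Quasistatic compliance: slope of the deflation limb in a low-pressure window.
win = (p_def >= 3) & (p_def <= 10)
compliance = np.polyfit(p_def[win], v_def[win], 1)[0]    # ml per cmH2O

# Hysteresis: area enclosed between deflation and inflation limbs.
hysteresis = np.trapz(v_def[::-1] - v_inf, p_inf)        # ml * cmH2O

print(f"compliance ~ {compliance:.3f} ml/cmH2O, hysteresis ~ {hysteresis:.2f} ml*cmH2O")
```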
Development of a Quantitative Recombinase Polymerase Amplification Assay with an Internal Positive Control
Authors: Zachary A. Crannell, Brittany Rohrman, Rebecca Richards-Kortum.
Institutions: Rice University.
It was recently demonstrated that recombinase polymerase amplification (RPA), an isothermal amplification platform for pathogen detection, may be used to quantify DNA sample concentration using a standard curve. In this manuscript, a detailed protocol for developing and implementing a real-time quantitative recombinase polymerase amplification assay (qRPA assay) is provided. Using HIV-1 DNA quantification as an example, the assembly of real-time RPA reactions, the design of an internal positive control (IPC) sequence, and co-amplification of the IPC and target of interest are all described. Instructions and data processing scripts for the construction of a standard curve using data from multiple experiments are provided, which may be used to predict the concentration of unknown samples or assess the performance of the assay. Finally, an alternative method for collecting real-time fluorescence data with a microscope and a stage heater as a step towards developing a point-of-care qRPA assay is described. The protocol and scripts provided may be used for the development of a qRPA assay for any DNA target of interest.
Genetics, Issue 97, recombinase polymerase amplification, isothermal amplification, quantitative, diagnostic, HIV-1, viral load
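The standard-curve step lends itself to a compact illustration: fit threshold time against log10 of the starting copy number, then invert the fit for unknowns. The numbers below are invented, and a real assay would also normalize against the internal positive control (IPC).

```python
# Sketch of the qRPA standard-curve step with invented data.
import numpy as np

copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])       # standards, copies/reaction
t_thresh = np.array([11.8, 9.9, 8.1, 6.2, 4.3])    # minutes to cross threshold

slope, intercept = np.polyfit(np.log10(copies), t_thresh, 1)

def estimate_copies(threshold_time_min):
    """Invert the linear standard curve to predict sample concentration."""
    return 10 ** ((threshold_time_min - intercept) / slope)

print(f"slope {slope:.2f} min/log10; unknown at 7.0 min ~ "
      f"{estimate_copies(7.0):.2e} copies/reaction")
```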
Studying Food Reward and Motivation in Humans
Authors: Hisham Ziauddeen, Naresh Subramaniam, Victoria C. Cambridge, Nenad Medic, Ismaa Sadaf Farooqi, Paul C. Fletcher.
Institutions: University of Cambridge, University of Cambridge, University of Cambridge, Addenbrooke's Hospital.
A key challenge in studying reward processing in humans is to go beyond subjective self-report measures and quantify different aspects of reward, such as hedonics, motivation, and goal value, in more objective ways. This is particularly relevant for understanding overeating and obesity as well as their potential treatments. This paper describes a set of measures of food-related motivation that use handgrip force as a motivational measure. These methods can be used to examine changes in food-related motivation with metabolic (satiety) and pharmacological manipulations, and to evaluate interventions targeted at overeating and obesity. However, to understand food-related decision making in the complex food environment, it is essential to ascertain the reward goal values that guide the decisions and behavioral choices people make. These values are hidden, but they can be ascertained more objectively using metrics such as willingness to pay, and a method for this is described. Both sets of methods provide quantitative measures of motivation and goal value that can be compared within and between individuals.
Behavior, Issue 85, Food reward, motivation, grip force, willingness to pay, subliminal motivation
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky.
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells, such as pericytes, were further eliminated by plating the microvessel fragments in puromycin-containing medium. They were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of the brain endothelial monolayers, indicating the tightness of the TJs, reached 300 Ω·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 × 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiI-LDL across the endothelial cell monolayer. In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER),
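For orientation, here is a sketch of one common way to compute Pe in such experiments, using the clearance approach: the slope of cleared volume versus time gives PS, the coated-filter-alone contribution is removed as a series resistance, and the result is normalized by insert area. All numbers are illustrative, not the authors' data.

```python
# Hedged sketch: Pe for lucifer yellow from clearance measurements.
# Cleared volume Cl(t) = C_abluminal(t) * V_abluminal / C_luminal.
import numpy as np

t = np.array([0, 20, 40, 60.0])                 # min
cl_total = np.array([0.0, 4.0, 8.0, 12.0])      # ul cleared, monolayer on filter
cl_filter = np.array([0.0, 24.0, 48.0, 72.0])   # ul cleared, coated filter alone

ps_t = np.polyfit(t, cl_total, 1)[0]            # ul/min, monolayer + filter
ps_f = np.polyfit(t, cl_filter, 1)[0]           # ul/min, filter alone
ps_e = 1.0 / (1.0 / ps_t - 1.0 / ps_f)          # endothelium-only PS (series correction)

area_cm2 = 1.12                                 # e.g., a 12-well insert (assumed)
pe = (ps_e * 1e-3) / area_cm2                   # ul/min -> ml/min, over cm^2 -> cm/min
print(f"Pe(LY) ~ {pe:.2e} cm/min")              # same order as the reported 0.26e-3
```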
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
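As a minimal illustration of the design-setup idea (real DoE software would typically select a fractional, optimal subset rather than a full factorial), candidate factor combinations can be enumerated programmatically. The factors and levels below are hypothetical.

```python
# Toy sketch: enumerate a full factorial design over candidate factors.
from itertools import product

factors = {
    "promoter": ["35S", "nos"],               # hypothetical regulatory elements
    "incubation_temp_C": [22, 25, 28],        # hypothetical incubation conditions
    "leaf_age": ["young", "old"],             # hypothetical plant growth factor
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(runs)} runs in the full factorial")   # 2 * 3 * 2 = 12
for run in runs[:3]:
    print(run)
```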
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
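In the same didactic spirit, here is a bare-bones covariance-based principal component analysis of a subjects-by-voxels matrix, the core operation behind many of the multivariate approaches surveyed; the data are random stand-ins for real activation images.

```python
# Didactic sketch: covariance PCA of flattened activation images.
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_voxels = 20, 500
data = rng.normal(size=(n_subjects, n_voxels))   # stand-in activation images

centered = data - data.mean(axis=0)              # remove the mean image
u, s, vt = np.linalg.svd(centered, full_matrices=False)

patterns = vt                                    # rows: voxel-wise covariance patterns
scores = u * s                                   # each subject's expression of a pattern
var_explained = s**2 / np.sum(s**2)

# Subject scores on a pattern can then be correlated with behavior or
# compared between diagnostic groups.
print(f"PC1 explains {var_explained[0]:.1%} of the covariance")
print("subject scores on PC1:", np.round(scores[:3, 0], 2))
```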
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro
Authors: Aya D. Pusic, Yelena Y. Grinberg, Heidi M. Mitchell, Richard P. Kraig.
Institutions: The University of Chicago Medical Center, The University of Chicago Medical Center.
Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects the electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examining the systems biology aspects of migraine, since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations where environmental conditions can be controlled are necessary. Here, it is important to recognize the limitations of acute slices and the distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling since preparing the slices alone triggers pro-inflammatory changes that last days, epileptiform behavior due to the high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures since the cultures closely parallel their in vivo counterpart with mature trisynaptic function; show quiescent astrocytes, microglia, and cytokine levels; and SD is easily induced in an unanesthetized preparation. Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the sole means to date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, use of cDNA preamplification for detection of ultra-low-level targets such as interferon-gamma using whole, regional, or specific cell-enhanced (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed with multiplexed phosphoprotein-related targets, with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic and modulate SD immune signaling.
Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma
Biochemical Measurement of Neonatal Hypoxia
Authors: Megan S. Plank, Teleka C. Calderon, Yayesh Asmerom, Danilo S. Boskovic, Danilyn M. Angeles.
Institutions: Loma Linda University, Loma Linda University.
Neonatal hypoxia ischemia is characterized by inadequate blood perfusion of a tissue or a systemic lack of oxygen. This condition is thought to cause or exacerbate well-documented neonatal disorders, including neurological impairment 1-3. Decreased adenosine triphosphate production occurs due to a lack of oxidative phosphorylation. To compensate for this energy-deprived state, molecules containing high-energy phosphate bonds are degraded 2. This leads to increased levels of adenosine, which is subsequently degraded to inosine, hypoxanthine, xanthine, and finally to uric acid. The final two steps in this degradation process are performed by xanthine oxidoreductase. This enzyme exists in the form of xanthine dehydrogenase under normoxic conditions but is converted to xanthine oxidase (XO) under hypoxia-reperfusion circumstances 4, 5. Unlike xanthine dehydrogenase, XO generates hydrogen peroxide as a byproduct of purine degradation 4, 6. This hydrogen peroxide, in combination with other reactive oxygen species (ROS) produced during hypoxia, oxidizes uric acid to form allantoin and reacts with lipid membranes to generate malondialdehyde (MDA) 7-9. Most mammals, humans excepted, possess the enzyme uricase, which converts uric acid to allantoin. In humans, however, allantoin can only be formed by ROS-mediated oxidation of uric acid. Because of this, allantoin is considered to be a marker of oxidative stress in humans, but not in the mammals that have uricase. We describe methods employing high-pressure liquid chromatography (HPLC) and gas chromatography mass spectrometry (GC/MS) to measure biochemical markers of neonatal hypoxia ischemia. Human blood is used for most tests. Animal blood may also be used, while recognizing the potential for uricase-generated allantoin. Purine metabolites were linked to hypoxia as early as 1963, and the reliability of hypoxanthine, xanthine, and uric acid as biochemical indicators of neonatal hypoxia was validated by several investigators 10-13. The HPLC method used for the quantification of purine compounds is fast, reliable, and reproducible. The GC/MS method used for the quantification of allantoin, a relatively new marker of oxidative stress, was adapted from Gruber et al. 7. This method avoids certain artifacts and requires low volumes of sample. Methods used for the synthesis of MDA were described elsewhere 14, 15. GC/MS-based quantification of MDA was adapted from Paroni et al. and Cighetti et al. 16, 17. Xanthine oxidase activity was measured by HPLC by quantifying the conversion of pterin to isoxanthopterin 18. This approach proved to be sufficiently sensitive and reproducible.
Medicine, Issue 54, hypoxia, Ischemia, Neonate, Hypoxanthine, Xanthine, Uric Acid, Allantoin, Xanthine Oxidase, Malondialdehyde
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve our understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All the predictions are tagged with a confidence score which tells how accurate the predictions are without knowing the experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information could be collected by the users based on experimental evidence or biological insights with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to: ● Set up reactions and thermal cycling conditions for a conventional PCR experiment ● Understand the function of various reaction components and their overall effect on a PCR experiment ● Design and optimize a PCR experiment for any DNA template ● Troubleshoot failed PCR experiments
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
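Two of the quantities this protocol teaches, primer melting temperature and cycling conditions, can be sketched in a few lines. The snippet below uses the rule-of-thumb Wallace formula, Tm = 2(A+T) + 4(G+C), which is reasonable only for short primers, together with generic three-step cycling values; the primers shown are arbitrary examples, not from the protocol.

```python
# Hedged sketch: Wallace-rule Tm estimate and a generic cycling program.
def wallace_tm(primer: str) -> int:
    """Rule-of-thumb melting temperature in deg C for a short primer."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

fwd = "CAGGAAACAGCTATGAC"   # arbitrary example primers
rev = "GTAAAACGACGGCCAGT"
tm_low = min(wallace_tm(fwd), wallace_tm(rev))

# Anneal ~5 deg C below the lower primer Tm; extend ~1 min per kb with Taq.
program = [
    ("initial denaturation", 95, "3 min"),
    ("denature (30 cycles)", 95, "30 sec"),
    ("anneal   (30 cycles)", tm_low - 5, "30 sec"),
    ("extend   (30 cycles)", 72, "1 min per kb"),
    ("final extension", 72, "5 min"),
]
for step, temp_c, duration in program:
    print(f"{step:<22} {temp_c:>3} C   {duration}")
```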
Preparation and Use of Samarium Diiodide (SmI2) in Organic Synthesis: The Mechanistic Role of HMPA and Ni(II) Salts in the Samarium Barbier Reaction
Authors: Dhandapani V. Sadasivam, Kimberly A. Choquette, Robert A. Flowers II.
Institutions: Lehigh University .
Although initially considered an esoteric reagent, SmI2 has become a common tool for synthetic organic chemists. SmI2 is generated through the addition of molecular iodine to samarium metal in THF.1,2-3 It is a mild and selective single electron reductant and its versatility is a result of its ability to initiate a wide range of reductions including C-C bond-forming and cascade or sequential reactions. SmI2 can reduce a variety of functional groups including sulfoxides and sulfones, phosphine oxides, epoxides, alkyl and aryl halides, carbonyls, and conjugated double bonds.2-12 One of the fascinating features of SmI2-mediated reactions is the ability to manipulate the outcome of reactions through the selective use of cosolvents or additives. In most instances, additives are essential in controlling the rate of reduction and the chemo- or stereoselectivity of reactions.13-14 Additives commonly utilized to fine-tune the reactivity of SmI2 can be classified into three major groups: (1) Lewis bases (HMPA, other electron-donor ligands, chelating ethers, etc.), (2) proton sources (alcohols, water, etc.), and (3) inorganic additives (Ni(acac)2, FeCl3, etc.).3 Understanding the mechanism of SmI2 reactions and the role of the additives enables utilization of the full potential of the reagent in organic synthesis. The Sm-Barbier reaction is chosen to illustrate the synthetic importance and mechanistic role of two common additives, HMPA and Ni(II), in this reaction. The Sm-Barbier reaction is similar to the traditional Grignard reaction, with the only difference being that the alkyl halide, carbonyl, and Sm reductant are mixed simultaneously in one pot.1,15 Examples of Sm-mediated Barbier reactions with a range of coupling partners have been reported,1,3,7,10,12 and have been utilized in key steps of the synthesis of large natural products.16,17 Previous studies on the effect of additives on SmI2 reactions have shown that HMPA enhances the reduction potential of SmI2 by coordinating to the samarium metal center, producing a more powerful,13-14,18 sterically encumbered reductant19-21 and in some cases playing an integral role in post electron-transfer steps facilitating subsequent bond-forming events.22 In the Sm-Barbier reaction, HMPA has been shown to additionally activate the alkyl halide by forming a complex in a pre-equilibrium step.23 Ni(II) salts are a catalytic additive used frequently in Sm-mediated transformations.24-27 Though critical for success, the mechanistic role of Ni(II) was not known in these reactions. Recently it has been shown that SmI2 reduces Ni(II) to Ni(0), and the reaction is then carried out through organometallic Ni(0) chemistry.28 These mechanistic studies highlight that although the same Barbier product is obtained, the use of different additives in the SmI2 reaction drastically alters the mechanistic pathway of the reaction. The protocol for running these SmI2-initiated reactions is described.
Chemistry, Issue 72, Organic Chemistry, Chemical Engineering, Biochemistry, Samarium diiodide, Sml2, Samarium-Barbier Reaction, HMPA, hexamethylphosphoramide, Ni(II), Nickel(II) acetylacetonate, nickel, samarium, iodine, additives, synthesis, catalyst, reaction, synthetic organic chemistry
Biochemical and High Throughput Microscopic Assessment of Fat Mass in Caenorhabditis elegans
Authors: Elizabeth C. Pino, Christopher M. Webster, Christopher E. Carr, Alexander A. Soukas.
Institutions: Massachusetts General Hospital and Harvard Medical School, Massachusetts Institute of Technology.
The nematode C. elegans has emerged as an important model for the study of conserved genetic pathways regulating fat metabolism as it relates to human obesity and its associated pathologies. Several previous methodologies developed for the visualization of C. elegans triglyceride-rich fat stores have proven to be erroneous, highlighting cellular compartments other than lipid droplets. Other methods require specialized equipment, are time-consuming, or yield inconsistent results. We introduce a rapid, reproducible, fixative-based Nile red staining method for the accurate and rapid detection of neutral lipid droplets in C. elegans. A short fixation step in 40% isopropanol makes animals completely permeable to Nile red, which is then used to stain animals. Spectral properties of this lipophilic dye allow it to strongly and selectively fluoresce in the yellow-green spectrum only when in a lipid-rich environment, but not in more polar environments. Thus, lipid droplets can be visualized on a fluorescent microscope equipped with simple GFP imaging capability after only a brief Nile red staining step in isopropanol. The speed, affordability, and reproducibility of this protocol make it ideally suited for high throughput screens. We also demonstrate a paired method for the biochemical determination of triglycerides and phospholipids using gas chromatography/mass spectrometry (GC/MS). This more rigorous protocol should be used as confirmation of results obtained from the Nile red microscopic lipid determination. We anticipate that these techniques will become new standards in the field of C. elegans metabolic research.
Genetics, Issue 73, Biochemistry, Cellular Biology, Molecular Biology, Developmental Biology, Physiology, Anatomy, Caenorhabditis elegans, Obesity, Energy Metabolism, Lipid Metabolism, C. elegans, fluorescent lipid staining, lipids, Nile red, fat, high throughput screening, obesity, gas chromatography, mass spectrometry, GC/MS, animal model
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Live Cell Imaging of Alphaherpes Virus Anterograde Transport and Spread
Authors: Matthew P. Taylor, Radomir Kratchmarov, Lynn W. Enquist.
Institutions: Montana State University, Princeton University.
Advances in live cell fluorescence microscopy techniques, as well as the construction of recombinant viral strains that express fluorescent fusion proteins, have enabled real-time visualization of the transport and spread of alphaherpes virus infection of neurons. Fusions of fluorescent proteins to viral membrane, tegument, and capsid proteins, in conjunction with live cell imaging, have identified viral particle assemblies undergoing transport within axons. Similar tools have been successfully employed for analyses of cell-cell spread of viral particles to quantify the number and diversity of virions transmitted between cells. Importantly, the techniques of live cell imaging of anterograde transport and spread produce a wealth of information, including particle transport velocities, distributions of particles, and temporal analyses of protein localization. Alongside classical viral genetic techniques, these methodologies have provided critical insights into important mechanistic questions. In this article we describe in detail the imaging methods that were developed to answer basic questions of alphaherpes virus transport and spread.
Virology, Issue 78, Infection, Immunology, Medicine, Molecular Biology, Cellular Biology, Microbiology, Genetics, Microscopy, Fluorescence, Neurobiology, Herpes virus, fluorescent protein, epifluorescent microscopy, neuronal culture, axon, virion, video microscopy, virus, live cell, imaging
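One of the measurements these experiments yield, particle transport velocity, reduces to simple arithmetic once fluorescent puncta have been tracked frame by frame. The sketch below uses invented positions and calibration values, not data from the article.

```python
# Toy sketch: instantaneous and mean transport velocity from a tracked punctum.
import numpy as np

frame_interval_s = 0.5                    # time between frames (assumed)
px_per_um = 6.25                          # camera calibration (assumed)

# x, y pixel positions of one punctum across consecutive frames (invented).
track_px = np.array([[10, 50], [18, 51], [27, 53], [38, 54], [50, 56.0]])

steps_um = np.linalg.norm(np.diff(track_px, axis=0), axis=1) / px_per_um
velocities = steps_um / frame_interval_s  # um/s for each frame interval

print("instantaneous velocities (um/s):", np.round(velocities, 2))
print(f"mean run velocity: {velocities.mean():.2f} um/s")
```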
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research questions. However, many researchers are new to this topic, and so it presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision for this article is that it serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
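As a toy companion to that pipeline, the sketch below builds a quick distance-based guide tree (UPGMA, i.e., average-linkage clustering of p-distances) from pre-aligned sequences. This is a sanity check only; the protocols above use model-based maximum likelihood and Bayesian inference for the final phylogeny, and the sequences here are invented.

```python
# Toy sketch: UPGMA guide tree from p-distances of aligned sequences.
import numpy as np
from scipy.cluster.hierarchy import average, dendrogram
from scipy.spatial.distance import squareform

aligned = {                                  # pre-aligned toy sequences
    "taxonA": "ACGTACGTAC",
    "taxonB": "ACGTACGTTC",
    "taxonC": "ACGAACCTTC",
    "taxonD": "TCGAACCTTA",
}

names = list(aligned)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = aligned[names[i]], aligned[names[j]]
        p = sum(x != y for x, y in zip(a, b)) / len(a)   # proportion of differing sites
        dist[i, j] = dist[j, i] = p

tree = average(squareform(dist))             # UPGMA-style linkage matrix
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order of the tree
```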
The Double-H Maze: A Robust Behavioral Test for Learning and Memory in Rodents
Authors: Robert D. Kirch, Richard C. Pinnell, Ulrich G. Hofmann, Jean-Christophe Cassel.
Institutions: University Hospital Freiburg, UMR 7364 Université de Strasbourg, CNRS, Neuropôle de Strasbourg.
Spatial cognition research in rodents typically employs the use of maze tasks, whose attributes vary from one maze to the next. These tasks vary by their behavioral flexibility and required memory duration, the number of goals and pathways, and also the overall task complexity. A confounding feature in many of these tasks is the lack of control over the strategy employed by the rodents to reach the goal, e.g., allocentric (declarative-like) or egocentric (procedural) based strategies. The double-H maze is a novel water-escape memory task that addresses this issue, by allowing the experimenter to direct the type of strategy learned during the training period. The double-H maze is a transparent device, which consists of a central alleyway with three arms protruding on both sides, along with an escape platform submerged at the extremity of one of these arms. Rats can be trained using an allocentric strategy by alternating the start position in the maze in an unpredictable manner (see protocol 1; §4.7), thus requiring them to learn the location of the platform based on the available allothetic cues. Alternatively, an egocentric learning strategy (protocol 2; §4.8) can be employed by releasing the rats from the same position during each trial, until they learn the procedural pattern required to reach the goal. This task has been proven to allow for the formation of stable memory traces. Memory can be probed following the training period in a misleading probe trial, in which the starting position for the rats alternates. Following an egocentric learning paradigm, rats typically resort to an allocentric-based strategy, but only when their initial view on the extra-maze cues differs markedly from their original position. This task is ideally suited to explore the effects of drugs/perturbations on allocentric/egocentric memory performance, as well as the interactions between these two memory systems.
Behavior, Issue 101, Double-H maze, spatial memory, procedural memory, consolidation, allocentric, egocentric, habits, rodents, video tracking system

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In those cases, our algorithms display the most relevant videos available, which can sometimes result in matched videos with only a slight relation.
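Purely for illustration, text matching of this general kind can be sketched with TF-IDF cosine similarity. The actual JoVE matching algorithm is not public, and the titles below are invented.

```python
# Illustrative sketch only: rank candidate video titles against an abstract
# by TF-IDF cosine similarity. Not JoVE's actual algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_titles = [
    "Light/dark transition test for anxiety-like behavior in mice",
    "Measuring the pressure-volume curve in mouse lungs",
    "Quantitative recombinase polymerase amplification assay",
]
abstract = "An isothermal amplification assay for quantifying viral DNA"

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(video_titles + [abstract])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for title, score in sorted(zip(video_titles, sims), key=lambda t: -t[1]):
    print(f"{score:.2f}  {title}")
```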