JoVE Visualize
Related JoVE Video
Pubmed Article
Using noun phrases for navigating biomedical literature on Pubmed: how many updates are we losing track of?
PUBLISHED: 07-01-2011
Author-supplied citations are only a fraction of the literature related to a paper. The "related citations" list on PubMed is typically dozens or hundreds of results long and offers no hint as to why those results are related. Using noun phrases derived from the sentences of the paper, we show that it is possible to navigate more transparently to PubMed updates through search terms that can associate a paper with its citations. The algorithm to generate these search terms involved automatically extracting noun phrases from the paper using natural language processing tools and ranking them by the number of occurrences in the paper relative to the number of occurrences on the web. We define search queries with at least one instance of overlap between the author-supplied citations of the paper and the top 20 search results as citation validated (CV). When the overlapping citations were written by the same authors as the paper itself, we call the query CV-S; when they were written by different authors, CV-D. For a systematic sample of 883 papers on PubMed Central, at least one of the search terms for 86% of the papers is CV-D, versus 65% for the top 20 PubMed "related citations." We hypothesize that these quantities, computed across the 20 million papers on PubMed, would fall within 5% of these percentages. Averaged across all 883 papers, 5 search terms are CV-D, 10 search terms are CV-S, and 6 unique citations validate these searches. The potentially related literature uncovered by citation-validated searches (either CV-S or CV-D) is on the order of ten results per paper--many more if the remaining searches that are not citation-validated are taken into account. The significance and relationship of each search result to the paper can only be vetted and explained by a researcher with knowledge of or interest in that paper.
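The ranking and validation steps described above can be illustrated with a short sketch. This is not the authors' code; it is a minimal Python illustration that assumes the noun phrases have already been extracted (e.g., by an NLP chunker) and that approximate web-hit counts are available as a lookup table.

```python
from collections import Counter

def rank_search_terms(paper_phrases, web_counts, top_n=20):
    """Rank candidate noun phrases by how often they occur in the paper
    relative to how common they are on the web (rarer on the web = more specific).

    paper_phrases: list of noun phrases extracted from the paper.
    web_counts:    dict mapping phrase -> approximate number of web hits.
    """
    in_paper = Counter(paper_phrases)
    scored = {
        phrase: in_paper[phrase] / (1.0 + web_counts.get(phrase, 0))
        for phrase in in_paper
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

def is_citation_validated(top_results, author_citations):
    """A search term is 'citation validated' (CV) if at least one of its
    top search results overlaps the paper's author-supplied citations."""
    return len(set(top_results) & set(author_citations)) > 0
```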
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Published: 01-10-2014
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
24 Related JoVE Articles!
Sigma's Non-specific Protease Activity Assay - Casein as a Substrate
Authors: Carrie Cupp-Enyard.
Institutions: Sigma Aldrich.
Proteases break peptide bonds. In the lab, it is often necessary to measure and/or compare the activity of proteases. Sigma's non-specific protease activity assay may be used as a standardized procedure to determine protease activity, as is done in Sigma's quality control procedures. In this assay, casein acts as the substrate. When the protease being tested digests casein, the amino acid tyrosine is liberated along with other amino acids and peptide fragments. Folin & Ciocalteu's phenol reagent (Folin's reagent) reacts primarily with free tyrosine to produce a blue chromophore, which is quantified as an absorbance value on a spectrophotometer. The more tyrosine released from casein, the more chromophore is generated and the stronger the activity of the protease. Absorbance values generated by the protease are compared to a standard curve, generated by reacting known quantities of tyrosine with the Folin & Ciocalteu reagent, to correlate changes in absorbance with the amount of tyrosine in micromoles. From the standard curve, the activity of protease samples can be determined in Units, where one Unit corresponds to one micromole of tyrosine equivalents released from casein per minute.
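As a worked illustration of the final calculation, the sketch below fits a linear tyrosine standard curve and converts a blank-corrected absorbance into Units. All numbers are placeholders, and the full protocol also applies volume correction factors that are omitted here.

```python
import numpy as np

# Hypothetical tyrosine standard curve: micromoles of tyrosine vs. absorbance.
std_umol = np.array([0.0, 0.055, 0.111, 0.221, 0.442, 0.553])
std_abs  = np.array([0.0, 0.050, 0.100, 0.199, 0.401, 0.502])

# Linear fit: absorbance = slope * umol_tyrosine + intercept
slope, intercept = np.polyfit(std_umol, std_abs, 1)

def tyrosine_released_umol(sample_abs):
    """Convert a blank-corrected sample absorbance to umol of tyrosine equivalents."""
    return (sample_abs - intercept) / slope

def protease_units(sample_abs, incubation_min):
    """One Unit releases one umol of tyrosine equivalents from casein per minute.
    (Volume correction factors from the full protocol are omitted here.)"""
    return tyrosine_released_umol(sample_abs) / incubation_min

# Example: blank-corrected absorbance of 0.250 after a 10 min digestion.
print(f"{protease_units(0.250, incubation_min=10.0):.4f} Units")
```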
biochemistry, Issue 19, protease, casein, quality control assay, folin and ciocalteu's reagent, folin's reagent, colorimetric detection, spectrophotometer, Sigma-Aldrich
Manufacturing Devices and Instruments for Easier Rat Liver Transplantation
Authors: Graziano Oldani, Stephanie Lacotte, Lorenzo Orci, Philippe Morel, Gilles Mentha, Christian Toso.
Institutions: University of Geneva Hospitals, University of Pavia, University of Geneva.
Orthotopic rat liver transplantation is a popular model, recently demonstrated in a JoVE article using the "quick-linker" device. This technique allows for easier venous cuff anastomoses after a reasonable learning curve. The device is composed of two handles carved from scalpel blades, an approximator obtained by modifying Kocher's forceps, and cuffs made from fine-bore polyethylene tubing. The whole process can be performed at low cost using common laboratory materials. The present report provides a step-by-step protocol for the design of the required pieces and includes stencils.
Medicine, Issue 75, Biomedical Engineering, Bioengineering, Mechanical Engineering, Anatomy, Physiology, Surgery, Tissue Engineering, Liver Transplantation, Liver, transplantation, rat, quick-linker, orthotopic, graft, cuff, clinical techniques, animal model
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. Because ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was long confined to low-level phylogenetics. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11, accurately reannotated10. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold12 (direct fold) results in a correct, four-helix conformation. If this is not the case, the structure is predicted by homology modeling13, in which an already known secondary structure is transferred to an ITS2 sequence whose secondary structure could not be folded correctly by the direct approach. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures; it also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection, and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench reduces a first phylogenetic analysis to a few mouse clicks, while also providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Voltage Biasing, Cyclic Voltammetry, & Electrical Impedance Spectroscopy for Neural Interfaces
Authors: Seth J. Wilks, Tom J. Richner, Sarah K. Brodnick, Daryl R. Kipke, Justin C. Williams, Kevin J. Otto.
Institutions: Purdue University, University of Wisconsin-Madison, University of Michigan.
Electrical impedance spectroscopy (EIS) and cyclic voltammetry (CV) measure properties of the electrode-tissue interface without additional invasive procedures, and can be used to monitor electrode performance over the long term. EIS measures electrical impedance at multiple frequencies, and increases in impedance indicate increased glial scar formation around the device, while cyclic voltammetry measures the charge carrying capacity of the electrode, and indicates how charge is transferred at different voltage levels. As implanted electrodes age, EIS and CV data change, and electrode sites that previously recorded spiking neurons often exhibit significantly lower efficacy for neural recording. The application of a brief voltage pulse to implanted electrode arrays, known as rejuvenation, can bring back spiking activity on otherwise silent electrode sites for a period of time. Rejuvenation alters EIS and CV, and can be monitored by these complementary methods. Typically, EIS is measured daily as an indication of the tissue response at the electrode site. If spikes are absent in a channel that previously had spikes, then CV is used to determine the charge carrying capacity of the electrode site, and rejuvenation can be applied to improve the interface efficacy. CV and EIS are then repeated to check the changes at the electrode-tissue interface, and neural recordings are collected. The overall goal of rejuvenation is to extend the functional lifetime of implanted arrays.
Neuroscience, Issue 60, neuroprosthesis, electrode-tissue interface, rejuvenation, neural engineering, neuroscience, neural implant, electrode, brain-computer interface, electrochemistry
In Vitro Reconstitution of Light-harvesting Complexes of Plants and Green Algae
Authors: Alberto Natali, Laura M. Roy, Roberta Croce.
Institutions: VU University Amsterdam.
In plants and green algae, light is captured by the light-harvesting complexes (LHCs), a family of integral membrane proteins that coordinate chlorophylls and carotenoids. In vivo, these proteins are folded with pigments to form complexes that are inserted in the thylakoid membrane of the chloroplast. The high similarity in the chemical and physical properties of the members of the family, together with the fact that they can easily lose pigments during isolation, makes their purification in a native state challenging. An alternative approach to obtain homogeneous preparations of LHCs was developed in 1987 by Plumley and Schmidt1, who showed that it was possible to reconstitute these complexes in vitro starting from purified pigments and unfolded apoproteins, resulting in complexes with properties very similar to those of native complexes. This opened the way to the use of bacterially expressed recombinant proteins for in vitro reconstitution. The reconstitution method is powerful for several reasons: (1) pure preparations of individual complexes can be obtained, (2) pigment composition can be controlled to assess their contribution to structure and function, (3) recombinant proteins can be mutated to study the functional role of individual residues (e.g., pigment binding sites) or protein domains (e.g., protein-protein interaction, folding). This method has been optimized in several laboratories and applied to most of the light-harvesting complexes. The protocol described here details the method of reconstituting light-harvesting complexes in vitro currently used in our laboratory, and examples describing applications of the method are provided.
Biochemistry, Issue 92, Reconstitution, Photosynthesis, Chlorophyll, Carotenoids, Light Harvesting Protein, Chlamydomonas reinhardtii, Arabidopsis thaliana
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently, and of lower lexical and grammatical complexity, than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which deficits in social interaction account for or contribute to deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose among a number of alternatives. The latter two behaviors are also known to be challenging for children with ASD.7,12,13,16 We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the videos are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches it.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
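To make the dependent measures concrete, the sketch below summarizes a hypothetical frame-by-frame looking-code sequence into the two measures mentioned above (latency to look at the matching screen, and proportion of looking time to the match). The coding scheme and 30 frames/sec rate are assumptions for illustration, not the published coding protocol.

```python
# Each element codes where the child looked on one video frame (30 frames/sec assumed):
# 'M' = matching screen, 'N' = nonmatching screen, 'A' = away.
frames = ['A', 'A', 'N', 'M', 'M', 'M', 'N', 'M', 'M', 'A', 'M', 'M']
FRAME_RATE = 30.0

def latency_to_match(frames):
    """Seconds from trial onset to the first look at the matching screen."""
    for i, code in enumerate(frames):
        if code == 'M':
            return i / FRAME_RATE
    return None  # never looked at the match

def proportion_looking_to_match(frames):
    """Proportion of on-screen looking time spent on the matching screen."""
    on_screen = [c for c in frames if c in ('M', 'N')]
    return on_screen.count('M') / len(on_screen) if on_screen else float('nan')

print(latency_to_match(frames), proportion_looking_to_match(frames))
```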
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Authors: Mikhail Kislin, Ekaterina Mugantseva, Dmitry Molotkov, Natalia Kulesskaya, Stanislav Khirug, Ilya Kirilkin, Evgeny Pryazhnikov, Julia Kolikova, Dmytro Toptunov, Mikhail Yuryev, Rashid Giniatullin, Vootele Voikar, Claudio Rivera, Heikki Rauvala, Leonard Khiroug.
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopy data obtained from a living animal's brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that allow stable recordings from non-anesthetized, behaving mice are expected to advance the fields of cellular and cognitive neuroscience. Existing solutions range from simple physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described in which a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video-article describes the use of the awake-animal head-fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Production of Tissue Microarrays, Immunohistochemistry Staining and Digitalization Within the Human Protein Atlas
Authors: Caroline Kampf, IngMarie Olsson, Urban Ryberg, Evelina Sjöstedt, Fredrik Pontén.
Institutions: Uppsala University.
The tissue microarray (TMA) technology provides the means for high-throughput analysis of multiple tissues and cells. The technique is used within the Human Protein Atlas project for global analysis of protein expression patterns in normal human tissues, cancer, and cell lines. Here we present the assembly of 1 mm cores, retrieved from microscopically selected representative tissues, into a single recipient TMA block. The number and size of cores in a TMA block can be varied, from approximately forty 2 mm cores to hundreds of 0.6 mm cores. The advantage of using TMA technology is that large amounts of data can be obtained rapidly using a single immunostaining protocol, avoiding experimental variability. Importantly, only a limited amount of scarce tissue is needed, which allows for the analysis of large patient cohorts1,2. Approximately 250 consecutive sections (4 μm thick) can be cut from a TMA block and used for immunohistochemical staining to determine specific protein expression patterns for 250 different antibodies. In the Human Protein Atlas project, antibodies are generated towards all human proteins and used to acquire corresponding protein profiles in both normal human tissues from 144 individuals and cancer tissues from 216 different patients, representing the 20 most common forms of human cancer. Immunohistochemically stained TMA sections on glass slides are scanned to create high-resolution images from which pathologists can interpret and annotate the outcome of immunohistochemistry. Images, together with corresponding pathology-based annotation data, are made publicly available for the research community through the Human Protein Atlas portal (Figure 1)3,4. The Human Protein Atlas provides a map showing the distribution and relative abundance of proteins in the human body. The current version contains over 11 million images with protein expression data for 12,238 unique proteins, corresponding to more than 61% of all proteins encoded by the human genome.
Genetics, Issue 63, Immunology, Molecular Biology, tissue microarray, immunohistochemistry, slide scanning, the Human Protein Atlas, protein profiles
Acquiring Fluorescence Time-lapse Movies of Budding Yeast and Analyzing Single-cell Dynamics using GRAFTS
Authors: Christopher J. Zopf, Narendra Maheshri.
Institutions: Massachusetts Institute of Technology.
Fluorescence time-lapse microscopy has become a powerful tool for studying many biological processes at the single-cell level. In particular, movies depicting the temporal dependence of gene expression provide insight into the dynamics of its regulation; however, there are many technical challenges to obtaining and analyzing fluorescence movies of single cells. We describe here a simple protocol using a commercially available microfluidic culture device to generate such data, and a MATLAB-based software package with a graphical user interface (GUI) to quantify the fluorescence images. The software segments and tracks cells, enables the user to visually curate errors in the data, and automatically assigns lineage and division times. The GUI further analyzes the time series to produce whole-cell traces as well as their first and second time derivatives. While the software was designed for S. cerevisiae, its modularity and versatility should allow it to serve as a platform for studying other cell types with few modifications.
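The derivative step mentioned above is straightforward to reproduce numerically; the sketch below computes first and second time derivatives of a synthetic single-cell trace. The published GRAFTS package is MATLAB-based, so this Python/NumPy version is only an illustration of that analysis step, not the authors' code.

```python
import numpy as np

# Hypothetical single-cell fluorescence trace sampled every 10 min.
t = np.arange(0, 600, 10.0)                       # minutes
f = 100.0 / (1.0 + np.exp(-(t - 300.0) / 40.0))   # a sigmoidal induction curve

# First and second time derivatives of the whole-cell trace.
df_dt   = np.gradient(f, t)
d2f_dt2 = np.gradient(df_dt, t)

# e.g. time of maximal induction rate:
print("peak rate at t =", t[np.argmax(df_dt)], "min")
```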
Microbiology, Issue 77, Cellular Biology, Molecular Biology, Genetics, Biophysics, Saccharomyces cerevisiae, Microscopy, Fluorescence, Cell Biology, microscopy/fluorescence and time-lapse, budding yeast, gene expression dynamics, segmentation, lineage tracking, image tracking, software, yeast, cells, imaging
Osteopathic Manipulative Treatment as a Useful Adjunctive Tool for Pneumonia
Authors: Sheldon Yao, John Hassani, Martin Gagne, Gebe George, Wolfgang Gilliar.
Institutions: New York Institute of Technology College of Osteopathic Medicine.
Pneumonia, the inflammatory state of lung tissue primarily due to microbial infection, claimed 52,306 lives1 in the United States in 2007 and resulted in the hospitalization of 1.1 million patients2. With an average in-patient hospital stay of five days2, pneumonia and influenza impose a significant financial burden, costing the United States $40.2 billion3 in 2005. Under the current Infectious Diseases Society of America/American Thoracic Society guidelines, standard-of-care recommendations include the rapid administration of an appropriate antibiotic regimen, fluid replacement, and ventilation (if necessary). Non-standard therapies include the use of corticosteroids and statins; however, these therapies lack conclusive supporting evidence4 (Figure 1). Osteopathic Manipulative Treatment (OMT) is a cost-effective adjunctive treatment of pneumonia that has been shown to reduce patients' length of hospital stay, duration of intravenous antibiotics, and incidence of respiratory failure or death when compared to subjects who received conventional care alone5. The use of manual manipulation techniques for pneumonia was first recorded as early as the Spanish influenza pandemic of 1918, when patients treated with standard medical care had an estimated mortality rate of 33%, compared to a 10% mortality rate in patients treated by osteopathic physicians6. When applied to the management of pneumonia, manual manipulation techniques bolster lymphatic flow, respiratory function, and immunological defense by targeting anatomical structures involved in these systems7-10. The objective of this review video-article is threefold: a) summarize the findings of randomized controlled studies on the efficacy of OMT in adult patients with diagnosed pneumonia, b) demonstrate established protocols utilized by osteopathic physicians treating pneumonia, and c) elucidate the physiological mechanisms behind manual manipulation of the respiratory and lymphatic systems. Specifically, we will discuss and demonstrate four routine techniques5,11 that address autonomics, lymph drainage, and rib cage mobility: 1) Rib Raising, 2) Thoracic Pump, 3) Doming of the Thoracic Diaphragm, and 4) Muscle Energy for Rib 1.
Medicine, Issue 87, Pneumonia, osteopathic manipulative medicine (OMM) and techniques (OMT), lymphatic, rib raising, thoracic pump, muscle energy, doming diaphragm, alternative treatment
High-throughput Analysis of Mammalian Olfactory Receptors: Measurement of Receptor Activation via Luciferase Activity
Authors: Casey Trimmer, Lindsey L. Snyder, Joel D. Mainland.
Institutions: Monell Chemical Senses Center.
Odorants create unique and overlapping patterns of olfactory receptor activation, allowing a family of approximately 1,000 murine and 400 human receptors to recognize thousands of odorants. Odorant ligands have been published for fewer than 6% of human receptors1-11. This lack of data is due in part to difficulties functionally expressing these receptors in heterologous systems. Here, we describe a method for expressing the majority of the olfactory receptor family in Hana3A cells, followed by high-throughput assessment of olfactory receptor activation using a luciferase reporter assay. This assay can be used to (1) screen panels of odorants against panels of olfactory receptors; (2) confirm odorant/receptor interaction via dose response curves; and (3) compare receptor activation levels among receptor variants. In our sample data, 328 olfactory receptors were screened against 26 odorants. Odorant/receptor pairs with varying response scores were selected and tested in dose response. These data indicate that a screen is an effective method to enrich for odorant/receptor pairs that will pass a dose response experiment, i.e. receptors that have a bona fide response to an odorant. Therefore, this high-throughput luciferase assay is an effective method to characterize olfactory receptors—an essential step toward a model of odor coding in the mammalian olfactory system.
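Since the workflow above confirms screening hits with dose-response curves, a brief sketch of fitting such a curve may be useful. This is not the authors' analysis code; it fits a generic four-parameter Hill (logistic) equation to hypothetical normalized luciferase responses using SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, log_ec50, n):
    """Four-parameter logistic dose-response curve (EC50 fit on a log10 scale)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (n * (log_ec50 - np.log10(conc))))

# Hypothetical normalized luciferase responses at seven odorant concentrations (M).
conc = np.array([1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6, 1e-5])
resp = np.array([0.02, 0.05, 0.15, 0.40, 0.75, 0.92, 0.98])

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, -6.5, 1.0])
bottom, top, log_ec50, n = params
print(f"EC50 = {10 ** log_ec50:.2e} M, Hill slope = {n:.2f}")
```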
Neuroscience, Issue 88, Firefly luciferase, Renilla Luciferase, Dual-Glo Luciferase Assay, olfaction, Olfactory receptor, Odorant, GPCR, High-throughput
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, because EEG recordings are easy to apply and inexpensive, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
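Source reconstruction of the kind described here often uses a minimum-norm estimate (as noted in the keywords below); the core linear step can be sketched in a few lines. The lead-field matrix, regularization value, and toy dimensions are placeholders, and real pipelines, including the one above, add noise and depth weighting on top of individual head models.

```python
import numpy as np

def minimum_norm_estimate(m, G, lam=0.1):
    """Classic (L2) minimum-norm inverse: estimate source amplitudes s from
    sensor data m given a lead-field matrix G (n_channels x n_sources):

        s_hat = G.T @ inv(G @ G.T + lam * I) @ m
    """
    n_chan = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_chan), m)

# Toy example: 64 channels, 500 candidate cortical sources, one time sample.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 500))
m = rng.standard_normal(64)
print(minimum_norm_estimate(m, G).shape)   # (500,)
```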
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
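The recasting recipe above boils down to a simple event-counting comparison per signal region. The sketch below shows that arithmetic with made-up numbers; the actual procedure described in the article additionally handles uncertainties and combines signal regions.

```python
def expected_signal_events(lumi_ifb, xsec_fb, acceptance, efficiency):
    """Expected signal yield in one signal region:
       N = integrated luminosity x cross section x acceptance x efficiency."""
    return lumi_ifb * xsec_fb * acceptance * efficiency

def is_excluded(n_expected, n95_observed):
    """A model point is (conservatively) excluded if its expected yield exceeds
    the observed 95% CL upper limit on signal events in that region."""
    return n_expected > n95_observed

# Hypothetical numbers for one model point and one signal region.
n_sig = expected_signal_events(lumi_ifb=20.3, xsec_fb=5.0,
                               acceptance=0.12, efficiency=0.80)
print(n_sig, is_excluded(n_sig, n95_observed=6.8))
```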
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon, the attentional blink (AB), has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical AB procedure it is not possible to examine how the processing of non-target items in the RSVP stream is affected by attention. This paper describes a novel dual-task procedure combined with RSVP to test effects of AB on nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants responded 'yes' at the end of the sequence; otherwise they responded 'no'. Two 2-alternative forced-choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. A second exemplar experiment used the same design, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec, and 2) three memory tasks followed the sequence and tested memory for nontarget nouns anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that the procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicate the previous finding.
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Double Whole Mount in situ Hybridization of Early Chick Embryos
Authors: Delphine Psychoyos, Richard Finnell.
Institutions: Institute of Biosciences and Technology - Texas A&M Health Science Center, Texas A&M University (TAMU).
The chick embryo is a valuable tool in the study of early embryonic development. Its transparency, accessibility, and ease of manipulation make it an ideal system for studying gene expression in brain, neural tube, somite, and heart primordia formation. This video demonstrates the steps of 2-color whole mount in situ hybridization. First, the embryo is dissected from the egg and fixed in paraformaldehyde. Second, the embryo is processed for prehybridization. The embryo is then hybridized with two different probes, one coupled to DIG and one coupled to FITC. Following overnight hybridization, the embryo is incubated with a DIG-coupled antibody. The color reaction for the DIG substrate is performed, and the region of interest appears blue. The embryo is then incubated with a FITC-coupled antibody and processed for the FITC color reaction, and the region of interest appears red. Finally, the embryo is fixed and processed for photography and sectioning. A troubleshooting guide is also presented.
Developmental Biology, Issue 20, whole mount in situ hybridization, gene expression, chick embryo
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design; however, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
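For readers unfamiliar with DoE, the sketch below builds a two-level full factorial design for three coded factors and estimates main effects and two-factor interactions by least squares. The factor labels, responses, and the use of a full (rather than fractional or software-optimized) design are illustrative assumptions, not the design used in the study.

```python
import itertools
import numpy as np

# Two-level full factorial design for three coded factors (-1 / +1),
# e.g. incubation temperature, plant age, and a construct element (names illustrative).
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical measured responses (e.g. relative protein yield) for the 8 runs.
y = np.array([4.1, 5.0, 4.6, 6.2, 3.9, 5.1, 4.8, 6.9])

# Model matrix with intercept, main effects, and two-factor interactions.
A, B, C = design.T
X = np.column_stack([np.ones(len(y)), A, B, C, A * B, A * C, B * C])

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, c in zip(["mean", "A", "B", "C", "AB", "AC", "BC"], coeffs):
    print(f"{name:>4}: {c:+.3f}")
```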
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Barnes Maze Testing Strategies with Small and Large Rodent Models
Authors: Cheryl S. Rosenfeld, Sherry A. Ferguson.
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory in laboratory rodents are often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). The type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time-consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations or by drug/toxicant exposure.
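As an illustration of the tracking-derived endpoints mentioned above, the sketch below computes path length and mean speed from x/y coordinates. The 30 Hz frame rate and the random-walk coordinates are placeholders, not output from any particular tracking package.

```python
import numpy as np

def track_metrics(x, y, fps):
    """Distance travelled (same units as x/y) and mean velocity from a tracked path."""
    steps = np.hypot(np.diff(x), np.diff(y))
    distance = steps.sum()
    duration_s = (len(x) - 1) / fps
    return distance, distance / duration_s

# Hypothetical 30 Hz tracking coordinates (cm) from one Barnes maze trial (~30 s).
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(0, 0.5, 900))
y = np.cumsum(rng.normal(0, 0.5, 900))
dist_cm, vel_cm_s = track_metrics(x, y, fps=30)
print(f"path length: {dist_cm:.1f} cm, mean speed: {vel_cm_s:.2f} cm/s")
```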
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x the winter baseline due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes, where they may contribute to eutrophication. A 1,000 m2 facility was constructed that consists of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volume over time and for collecting runoff subsamples at selected intervals to quantify chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variation (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values of less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to capture much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
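The uniformity statistic reported above is the coefficient of variation; the sketch below shows that calculation on hypothetical runoff volumes from the 24 plots (the values are invented for illustration).

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean x 100."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Hypothetical runoff volumes (L) from the 24 plots in one simulated-rainfall trial.
runoff_l = [410, 385, 502, 298, 441, 390, 365, 450, 310, 480, 405, 372,
            455, 330, 498, 420, 360, 395, 470, 345, 415, 380, 440, 400]
print(f"CV = {coefficient_of_variation(runoff_l):.1f}%")
```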
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Researchers across incredibly diverse fields are applying phylogenetics to their research questions. However, many of them are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
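As a complement to the tools covered in the article, the short Biopython sketch below builds a quick distance-based (neighbor-joining) tree from an existing alignment. The file name is a placeholder, and neighbor joining is used here only because it needs the least setup; the article itself emphasizes maximum likelihood and Bayesian inference, which require dedicated software.

```python
# Minimal neighbor-joining example with Biopython; the alignment file is a placeholder.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Load an existing multiple sequence alignment (e.g. produced by MUSCLE or MAFFT).
alignment = AlignIO.read("my_alignment.fasta", "fasta")

# Pairwise distances (simple identity model) and a neighbor-joining tree.
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)

# Quick text visualization; dedicated viewers give nicer output.
Phylo.draw_ascii(tree)
```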
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Manual Isolation of Adipose-derived Stem Cells from Human Lipoaspirates
Authors: Min Zhu, Sepideh Heydarkhan-Hagvall, Marc Hedrick, Prosper Benhaim, Patricia Zuk.
Institutions: Cytori Therapeutics Inc, David Geffen School of Medicine at UCLA.
In 2001, researchers at the University of California, Los Angeles, described the isolation of a new population of adult stem cells from liposuctioned adipose tissue, which they initially termed Processed Lipoaspirate (PLA) cells. Since then, these stem cells have been renamed Adipose-derived Stem Cells (ASCs) and have gone on to become one of the most popular adult stem cell populations in the fields of stem cell research and regenerative medicine. Thousands of articles now describe the use of ASCs in a variety of regenerative animal models, including bone regeneration, peripheral nerve repair, and cardiovascular engineering. Recent articles have begun to describe the myriad uses of ASCs in the clinic. The protocol shown in this article outlines the basic procedure for manually and enzymatically isolating ASCs from large volumes of lipoaspirate obtained from cosmetic procedures. This protocol can easily be scaled up or down to accommodate the volume of lipoaspirate and can be adapted to isolate ASCs from fat tissue obtained through abdominoplasties and other similar procedures.
Cellular Biology, Issue 79, Adipose Tissue, Stem Cells, Humans, Cell Biology, biology (general), enzymatic digestion, collagenase, cell isolation, Stromal Vascular Fraction (SVF), Adipose-derived Stem Cells, ASCs, lipoaspirate, liposuction
Improving IV Insulin Administration in a Community Hospital
Authors: Michael C. Magee.
Institutions: Wyoming Medical Center.
Diabetes mellitus is a major independent risk factor for increased morbidity and mortality in the hospitalized patient, and elevated blood glucose concentrations, even in non-diabetic patients, predict poor outcomes.1-4 The 2008 consensus statement by the American Association of Clinical Endocrinologists (AACE) and the American Diabetes Association (ADA) states that "hyperglycemia in hospitalized patients, irrespective of its cause, is unequivocally associated with adverse outcomes."5 It is important to recognize that hyperglycemia occurs in patients with known or undiagnosed diabetes as well as during acute illness in those with previously normal glucose tolerance. The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study involved over six thousand adult intensive care unit (ICU) patients who were randomized to intensive or conventional glucose control.6 Surprisingly, this trial found that intensive glucose control increased the risk of mortality by 14% (odds ratio, 1.14; p=0.02). In addition, there was an increased prevalence of severe hypoglycemia in the intensive control group compared with the conventional control group (6.8% vs. 0.5%, respectively; p<0.001). From this pivotal trial and two others,7,8 Wyoming Medical Center (WMC) recognized the importance of controlling hyperglycemia in the hospitalized patient while avoiding the negative impact of resultant hypoglycemia. Despite multiple revisions of an IV insulin paper protocol, analysis of data from use of the paper protocol at WMC showed that results were suboptimal in terms of achieving normoglycemia while minimizing hypoglycemia. Therefore, through a systematic implementation plan, monitoring of patient blood glucose levels was switched from a paper IV insulin protocol to a computerized glucose management system. By comparing blood glucose levels under the paper protocol to those under the computerized system, it was determined that, overall, the computerized glucose management system resulted in more rapid and tighter glucose control than the traditional paper protocol. Specifically, a substantial increase in the time spent within the target blood glucose concentration range, as well as a decrease in the prevalence of severe hypoglycemia (BG < 40 mg/dL), clinical hypoglycemia (BG < 70 mg/dL), and hyperglycemia (BG > 180 mg/dL), was observed in the first five months after implementation of the computerized glucose management system. The computerized system achieved target concentrations in greater than 75% of all readings while minimizing the risk of hypoglycemia. The prevalence of hypoglycemia (BG < 70 mg/dL) with the computerized glucose management system was well under 1%.
Medicine, Issue 64, Physiology, Computerized glucose management, Endotool, hypoglycemia, hyperglycemia, diabetes, IV insulin, paper protocol, glucose control
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we use the web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif-finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT); PRISM10, which finds degenerate motifs (ASCGWT); and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif, and together they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the remaining genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also, optionally, by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
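To give a feel for the over-representation idea behind this kind of scoring, the sketch below computes a simple hypergeometric enrichment p-value for a motif within a set of coregulated genes. This is not SCOPE's actual scoring function (which also weighs motif position preference), and the counts are invented.

```python
from scipy.stats import hypergeom

def over_representation_p(genome_genes, genome_with_motif, set_genes, set_with_motif):
    """P(observing >= set_with_motif motif-containing genes in a set of set_genes
    genes drawn from a genome in which genome_with_motif of genome_genes contain it)."""
    return hypergeom.sf(set_with_motif - 1, genome_genes, genome_with_motif, set_genes)

# Hypothetical counts: motif present in 300 of 6,000 promoters genome-wide,
# and in 12 of the 25 coregulated genes being analyzed.
print(over_representation_p(6000, 300, 25, 12))
```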
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.
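As a rough illustration of how abstract-to-video matching of this kind can work, the sketch below ranks a few toy video descriptions against a PubMed-style abstract using TF-IDF and cosine similarity. This is a generic text-similarity approach, not JoVE's actual matching algorithm, and all texts here are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_descriptions = [
    "Measuring eye movements during reading to study text comprehension.",
    "Casein-based assay for non-specific protease activity.",
    "Barnes maze testing of spatial learning and memory in rodents.",
]
pubmed_abstract = "We tracked readers' eye fixations to probe online text comprehension."

# Build TF-IDF vectors for the video descriptions plus the abstract.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(video_descriptions + [pubmed_abstract])

# Similarity of the abstract (last row) to every video description.
abstract_vec = matrix[len(video_descriptions)]
scores = cosine_similarity(abstract_vec, matrix[: len(video_descriptions)]).ravel()

for score, desc in sorted(zip(scores, video_descriptions), reverse=True):
    print(f"{score:.2f}  {desc}")
```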