JoVE Visualize
Related JoVE Video
 
Pubmed Article
Figure text extraction in biomedical literature.
PLoS ONE
PUBLISHED: 01-13-2011
Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures.
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Published: 01-10-2014
ABSTRACT
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
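To make the three eye movement measures named above concrete, here is a minimal sketch that summarizes a fixation record; the data are fabricated for illustration and the record format (word index, duration) is a simplifying assumption, not the output of any particular eye tracker.

    # Each fixation: (index of fixated word, fixation duration in ms) -- toy data.
    fixations = [(1, 220), (2, 180), (3, 250), (2, 190), (4, 210)]

    n_fixations = len(fixations)
    mean_duration = sum(d for _, d in fixations) / n_fixations
    # A regression is a fixation that returns to an earlier word in the text.
    regressions = sum(1 for (w0, _), (w1, _) in zip(fixations, fixations[1:]) if w1 < w0)
    print(n_fixations, "fixations,", round(mean_duration, 1), "ms mean,", regressions, "regression(s)")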
26 Related JoVE Articles!
Sigma's Non-specific Protease Activity Assay - Casein as a Substrate
Authors: Carrie Cupp-Enyard.
Institutions: Sigma Aldrich.
Proteases break peptide bonds. In the lab, it is often necessary to measure and/or compare the activity of proteases. Sigma's non-specific protease activity assay may be used as a standardized procedure to determine the activity of proteases, and it is the procedure we use in our quality control. In this assay, casein acts as the substrate. When the protease being tested digests casein, the amino acid tyrosine is liberated along with other amino acids and peptide fragments. Folin & Ciocalteu's phenol reagent (Folin's reagent) reacts primarily with free tyrosine to produce a blue chromophore, which is quantified as an absorbance value on the spectrophotometer. The more tyrosine released from casein, the more chromophore is generated and the stronger the activity of the protease. Absorbance values generated by the protease are compared to a standard curve, generated by reacting known quantities of tyrosine with the Folin & Ciocalteu reagent, to correlate changes in absorbance with the amount of tyrosine in micromoles. From the standard curve, the activity of protease samples can be determined in Units, where one Unit is the amount of enzyme that releases one micromole of tyrosine equivalents from casein per minute.
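To make the final unit calculation concrete, here is a minimal sketch of the standard-curve arithmetic. All absorbance values, volumes, and times below are hypothetical placeholders, not Sigma's published assay parameters; substitute the values from your own run.

    import numpy as np

    # Hypothetical tyrosine standard curve: known amounts (umol) vs. measured A660.
    std_umol = np.array([0.0, 0.055, 0.111, 0.221, 0.442])
    std_a660 = np.array([0.0, 0.048, 0.102, 0.199, 0.398])
    slope, intercept = np.polyfit(std_umol, std_a660, 1)  # linear fit: A = m*umol + b

    def units_per_ml(sample_a660, total_vol_ml=11.0, enzyme_vol_ml=1.0,
                     time_min=10.0, colorimetric_vol_ml=2.0):
        """One Unit releases 1 umol of tyrosine equivalents from casein per minute."""
        umol_tyrosine = (sample_a660 - intercept) / slope   # invert the standard curve
        return (umol_tyrosine * total_vol_ml) / (enzyme_vol_ml * time_min * colorimetric_vol_ml)

    print(f"{units_per_ml(0.210):.3f} Units/ml")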
biochemistry, Issue 19, protease, casein, quality control assay, folin and ciocalteu's reagent, folin's reagent, colorimetric detection, spectrophotometer, Sigma-Aldrich
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly, while the reader reads text as he or she normally would, without explicit computer-directed training. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia; we claim only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Analysis of Fatty Acid Content and Composition in Microalgae
Authors: Guido Breuer, Wendy A. C. Evers, Jeroen H. de Vree, Dorinde M. M. Kleinegris, Dirk E. Martens, René H. Wijffels, Packo P. Lamers.
Institutions: Wageningen University and Research Center.
A method to determine the content and composition of total fatty acids present in microalgae is described. Fatty acids are a major constituent of microalgal biomass. These fatty acids can be present in different acyl-lipid classes. The fatty acids present in triacylglycerol (TAG) are of particular commercial interest, because they can be used for the production of transportation fuels, bulk chemicals, nutraceuticals (ω-3 fatty acids), and food commodities. To develop commercial applications, reliable analytical methods for quantification of fatty acid content and composition are needed. Microalgae are single cells surrounded by a rigid cell wall. A fatty acid analysis method should provide sufficient cell disruption to liberate all acyl lipids, and the extraction procedure used should be able to extract all acyl-lipid classes. With the method presented here, all fatty acids present in microalgae can be accurately and reproducibly identified and quantified using small amounts of sample (5 mg), independent of their chain length, degree of unsaturation, or the lipid class they are part of. This method does not provide information about the relative abundance of different lipid classes, but it can be extended to separate lipid classes from each other. The method is based on a sequence of mechanical cell disruption, solvent-based lipid extraction, transesterification of fatty acids to fatty acid methyl esters (FAMEs), and quantification and identification of FAMEs using gas chromatography with flame ionization detection (GC-FID). A TAG internal standard (tripentadecanoin) is added prior to the analytical procedure to correct for losses during extraction and incomplete transesterification.
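As a hedged illustration of the internal-standard correction, the sketch below quantifies one fatty acid from GC-FID peak areas relative to the C15:0 FAME derived from tripentadecanoin; the peak areas, masses, and response factor are hypothetical.

    def fatty_acid_mg_per_g(area_fa, area_is, mass_is_mg, biomass_g, rrf=1.0):
        """Amount of one fatty acid relative to the internal standard (IS).

        rrf: relative FID response factor of this FAME vs. the C15:0 FAME.
        FID response scales roughly with carbon mass, so rrf is often near 1.
        """
        return (area_fa / area_is) * mass_is_mg * rrf / biomass_g

    # Hypothetical: C16:0 peak area 8.2e5, IS peak area 5.0e5,
    # 0.25 mg tripentadecanoin added to 5 mg (0.005 g) of algal biomass.
    print(fatty_acid_mg_per_g(8.2e5, 5.0e5, 0.25, 0.005), "mg C16:0 per g biomass")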
Environmental Sciences, Issue 80, chemical analysis techniques, Microalgae, fatty acid, triacylglycerol, lipid, gas chromatography, cell disruption
High-throughput, Automated Extraction of DNA and RNA from Clinical Samples using TruTip Technology on Common Liquid Handling Robots
Authors: Rebecca C. Holmberg, Alissa Gindlesperger, Tinsley Stokes, Dane Brady, Nitu Thakore, Philip Belgrader, Christopher G. Cooney, Darrell P. Chandler.
Institutions: Akonni Biosystems, Inc.
TruTip is a simple nucleic acid extraction technology whereby a porous, monolithic binding matrix is inserted into a pipette tip. The geometry of the monolith can be adapted for specific pipette tips ranging in volume from 1.0 to 5.0 ml. The large porosity of the monolith enables viscous or complex samples to readily pass through it with minimal fluidic backpressure. Bi-directional flow maximizes residence time between the monolith and sample, and enables large sample volumes to be processed within a single TruTip. The fundamental steps, irrespective of sample volume or TruTip geometry, include cell lysis, nucleic acid binding to the inner pores of the TruTip monolith, washing away unbound sample components and lysis buffers, and eluting purified and concentrated nucleic acids into an appropriate buffer. The attributes and adaptability of TruTip are demonstrated in three automated clinical sample processing protocols: RNA isolation from nasopharyngeal aspirate on an Eppendorf epMotion 5070, genomic DNA isolation from whole blood on a Hamilton STAR, and fetal DNA extraction and enrichment from large volumes of maternal plasma on a Hamilton STARplus.
Genetics, Issue 76, Bioengineering, Biomedical Engineering, Molecular Biology, Automation, Laboratory, Clinical Laboratory Techniques, Molecular Diagnostic Techniques, Analytic Sample Preparation Methods, Clinical Laboratory Techniques, Molecular Diagnostic Techniques, Genetic Techniques, Molecular Diagnostic Techniques, Automation, Laboratory, Chemistry, Clinical, DNA/RNA extraction, automation, nucleic acid isolation, sample preparation, nasopharyngeal aspirate, blood, plasma, high-throughput, sequencing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
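For orientation, the sketch below enumerates a toy two-level full-factorial design; the factors and levels are hypothetical, and note that the approach described above uses software-guided selection of optimal experiment combinations and step-wise augmentation rather than running every combination.

    from itertools import product

    # Hypothetical two-level screening factors for transient expression.
    factors = {
        "promoter": ["35S", "alternative"],
        "plant_age_days": [35, 49],
        "incubation_temp_C": [22, 25],
    }

    # Full factorial: every combination of factor levels (2 x 2 x 2 = 8 runs).
    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    for run in runs:
        print(run)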
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Multi-step Preparation Technique to Recover Multiple Metabolite Compound Classes for In-depth and Informative Metabolomic Analysis
Authors: Charmion Cruickshank-Quinn, Kevin D. Quinn, Roger Powell, Yanhui Yang, Michael Armstrong, Spencer Mahaffey, Richard Reisdorph, Nichole Reisdorph.
Institutions: National Jewish Health, University of Colorado Denver.
Metabolomics is an emerging field that enables the profiling of samples from living organisms in order to obtain insight into biological processes. A vital aspect of metabolomics is sample preparation: inconsistent techniques generate unreliable results. The technique described here encompasses protein precipitation, liquid-liquid extraction, and solid-phase extraction as a means of fractionating metabolites into four distinct classes. It improves enrichment of low-abundance molecules, with a resulting increase in sensitivity, and ultimately yields more confident identification of molecules. The technique has been applied to plasma, bronchoalveolar lavage fluid, and cerebrospinal fluid samples with volumes as low as 50 µl. Samples can be used for multiple downstream applications; for example, the pellet resulting from protein precipitation can be stored for later analysis. The supernatant from that step undergoes liquid-liquid extraction using water and a strong organic solvent to separate the hydrophilic and hydrophobic compounds. Once fractionated, the hydrophilic layer can be processed for later analysis or discarded if not needed. The hydrophobic fraction is further treated with a series of solvents during three solid-phase extraction steps to separate it into fatty acids, neutral lipids, and phospholipids. This gives the technician the flexibility to choose which class of compounds is preferred for analysis. It also aids more reliable metabolite identification, since some knowledge of chemical class exists.
Bioengineering, Issue 89, plasma, chemistry techniques, analytical, solid phase extraction, mass spectrometry, metabolomics, fluids and secretions, profiling, small molecules, lipids, liquid chromatography, liquid-liquid extraction, cerebrospinal fluid, bronchoalveolar lavage fluid
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Authors: Justen Manasa, Siva Danaviah, Sureshnee Pillay, Prevashinee Padayachee, Hloniphile Mthiyane, Charity Mkhize, Richard John Lessells, Christopher Seebregts, Tobias F. Rinke de Wit, Johannes Viljoen, David Katzenstein, Tulio De Oliveira.
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource-limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open-access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open-source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the ViroSeq genotyping method. Limitations of the method described here include the fact that it is not automated and that it failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Insertion of Flexible Neural Probes Using Rigid Stiffeners Attached with Biodissolvable Adhesive
Authors: Sarah H. Felix, Kedar G. Shah, Vanessa M. Tolosa, Heeral J. Sheth, Angela C. Tooker, Terri L. Delima, Shantanu P. Jadhav, Loren M. Frank, Satinderpall S. Pannu.
Institutions: Lawrence Livermore National Laboratory, University of California, San Francisco.
Microelectrode arrays for neural interface devices that are made of biocompatible thin-film polymer are expected to have extended functional lifetime because the flexible material may minimize adverse tissue response caused by micromotion. However, their flexibility prevents them from being accurately inserted into neural tissue. This article demonstrates a method to temporarily attach a flexible microelectrode probe to a rigid stiffener using biodissolvable polyethylene glycol (PEG) to facilitate precise, surgical insertion of the probe. A unique stiffener design allows for uniform distribution of the PEG adhesive along the length of the probe. Flip-chip bonding, a common tool used in microelectronics packaging, enables accurate and repeatable alignment and attachment of the probe to the stiffener. The probe and stiffener are surgically implanted together; the PEG is then allowed to dissolve so that the stiffener can be extracted, leaving the probe in place. Finally, an in vitro test method is used to evaluate stiffener extraction in an agarose gel model of brain tissue. This approach to implantation has proven particularly advantageous for longer flexible probes (>3 mm). It also provides a feasible method to implant dual-sided flexible probes. To date, the technique has been used to obtain various in vivo recording data from the rat cortex.
Bioengineering, Issue 79, Nervous System Diseases, Surgical Procedures, Operative, Investigative Techniques, Nonmetallic Materials, Engineering (General), neural interfaces, polymer neural probes, surgical insertion, polyethylene glycol, microelectrode arrays, chronic implantation
Cerebrospinal Fluid MicroRNA Profiling Using Quantitative Real Time PCR
Authors: Marco Pacifici, Serena Delbue, Ferdous Kadri, Francesca Peruzzi.
Institutions: LSU Health Sciences Center, University of Milan.
MicroRNAs (miRNAs) constitute a potent layer of gene regulation by guiding RISC to target sites located on mRNAs and, consequently, modulating their translational repression. Changes in miRNA expression have been shown to be involved in the development of all major complex diseases. Furthermore, recent findings have shown that miRNAs can be secreted into the extracellular environment and enter the bloodstream and other body fluids, where they circulate with high stability. The function of such circulating miRNAs remains largely elusive, but systematic high-throughput approaches, such as miRNA profiling arrays, have led to the identification of miRNA signatures in several pathological conditions, including neurodegenerative disorders and several types of cancer. In this context, the identification of miRNA expression profiles in the cerebrospinal fluid, as reported in our recent study, makes miRNAs attractive candidates for biomarker analysis. There are several tools available for profiling microRNAs, such as microarrays, quantitative real-time PCR (qPCR), and deep sequencing. Here, we describe a sensitive method to profile microRNAs in cerebrospinal fluid by quantitative real-time PCR. We used the Exiqon microRNA ready-to-use PCR human panels I and II V2.R, which allow detection of 742 unique human microRNAs. We performed the arrays in triplicate runs, and we processed and analyzed the data using the GenEx Professional 5 software. Using this protocol, we have successfully profiled microRNAs in various types of cell lines and primary cells, CSF, plasma, and formalin-fixed paraffin-embedded tissues.
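For readers unfamiliar with the downstream arithmetic, here is a minimal sketch of one common normalization strategy for panel-wide miRNA qPCR data: global-mean normalization followed by conversion to linear relative quantities. It illustrates the general idea only, not the GenEx workflow used in the article; the Cq values are hypothetical and ~100% PCR efficiency is assumed.

    import numpy as np

    cq = {"miR-21": 24.1, "miR-124": 29.8, "miR-16": 22.5}   # hypothetical Cq values
    global_mean = np.mean(list(cq.values()))

    # 2^(mean - Cq): values >1 mean more abundant than the panel average.
    relative_quantity = {m: 2.0 ** (global_mean - c) for m, c in cq.items()}
    for mirna, rq in relative_quantity.items():
        print(f"{mirna}: {rq:.2f}x panel average")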
Medicine, Issue 83, microRNAs, biomarkers, miRNA profiling, qPCR, cerebrospinal fluid, RNA, DNA
Heterotopic Heart Transplantation in Mice
Authors: Fengchun Liu, Sang Mo Kang.
Institutions: University of California, San Francisco - UCSF.
The mouse heterotopic heart transplantation model has been used widely since it was introduced by Drs. Corry and Russell in 1973. It is particularly valuable for studying rejection and immune responses now that newer transgenic and gene-knockout mice are available and a large number of immunologic reagents have been developed. The heart transplant model is less stringent than skin transplant models, although technically more challenging. We have developed a modified technique and have completed over 1,000 successful cases of heterotopic heart transplantation in mice. When making the anastomosis of the ascending aorta and abdominal aorta, two stay sutures are placed at the proximal and distal apexes of the recipient abdominal aorta with the donor's ascending aorta, and the anastomosis is then completed on both sides of the aorta with continuous 11-0 sutures. The stay sutures make the anastomosis easier, and 11-0 is an ideal suture size to avoid bleeding and thrombosis. When making the anastomosis of the pulmonary artery and inferior vena cava, two stay sutures are placed at the proximal and distal apexes of the recipient's inferior vena cava with the donor's pulmonary artery. The left walls of the inferior vena cava and the donor's pulmonary artery are closed with a continuous suture from inside the inferior vena cava; after one knot with the proximal apex stay suture, the right walls of the inferior vena cava and the donor's pulmonary artery are closed with a continuous suture outside the inferior vena cava, using 10-0 sutures. This method is easier to perform because the anastomosis is made from just one side of the inferior vena cava, and 10-0 is the right suture size to avoid bleeding and thrombosis. In this article, we provide details of the technique to supplement the video.
Developmental Biology, Issue 6, Microsurgical Techniques, Heart Transplant, Allograft Rejection Model
Contextual and Cued Fear Conditioning Test Using a Video Analyzing System in Mice
Authors: Hirotaka Shoji, Keizo Takao, Satoko Hattori, Tsuyoshi Miyakawa.
Institutions: Fujita Health University, Core Research for Evolutionary Science and Technology (CREST), National Institutes of Natural Sciences.
The contextual and cued fear conditioning test is one of the behavioral tests that assess the ability of mice to learn and remember an association between environmental cues and aversive experiences. In this test, mice are placed into a conditioning chamber and are given pairings of a conditioned stimulus (an auditory cue) and an aversive unconditioned stimulus (an electric footshock). After a delay, the mice are re-exposed to the same conditioning chamber and to a differently shaped chamber with presentation of the auditory cue. Freezing behavior during the test is measured as an index of fear memory. To analyze the behavior automatically, we have developed a video analyzing system using the ImageFZ application software program, which is available as a free download at http://www.mouse-phenotype.org/. Here, to show the details of our protocol, we demonstrate our procedure for the contextual and cued fear conditioning test in C57BL/6J mice using the ImageFZ system. In addition, we validated our protocol and the performance of the video analyzing system by comparing freezing time measured by the ImageFZ system or a photobeam-based computer measurement system with that scored by a human observer. As shown in our representative results, the data obtained by ImageFZ were similar to those analyzed by a human observer, indicating that behavioral analysis using the ImageFZ system is highly reliable. The present movie article provides detailed information regarding the test procedures and will promote understanding of the experimental situation.
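To illustrate the general principle behind video-based freezing detection (not the ImageFZ implementation itself), the sketch below scores a frame as "freezing" when almost no pixels change relative to the preceding frame; both thresholds are hypothetical tuning parameters.

    import numpy as np

    def freezing_fraction(frames, pixel_delta=15, max_changed_pixels=20):
        """frames: (n_frames, height, width) uint8 grayscale video array."""
        diffs = np.abs(frames[1:].astype(int) - frames[:-1].astype(int))
        changed = (diffs > pixel_delta).sum(axis=(1, 2))  # changed pixels per frame pair
        return np.mean(changed < max_changed_pixels)      # fraction of "frozen" frames

    rng = np.random.default_rng(0)
    noise_video = rng.integers(0, 255, size=(100, 120, 160), dtype=np.uint8)
    print(freezing_fraction(noise_video))                 # pure noise -> ~0.0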
Behavior, Issue 85, Fear, Learning, Memory, ImageFZ program, Mouse, contextual fear, cued fear
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
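As a toy illustration of "sequence selection by energy minimization" (emphatically not the Protein WISDOM optimization, which uses physics-based potentials and rigorous optimization), the sketch below greedily accepts point mutations that lower an arbitrary stand-in energy function.

    import random

    def toy_energy(seq):
        """Arbitrary stand-in scoring function; real methods use physical potentials."""
        return sum(ord(a) % 7 for a in seq) / len(seq)

    def greedy_sequence_selection(seq, alphabet="ACDEFGHIKLMNPQRSTVWY", rounds=200):
        seq = list(seq)
        for _ in range(rounds):
            i = random.randrange(len(seq))
            candidate = seq[:]
            candidate[i] = random.choice(alphabet)
            if toy_energy(candidate) < toy_energy(seq):  # keep improving mutations only
                seq = candidate
        return "".join(seq)

    print(greedy_sequence_selection("MKTAYIAKQR"))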
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way than normal, by building it up molecule by molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
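As a hedged sketch of the first processing step described above, the code below builds a small bank of Gabor filters and assigns each pixel the orientation with the strongest response; the filter parameters are illustrative, not those of the published method.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=21):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

    def orientation_field(image, n_orientations=12):
        thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
        responses = np.stack([convolve(image, gabor_kernel(t)) for t in thetas])
        return thetas[np.argmax(np.abs(responses), axis=0)]  # dominant angle per pixel

    roi = np.random.rand(128, 128)        # stand-in for a mammographic region
    print(orientation_field(roi).shape)   # (128, 128) map of local tissue orientation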
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
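The keyword "minimum-norm estimation" below refers to a linear inverse solution. As a hedged, self-contained sketch, with random matrices standing in for a real lead field and EEG data, and an arbitrary regularization choice, the estimate looks like this:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_sources = 128, 5000
    G = rng.standard_normal((n_sensors, n_sources))  # lead field from the head model
    m = rng.standard_normal(n_sensors)               # one time sample of sensor data

    # Regularized minimum-norm estimate: s = G^T (G G^T + lam*I)^(-1) m
    lam = 0.1 * np.trace(G @ G.T) / n_sensors        # illustrative regularization
    s_hat = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), m)
    print(s_hat.shape)                               # (5000,) source amplitude estimates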
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of stained, resin-embedded samples, and focused ion beam scanning electron microscopy (FIB-SEM) and serial block-face scanning electron microscopy (SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
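As a toy illustration of the simplest semi-automated route named above, the sketch below thresholds a 3D volume and keeps only connected components above a minimum size; the data and cutoffs are synthetic placeholders, not values from the article.

    import numpy as np
    from scipy import ndimage

    volume = np.random.rand(64, 64, 64)        # stand-in for an EM image volume
    mask = volume > 0.98                       # user-chosen intensity threshold

    labels, n = ndimage.label(mask)            # 6-connected components in 3D
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_ids = np.nonzero(sizes >= 10)[0] + 1  # discard specks under 10 voxels
    keep = np.isin(labels, keep_ids)
    print(n, "raw components;", int(keep.sum()), "voxels kept after size filtering")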
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
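For context on the ~10-30 nm precision quoted above, a widely used approximation is the localization-precision formula of Thompson, Larson, and Webb (2002); the sketch below evaluates it for a few photon counts, with an illustrative PSF width, pixel size, and background level rather than values from this article.

    import numpy as np

    def localization_precision_nm(N, s=150.0, a=100.0, b=1.0):
        """Thompson et al. (2002): s = PSF std dev (nm), a = pixel size (nm),
        N = detected photons, b = rms background (photons per pixel)."""
        return np.sqrt(s**2 / N + a**2 / (12 * N)
                       + 8 * np.pi * s**4 * b**2 / (a**2 * N**2))

    for photons in (100, 300, 1000):    # dim vs. bright probes
        print(f"{photons} photons -> {localization_precision_nm(photons):.1f} nm")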
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
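A hedged sketch of the recasting arithmetic described above: given a signal region's published acceptance A, efficiency eps, integrated luminosity L, and a model-independent 95% CL upper limit on signal events N95, a theory point is excluded when its predicted yield exceeds N95. All numbers below are illustrative, not taken from any real analysis.

    def excluded(sigma_pb, acceptance, efficiency, lumi_ifb, n95):
        """True if the predicted signal yield exceeds the 95% CL event limit."""
        n_expected = sigma_pb * 1000.0 * lumi_ifb * acceptance * efficiency  # 1 pb = 1000 fb
        return n_expected > n95

    # Hypothetical point: 0.05 pb cross section, 30% acceptance, 70% efficiency,
    # 20 fb^-1 of data, and an observed limit of 15 signal events.
    print(excluded(0.05, 0.30, 0.70, 20.0, 15.0))   # -> True (excluded)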
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Phase Contrast and Differential Interference Contrast (DIC) Microscopy
Authors: Victoria Centonze Frohlich.
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
Phase-contrast microscopy is often used to produce contrast for transparent, non-light-absorbing biological specimens. The technique was invented by Frits Zernike, who received the Nobel Prize in Physics in 1953 for this achievement. DIC microscopy, introduced in the late 1960s, has been popular in biomedical research because it highlights the edges of specimen structural detail, provides high-resolution optical sections of thick specimens, including tissue cells, eggs, and embryos, and does not suffer from the phase halos typical of phase-contrast images. This protocol highlights the principles and practical applications of these microscopy techniques.
Basic protocols, Issue 18, Current Protocols Wiley, Microscopy, Phase Contrast, Differential Interference Contrast
Microfluidic Applications for Disposable Diagnostics
Authors: Catherine Klapperich.
Institutions: Boston University.
In this interview, Dr. Klapperich discusses the fabrication of thermoplastic microfluidic devices and their application for development of new diagnostics.
Cellular Biology, Issue 12, bioengineering, diagnostics, microfluidics, solid phase, purification
BioMEMS: Forging New Collaborations Between Biologists and Engineers
Authors: Noo Li Jeon.
Institutions: University of California, Irvine (UCI).
This video describes the fabrication and use of a microfluidic device to culture central nervous system (CNS) neurons. This device is compatible with live-cell optical microscopy (DIC and phase contrast), as well as confocal and two-photon microscopy approaches. The method uses precision-molded polymer parts to create miniature multi-compartment cell cultures with fluidic isolation. The compartments are made of tiny channels with dimensions that are large enough to culture neurons in well-controlled fluidic microenvironments. Neurons can be cultured for 2-3 weeks within the device, after which they can be fixed and stained for immunocytochemistry. Axonal and somal compartments can be kept fluidically isolated from each other by using a small hydrostatic pressure difference; this feature can be used to localize soluble insults to one compartment for up to 20 h after each medium change. Fluidic isolation enables collection of a pure axonal fraction and biochemical analysis by PCR. The microfluidic device provides a highly adaptable platform for neuroscience research and may find applications in modeling CNS injury and neurodegeneration.
Neuroscience, Issue 9, Microfluidics, Bioengineering, Neuron
Interview: Protein Folding and Studies of Neurodegenerative Diseases
Authors: Susan Lindquist.
Institutions: MIT - Massachusetts Institute of Technology.
In this interview, Dr. Lindquist describes relationships between protein folding, prion diseases and neurodegenerative disorders. The problem of protein folding is at the core of modern biology. In addition to their traditional biochemical functions, proteins can mediate the transfer of biological information and can therefore be considered a genetic material. This recently discovered function of proteins has important implications for studies of human disorders. Dr. Lindquist also describes current experimental approaches to investigate the mechanism of neurodegenerative diseases based on genetic studies in model organisms.
Neuroscience, issue 17, protein folding, brain, neuron, prion, neurodegenerative disease, yeast, screen, Translational Research
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference [1]. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data [1]. In this article, we utilize a web version of SCOPE [2] to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs [3,4] and has been used in other studies [5-8]. The three algorithms that comprise SCOPE are BEAM [9], which finds non-degenerate motifs (ACCGGT), PRISM [10], which finds degenerate motifs (ASCGWT), and SPACER [11], which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail [1,2,9-11].
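To make "over-representation" concrete, here is a minimal sketch of a binomial enrichment test of the kind such motif finders build on; it illustrates the general idea only, not SCOPE's actual scoring, and the counts are hypothetical.

    from math import comb

    def overrepresentation_p(hits, positions, p_background):
        """P(X >= hits) for X ~ Binomial(positions, p_background)."""
        return sum(comb(positions, k) * p_background**k
                   * (1 - p_background)**(positions - k)
                   for k in range(hits, positions + 1))

    # A specific 6-mer such as ACCGGT occurs by chance with p ~ 2 * (1/4)**6
    # per position when both strands are scanned.
    p0 = 2 * 0.25**6
    # Hypothetical: 9 occurrences across five 600-bp promoters (~3,000 positions).
    print(overrepresentation_p(hits=9, positions=3000, p_background=p0))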
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
BioMEMS and Cellular Biology: Perspectives and Applications
Authors: Albert Folch.
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, the traditional cell culture methodology, essentially consisting of the immersion of a large population of cells in a homogeneous fluid medium on a homogeneous flat substrate, has become increasingly limiting from both a fundamental and a practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate and the medium composition, as well as the neighboring cell types in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well-suited for the development of fast, low-cost in vitro systems that allow for high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview
Experimental Approaches to Tissue Engineering
Authors: Ali Khademhosseini.
Institutions: Brigham and Women's Hospital.
Issue 7, Cell Biology, tissue engineering, microfluidics, stem cells
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
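For readers curious about the mechanics, here is a minimal sketch of one standard way to rank related documents, TF-IDF cosine similarity; this illustrates the general approach only and is not JoVE's actual matching algorithm. The example texts are fabricated.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    video_descriptions = [
        "eye tracking during reading and text comprehension",
        "protease activity assay with casein as substrate",
        "automated DNA extraction from blood on liquid handling robots",
    ]
    pubmed_abstract = "automatic extraction of text from figures in biomedical articles"

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(video_descriptions + [pubmed_abstract])

    # Rank library videos by similarity to the abstract (last row of the matrix).
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    for score, title in sorted(zip(scores, video_descriptions), reverse=True):
        print(f"{score:.2f}  {title}")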

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In both situations, our algorithms still display the most relevant videos they can find, which can sometimes result in matched videos with only a slight relation.