JoVE Visualize
A systematic comparison of supervised classifiers.
PUBLISHED: 01-01-2014
Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without in-depth knowledge of the underlying parameters. The adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, with exceptions such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently achieved the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to its default parameter configuration.
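To make the tuning effect concrete, here is a minimal sketch of the comparison idea using scikit-learn rather than the paper's Weka framework; the dataset and parameter grid are illustrative assumptions, not the study's setup.

```python
# Compare default-parameter classifiers against a tuned SVM (scikit-learn
# stand-in for the Weka experiment; dataset and grid are illustrative).
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Default configurations, as a Weka-style baseline.
for name, clf in [("k-NN", KNeighborsClassifier()), ("SVM", SVC())]:
    print(name, "defaults:", cross_val_score(clf, X, y, cv=5).mean().round(3))

# The SVM is the method whose accuracy depends most on its parameters,
# so grid-search C and gamma and compare against the default score.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100],
                            "gamma": ["scale", 1e-3, 1e-4]}, cv=5)
grid.fit(X, y)
print("SVM tuned:", round(grid.best_score_, 3), grid.best_params_)
```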
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller while performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent, continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally controlled powered artificial legs.
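The sketch below is not the authors' platform (which implements NMI training, online testing, and intrinsic control in a PC-based visual programming environment); it only illustrates, with invented features and impedance values, the hierarchical pattern the abstract describes: a high-level intent recognizer feeding a low-level per-mode controller.

```python
# Hypothetical hierarchical controller: an LDA intent recognizer over fused
# neuromuscular-mechanical feature windows selects a per-mode impedance set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

MODES = ["standing", "level_walking", "ramp_ascent", "ramp_descent"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 12))        # e.g. EMG + load/angle features
y_train = rng.integers(0, len(MODES), 400)  # labeled movement intents
nmi = LinearDiscriminantAnalysis().fit(X_train, y_train)

def intrinsic_control(mode):
    """Low-level layer: per-mode knee stiffness/damping (toy values)."""
    impedance = {"standing": (3.0, 0.05), "level_walking": (2.0, 0.03),
                 "ramp_ascent": (2.5, 0.04), "ramp_descent": (1.5, 0.06)}
    return impedance[mode]

# Online loop: classify each incoming window, then dispatch to the low level.
window = rng.normal(size=(1, 12))
mode = MODES[int(nmi.predict(window)[0])]
print(mode, intrinsic_control(mode))
```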
24 Related JoVE Articles!
In Situ Neutron Powder Diffraction Using Custom-made Lithium-ion Batteries
Authors: William R. Brant, Siegbert Schmid, Guodong Du, Helen E. A. Brand, Wei Kong Pang, Vanessa K. Peterson, Zaiping Guo, Neeraj Sharma.
Institutions: University of Sydney, University of Wollongong, Australian Synchrotron, Australian Nuclear Science and Technology Organisation, University of New South Wales.
Li-ion batteries are widely used in portable electronic devices and are considered as promising candidates for higher-energy applications such as electric vehicles.1,2 However, many challenges, such as energy density and battery lifetimes, need to be overcome before this particular battery technology can be widely implemented in such applications.3 This research is challenging, and we outline a method to address these challenges using in situ NPD to probe the crystal structure of electrodes undergoing electrochemical cycling (charge/discharge) in a battery. NPD data help determine the underlying structural mechanism responsible for a range of electrode properties, and this information can direct the development of better electrodes and batteries. We briefly review six types of battery designs custom-made for NPD experiments and detail the method to construct the ‘roll-over’ cell that we have successfully used on the high-intensity NPD instrument, WOMBAT, at the Australian Nuclear Science and Technology Organisation (ANSTO). The design considerations and materials used for cell construction are discussed in conjunction with aspects of the actual in situ NPD experiment and initial directions are presented on how to analyze such complex in situ data.
Physics, Issue 93, In operando, structure-property relationships, electrochemical cycling, electrochemical cells, crystallography, battery performance
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired in a very different way from standard techniques, by building up an image molecule-by-molecule, there are significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of 3 test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
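The oscillation technique rests on the physical-pendulum relation T = 2π√(I/(mgd)), with the segment's center-of-mass distance d obtained from the reaction board. Here is a worked sketch with invented measurements (not the study's data):

```python
# Moment of inertia of a prosthesis from its measured oscillation period,
# then transferred to the center of mass via the parallel-axis theorem.
import math

m = 1.6    # prosthesis mass (kg), invented
d = 0.21   # pivot-to-center-of-mass distance (m), from the reaction board
T = 1.05   # mean oscillation period (s), invented
g = 9.81   # gravitational acceleration (m/s^2)

I_pivot = m * g * d * (T / (2 * math.pi)) ** 2
I_com = I_pivot - m * d ** 2
print(f"I about pivot: {I_pivot:.4f} kg m^2, about COM: {I_com:.4f} kg m^2")
```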
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
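The published toolchain is MATLAB-based; as a language-neutral illustration, the Python sketch below shows the core idea of keeping one trail of time-stamped events and deriving daily summaries from it. Event codes and values are hypothetical.

```python
# Summarize a time-stamped behavioral event record per subject per day.
import pandas as pd

events = pd.DataFrame({
    "time": pd.to_datetime(["2024-05-01 09:00:01", "2024-05-01 09:00:04",
                            "2024-05-01 21:15:00", "2024-05-02 09:02:10"]),
    "subject": ["m1", "m1", "m1", "m1"],
    "event": ["head_entry", "pellet", "head_entry", "pellet"],
})

# "Harvest" step: counts of each event type per subject per day.
daily = (events.assign(day=events["time"].dt.date)
               .groupby(["subject", "day", "event"]).size()
               .unstack(fill_value=0))
print(daily)
```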
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. fMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions of interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
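A minimal sketch of the core computation, with synthetic data standing in for cross-validated classifier evidence: each region yields a timepoint-by-timepoint discriminability series, and informational connectivity is the correlation between those series.

```python
# Toy informational connectivity: two ROIs share trial-to-trial fluctuations
# in how strongly their multi-voxel patterns discriminate two conditions.
import numpy as np

rng = np.random.default_rng(1)
T, V = 200, 50
strength = rng.uniform(0, 1, T)            # shared decodability fluctuation
labels = rng.integers(0, 2, T) * 2 - 1     # condition A (+1) or B (-1)
patt1, patt2 = rng.normal(size=V), rng.normal(size=V)
roi1 = rng.normal(size=(T, V)) + np.outer(labels * strength, patt1)
roi2 = rng.normal(size=(T, V)) + np.outer(labels * strength, patt2)

def discriminability(roi, patt):
    # signed evidence for the correct condition at each timepoint
    return (roi @ patt) * labels

d1, d2 = discriminability(roi1, patt1), discriminability(roi2, patt2)
print("IC(roi1, roi2) =", np.corrcoef(d1, d2)[0, 1])  # high: shared dynamics
```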
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
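As a toy instance of approach (4), the sketch below segments a synthetic volume by global thresholding and 3D connected-component labeling with scikit-image; real EM volumes with low signal-to-noise need far more tailored pipelines.

```python
# Automated segmentation sketch: Otsu threshold + 3D connected components.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(2)
volume = rng.normal(0.2, 0.1, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 0.6            # a bright synthetic "organelle"

mask = volume > threshold_otsu(volume)        # binarize
labels = label(mask)                          # 3D connected components
for region in regionprops(labels):
    if region.area > 100:                     # ignore noise specks
        print(f"feature {region.label}: {region.area} voxels")
```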
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
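For orientation, here is a minimal distributed source analysis with MNE-Python on its bundled sample dataset. This is a generic minimum-norm pipeline, not the London Baby Lab workflow, which would substitute a forward model built from the individual or age-matched pediatric MRI.

```python
# Minimum-norm source estimation with MNE-Python's sample dataset.
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

meg_dir = sample.data_path() / "MEG" / "sample"
evoked = mne.read_evokeds(meg_dir / "sample_audvis-ave.fif", condition=0)
fwd = mne.read_forward_solution(meg_dir / "sample_audvis-meg-eeg-oct-6-fwd.fif")
noise_cov = mne.read_cov(meg_dir / "sample_audvis-cov.fif")

inv = make_inverse_operator(evoked.info, fwd, noise_cov)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
print(stc)   # cortical source time courses
```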
Substernal Thyroid Biopsy Using Endobronchial Ultrasound-guided Transbronchial Needle Aspiration
Authors: Abhishek Kumar, Arjun Mohan, Samjot S. Dhillon, Kassem Harris.
Institutions: State University of New York, Buffalo, Roswell Park Cancer Institute.
Substernal thyroid goiter (STG) represents about 5.8% of all mediastinal lesions1. There is a wide variation in the published incidence rates due to the lack of a standardized definition for STG. Biopsy is often required to differentiate benign from malignant lesions. Unlike in the cervical thyroid, the overlying sternum precludes ultrasound-guided percutaneous fine needle aspiration of STG. Consequently, surgical mediastinoscopy is performed in the majority of cases, causing significant procedure-related morbidity and cost to healthcare. Endobronchial Ultrasound-guided Transbronchial Needle Aspiration (EBUS-TBNA) is a frequently used procedure for diagnosis and staging of non-small cell lung cancer (NSCLC). Minimally invasive needle biopsy for lesions adjacent to the airways can be performed under real-time ultrasound guidance using EBUS. Its safety and efficacy are well established, with over 90% sensitivity and specificity. The ability to perform EBUS as an outpatient procedure with same-day discharge offers distinct morbidity and financial advantages over surgery. As physicians performing EBUS have gained procedural expertise, they have attempted to diversify its role in the diagnosis of non-lymph-node thoracic pathologies. We propose here a role for EBUS-TBNA in the diagnosis of substernal thyroid lesions, along with a step-by-step protocol for the procedure.
Medicine, Issue 93, substernal thyroid, retrosternal thyroid, intra-thoracic thyroid, goiter, endobronchial ultrasound, EBUS, transbronchial needle aspiration, TBNA, biopsy, needle biopsy
Quantification of Orofacial Phenotypes in Xenopus
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Institutions: Virginia Commonwealth University.
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development will allow for more rigorous analysis of orofacial phenotypes upon perturbation with substances that genetically or molecularly manipulate gene expression or protein function. Using two-dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height, and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two-dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where significant landmark positions have changed. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware software, and thus will be a valuable research and teaching resource.
Developmental Biology, Issue 93, Orofacial quantification, geometric morphometrics, Xenopus, orofacial development, orofacial defects, shape changes, facial dimensions
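A minimal sketch of the geometric-morphometrics steps, with synthetic landmarks standing in for digitized orofacial points: Procrustes superimposition followed by principal component analysis of the aligned coordinates.

```python
# Procrustes-align 2D landmark configurations, then PCA the shape space.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_embryos, n_landmarks = 20, 8
reference = rng.normal(size=(n_landmarks, 2))
shapes = [reference + rng.normal(scale=0.05, size=(n_landmarks, 2))
          for _ in range(n_embryos)]

# Align each configuration to the reference; flatten to one row per embryo.
aligned = np.array([procrustes(reference, s)[1].ravel() for s in shapes])

# Embryos with similar orofacial shapes cluster together in PC space.
scores = PCA(n_components=2).fit_transform(aligned)
print(scores[:5])
```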
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside, University of São Paulo - USP, ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach makes it possible to efficiently learn classification models that are very robust to over-fitting; and that a general classification framework allows an arbitrary number of features to be incorporated easily. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
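A minimal sketch of the classification idea, combining an intrinsic feature (wingbeat frequency from the pseudo-acoustic signal) with an extrinsic one (time of day, capturing circadian flight activity). The Gaussian naive Bayes model and the two hypothetical species are illustrative, not the authors' exact framework.

```python
# Classify insects from [wingbeat_hz, hour_of_day] with a Bayesian model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
species_a = np.column_stack([rng.normal(400, 20, 300), rng.normal(9, 2, 300)])
species_b = np.column_stack([rng.normal(550, 25, 300), rng.normal(18, 2, 300)])
X = np.vstack([species_a, species_b])
y = np.array([0] * 300 + [1] * 300)

clf = GaussianNB().fit(X, y)
print(clf.predict([[420, 10], [560, 19]]))   # expected: [0 1]
```

Adding the circadian feature is what lets two species with overlapping wingbeat frequencies remain separable.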
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
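As a toy illustration of the first stage, the sketch below runs a Monte Carlo search over sequence space for a lower-energy sequence; the scoring function is an invented stand-in for the physical potential that Protein WISDOM minimizes on the structural template.

```python
# Toy sequence-selection stage: greedy Monte Carlo minimization of a
# hypothetical per-sequence energy (lower is better).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def energy(seq):
    """Invented stand-in for a potential-energy score."""
    return sum(abs(ord(a) - ord("M")) for a in seq) / len(seq)

random.seed(0)
seq = list("MKTAYIAKQR")
best = energy(seq)
for _ in range(5000):
    pos = random.randrange(len(seq))
    old = seq[pos]
    seq[pos] = random.choice(AMINO_ACIDS)
    e = energy(seq)
    if e <= best:
        best = e          # keep improving mutations
    else:
        seq[pos] = old    # revert worsening ones
print("".join(seq), round(best, 3))
```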
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratory.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
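The recasting arithmetic is compact enough to show directly: for a given model point and signal region, the expected signal is the cross section times luminosity times the tabulated acceptance and efficiency, and the point is excluded if that expectation exceeds the published upper limit on signal events. All numbers below are invented.

```python
# Recast a simplified-model limit onto a model point (invented numbers).
cross_section_pb = 0.5    # model-point production cross section
luminosity_ifb = 20.0     # integrated luminosity (fb^-1)
acceptance = 0.12         # from the simplified-model acceptance table
efficiency = 0.85         # from the efficiency table

# 1 pb = 1000 fb, so sigma[fb] * L[fb^-1] gives an event count.
n_expected = cross_section_pb * 1000 * luminosity_ifb * acceptance * efficiency
n_limit_95 = 25.0         # published 95% CL upper limit on signal events

print(f"expected signal events: {n_expected:.1f}")
print("excluded at 95% CL" if n_expected > n_limit_95 else "not excluded")
```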
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
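The first step, estimating the orientation of breast tissue patterns, can be sketched with a real-Gabor filter bank from scikit-image, keeping the orientation of maximal response per pixel; phase-portrait modeling and the node map are beyond this snippet, and the demo image is a stand-in for a mammogram.

```python
# Per-pixel orientation field from a bank of Gabor filters (demo image).
import numpy as np
from skimage.data import camera
from skimage.filters import gabor

image = camera().astype(float)
angles = np.linspace(0, np.pi, 8, endpoint=False)

responses = np.stack([gabor(image, frequency=0.1, theta=t)[0]   # real part
                      for t in angles])
orientation = angles[np.abs(responses).argmax(axis=0)]          # radians
print(orientation.shape)
```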
Knowing What Counts: Unbiased Stereology in the Non-human Primate Brain
Authors: Mark Burke, Shahin Zangenehpour, Peter R. Mouton, Maurice Ptito.
Institutions: University of Montreal, Stereology Resource Center.
The non-human primate is an important translational species for understanding the normal function and disease processes of the human brain. Unbiased stereology, the method accepted as state-of-the-art for quantification of biological objects in tissue sections2, generates reliable structural data for biological features in the mammalian brain3. The key components of the approach are unbiased (systematic-random) sampling of anatomically defined structures (reference spaces), combined with quantification of cell numbers and size, fiber and capillary lengths, surface areas, regional volumes and spatial distributions of biological objects within the reference space4. Among the advantages of these stereological approaches over previous methods is the avoidance of all known sources of systematic (non-random) error arising from faulty assumptions and non-verifiable models. This study documents a biological application of computerized stereology to estimate the total neuronal population in the frontal cortex of the vervet monkey brain (Chlorocebus aethiops sabeus), with assistance from two commercially available stereology programs, BioQuant Life Sciences and Stereologer (Figure 1). In addition to contrast and comparison of results from both the BioQuant and Stereologer systems, this study provides a detailed protocol for the Stereologer system.
Neuroscience, Issue 27, Stereology, brain bank, systematic sampling, non-human primate, cryostat, antigen preservation
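The estimator behind such programs is worth seeing once in worked form. With the optical fractionator, the standard unbiased estimator of total cell number, the total is the count of cells sampled by the disectors scaled by the inverses of the section, area, and thickness sampling fractions; the numbers below are invented.

```python
# Optical fractionator: N = sum(Q) * (1/ssf) * (1/asf) * (1/tsf).
counted_cells = 412              # sum of cells counted in all disectors
ssf = 1 / 10                     # section sampling fraction (every 10th section)
asf = (50 * 50) / (300 * 300)    # counting-frame area / sampling-grid area
tsf = 10 / 25                    # disector height / mean section thickness (um)

N = counted_cells * (1 / ssf) * (1 / asf) * (1 / tsf)
print(f"estimated total neurons: {N:,.0f}")
```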
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and the simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters, illustrating both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
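The sketch below is not the BCP algorithm; it is a far simpler change-point illustration (recursive binary segmentation by squared-error reduction) applied to a synthetic read-density profile, to make concrete what change points that define segments of enrichment look like.

```python
# Toy change-point segmentation of a read-density profile.
import numpy as np

def sse(x):
    return ((x - x.mean()) ** 2).sum()

def change_points(x, start=0, min_gain=50.0, min_size=20):
    n = len(x)
    if n < 2 * min_size:
        return []
    costs = [sse(x[:i]) + sse(x[i:]) for i in range(min_size, n - min_size)]
    i = int(np.argmin(costs)) + min_size
    if sse(x) - costs[i - min_size] < min_gain:
        return []                    # no split worth making
    return (change_points(x[:i], start) + [start + i]
            + change_points(x[i:], start + i))

rng = np.random.default_rng(5)
density = np.concatenate([rng.poisson(2, 300),     # background
                          rng.poisson(12, 150),    # enriched island
                          rng.poisson(2, 300)]).astype(float)
print(change_points(density))        # breaks recovered near 300 and 450
```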
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Movement Retraining using Real-time Feedback of Performance
Authors: Michael Anthony Hunt.
Institutions: University of British Columbia .
Any modification of movement, especially of movement patterns that have been honed over a number of years, requires re-organization of the neuromuscular patterns responsible for governing the movement performance. This motor learning can be enhanced through a number of methods that are utilized in research and clinical settings alike. In general, verbal feedback of performance in real-time or knowledge of results following movement is commonly used clinically as a preliminary means of instilling motor learning. Depending on patient preference and learning style, visual feedback (e.g. through use of a mirror or different types of video) or proprioceptive guidance utilizing therapist touch are used to supplement verbal instructions from the therapist. Indeed, a combination of these forms of feedback is commonplace in the clinical setting to facilitate motor learning and optimize outcomes. Laboratory-based, quantitative motion analysis has been a mainstay in research settings to provide accurate and objective analysis of a variety of movements in healthy and injured populations. While the actual mechanisms of capturing the movements may differ, all current motion analysis systems rely on the ability to track the movement of body segments and joints and to use established equations of motion to quantify key movement patterns. Due to limitations in acquisition and processing speed, analysis and description of the movements have traditionally occurred offline after completion of a given testing session. This paper will highlight a new supplement to standard motion analysis techniques that relies on the near instantaneous assessment and quantification of movement patterns and the display of specific movement characteristics to the patient during a movement analysis session. As a result, this novel technique can provide a new method of feedback delivery that has advantages over currently used feedback methods.
Medicine, Issue 71, Biophysics, Anatomy, Physiology, Physics, Biomedical Engineering, Behavior, Psychology, Kinesiology, Physical Therapy, Musculoskeletal System, Biofeedback, biomechanics, gait, movement, walking, rehabilitation, clinical, training
Technique and Considerations in the Use of 4x1 Ring High-definition Transcranial Direct Current Stimulation (HD-tDCS)
Authors: Mauricio F. Villamar, Magdalena Sarah Volz, Marom Bikson, Abhishek Datta, Alexandre F. DaSilva, Felipe Fregni.
Institutions: Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Pontifical Catholic University of Ecuador, Charité University Medicine Berlin, The City College of The City University of New York, University of Michigan.
High-definition transcranial direct current stimulation (HD-tDCS) has recently been developed as a noninvasive brain stimulation approach that increases the accuracy of current delivery to the brain by using arrays of smaller "high-definition" electrodes, instead of the larger pad-electrodes of conventional tDCS. Targeting is achieved by energizing electrodes placed in predetermined configurations. One of these is the 4x1-ring configuration. In this approach, a center ring electrode (anode or cathode) overlying the target cortical region is surrounded by four return electrodes, which help circumscribe the area of stimulation. Delivery of 4x1-ring HD-tDCS is capable of inducing significant neurophysiological and clinical effects in both healthy subjects and patients. Furthermore, its tolerability is supported by studies using intensities as high as 2.0 milliamperes for up to twenty minutes. Even though 4x1 HD-tDCS is simple to perform, correct electrode positioning is important in order to accurately stimulate target cortical regions and exert its neuromodulatory effects. The use of electrodes and hardware that have specifically been tested for HD-tDCS is critical for safety and tolerability. Given that most published studies on 4x1 HD-tDCS have targeted the primary motor cortex (M1), particularly for pain-related outcomes, the purpose of this article is to systematically describe its use for M1 stimulation, as well as the considerations to be taken for safe and effective stimulation. However, the methods outlined here can be adapted for other HD-tDCS configurations and cortical targets.
Medicine, Issue 77, Neurobiology, Neuroscience, Physiology, Anatomy, Biomedical Engineering, Biophysics, Neurophysiology, Nervous System Diseases, Diagnosis, Therapeutics, Anesthesia and Analgesia, Investigative Techniques, Equipment and Supplies, Mental Disorders, Transcranial direct current stimulation, tDCS, High-definition transcranial direct current stimulation, HD-tDCS, Electrical brain stimulation, Transcranial electrical stimulation (tES), Noninvasive Brain Stimulation, Neuromodulation, non-invasive, brain, stimulation, clinical techniques
In Vivo Modeling of the Morbid Human Genome using Danio rerio
Authors: Adrienne R. Niederriter, Erica E. Davis, Christelle Golzio, Edwin C. Oh, I-Chun Tsai, Nicholas Katsanis.
Institutions: Duke University Medical Center, Duke University, Duke University Medical Center.
Here, we present methods for the development of assays to query potentially clinically significant nonsynonymous changes using in vivo complementation in zebrafish. Zebrafish (Danio rerio) are a useful animal system due to their experimental tractability; embryos are transparent to enable facile viewing, undergo rapid development ex vivo, and can be genetically manipulated.1 These aspects have allowed for significant advances in the analysis of embryogenesis, molecular processes, and morphogenetic signaling. Taken together, the advantages of this vertebrate model make zebrafish highly amenable to modeling the developmental defects in pediatric disease, and in some cases, adult-onset disorders. Because the zebrafish genome is highly conserved with that of humans (~70% orthologous), it is possible to recapitulate human disease states in zebrafish. This is accomplished either through the injection of mutant human mRNA to induce dominant negative or gain of function alleles, or utilization of morpholino (MO) antisense oligonucleotides to suppress genes to mimic loss of function variants. Through complementation of MO-induced phenotypes with capped human mRNA, our approach enables the interpretation of the deleterious effect of mutations on human protein sequence based on the ability of mutant mRNA to rescue a measurable, physiologically relevant phenotype. Modeling of the human disease alleles occurs through microinjection of zebrafish embryos with MO and/or human mRNA at the 1-4 cell stage, and phenotyping up to seven days post fertilization (dpf). This general strategy can be extended to a wide range of disease phenotypes, as demonstrated in the following protocol. We present our established models for morphogenetic signaling, craniofacial, cardiac, vascular integrity, renal function, and skeletal muscle disorder phenotypes, as well as others.
Molecular Biology, Issue 78, Genetics, Biomedical Engineering, Medicine, Developmental Biology, Biochemistry, Anatomy, Physiology, Bioengineering, Genomics, Medical, zebrafish, in vivo, morpholino, human disease modeling, transcription, PCR, mRNA, DNA, Danio rerio, animal model
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Authors: Tadd T. Truscott, Jesse Belden, Joseph R. Nielson, David J. Daily, Scott L. Thomson.
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal cords, bubbles, flow, fluids
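The shift-and-average core of synthetic aperture refocusing is simple enough to sketch with a pinhole-camera toy: each view is shifted in proportion to its baseline over the test depth and the stack is averaged, so features on the matching focal plane realign and stay sharp while others blur. Geometry and scene below are invented.

```python
# Toy synthetic aperture refocusing over a linear camera array.
import numpy as np

rng = np.random.default_rng(6)
n_cams, h, w = 8, 64, 64
scene = np.zeros((h, w))
scene[32, 10:54] = 1.0                      # a bright row of "particles"

depth = 4.0                                 # true depth (arbitrary units)
views = [np.roll(scene, int(round(cam / depth)), axis=1)
         for cam in range(n_cams)]          # disparity = baseline / depth

def refocus(views, test_depth):
    """Shift each view back by its predicted disparity, then average."""
    stack = [np.roll(v, -int(round(cam / test_depth)), axis=1)
             for cam, v in enumerate(views)]
    return np.mean(stack, axis=0)

for d in (2.0, 4.0, 8.0):                   # sharpness peaks at the true depth
    print(d, refocus(views, d).max())
```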
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
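A minimal sketch of the second component's shape, using scikit-learn instead of RapidMiner and synthetic data: each scan becomes a feature vector and an SVM pre-screens the ICP level. Feature names and the labeling rule are invented.

```python
# Toy ICP pre-screening: feature vector per scan -> SVM classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# columns: [texture_energy, blood_volume, age, injury_severity_score]
X = rng.normal(size=(120, 4))
y = (X[:, 1] + 0.5 * X[:, 3] > 0.5).astype(int)   # 1 = elevated ICP (toy rule)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(model.predict(X[:5]))
```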
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
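A minimal sketch of the cross-modal logic with synthetic patterns: a classifier trained on "auditory" patterns evoked by heard sounds is tested on patterns evoked by the other modality, which here share a weaker version of the same underlying signal.

```python
# Cross-modal MVPA sketch: train on one modality, test on the other.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n, V = 40, 100
signal = rng.normal(size=V)                  # shared content-specific pattern

def patterns(scale):
    a = rng.normal(size=(n, V)) + scale * signal
    b = rng.normal(size=(n, V)) - scale * signal
    return np.vstack([a, b]), np.array([0] * n + [1] * n)

X_sound, y_sound = patterns(0.5)             # heard sounds (training set)
X_video, y_video = patterns(0.3)             # sound-implying videos (test set)

clf = LinearSVC().fit(X_sound, y_sound)
print("cross-modal accuracy:", clf.score(X_video, y_video))
```

Above-chance accuracy on the video patterns is the signature that the "mind's ear" re-instantiates content-specific activity.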
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction to multivariate techniques for the novice. A conceptual introduction is followed with a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
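The contrast between the two families of techniques can be made concrete on a synthetic scans-by-voxels matrix: voxel-wise t-tests pay a heavy multiple-comparison penalty, while a single covariance pattern yields one expression score per scan and one test. Data and effect size below are invented.

```python
# Univariate vs. multivariate analysis on synthetic group data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_scans, n_voxels = 40, 500
group = np.repeat([0, 1], n_scans // 2)
pattern = rng.normal(size=n_voxels)          # shared network topography
X = rng.normal(size=(n_scans, n_voxels)) + np.outer(group * 0.4, pattern)

# Univariate: one t-test per voxel, Bonferroni-corrected.
t, p = stats.ttest_ind(X[group == 1], X[group == 0])
print("voxels surviving Bonferroni:", int((p < 0.05 / n_voxels).sum()))

# Multivariate: leading principal component as a covariance pattern;
# each scan gets a single expression score, so one group test suffices.
Xc = X - X.mean(0)
scores = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]
print(stats.ttest_ind(scores[group == 1], scores[group == 0]))
```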
Phase Contrast and Differential Interference Contrast (DIC) Microscopy
Authors: Victoria Centonze Frohlich.
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
Phase-contrast microscopy is often used to produce contrast for transparent, non-light-absorbing, biological specimens. The technique was invented by Frits Zernike in the 1930s, an achievement for which he received the 1953 Nobel Prize in Physics. DIC microscopy, introduced in the late 1960s, has been popular in biomedical research because it highlights edges of specimen structural detail, provides high-resolution optical sections of thick specimens including tissue cells, eggs, and embryos, and does not suffer from the phase halos typical of phase-contrast images. This protocol highlights the principles and practical applications of these microscopy techniques.
Basic protocols, Issue 18, Current Protocols Wiley, Microscopy, Phase Contrast, Differential Interference Contrast

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
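JoVE has not published its matching algorithm; the sketch below only illustrates the generic text-similarity approach this description implies, using TF-IDF vectors and cosine similarity over toy documents.

```python
# Rank toy video titles by similarity to a toy abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

videos = ["STORM super-resolution microscopy of the cytoskeleton",
          "Cortical source analysis of high-density EEG in children",
          "Particle image velocimetry of 3D flow fields"]
abstract = "We recorded high-density EEG in infants and localized cortical sources."

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(videos + [abstract])
sims = cosine_similarity(tfidf[len(videos)], tfidf[:len(videos)]).ravel()
for score, title in sorted(zip(sims, videos), reverse=True):
    print(f"{score:.2f}  {title}")
```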

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.