Pubmed Article
Effects of the training dataset characteristics on the performance of nine species distribution models: application to Diabrotica virgifera virgifera.
PUBLISHED: 02-08-2011
Many distribution models developed to predict the presence/absence of invasive alien species need to be fitted to a training dataset before practical use. The training dataset is characterized by the number of recorded presences/absences and by their geographical locations. The aim of this paper is to study the effect of the training dataset characteristics on model performance and to compare the relative importance of three factors influencing model predictive capability: the size of the training dataset, the stage of the biological invasion, and the choice of input variables. Nine models were assessed for their ability to predict the distribution of the western corn rootworm, Diabrotica virgifera virgifera, a major pest of corn in North America that has recently invaded Europe. Twenty-six training datasets of various sizes (from 10 to 428 presence records) corresponding to two different stages of invasion (1955 and 1980) and three sets of input bioclimatic variables (19 variables, six variables selected using information on insect biology, and three linear combinations of the 19 variables derived from Principal Component Analysis) were considered. The models were fitted to each training dataset in turn and their performance was assessed using independent data from North America and Europe. The models were ranked according to the area under the Receiver Operating Characteristic curve and the likelihood ratio. Model performance was highly sensitive to the geographical area used for calibration; most of the models performed poorly when fitted to a restricted area corresponding to an early stage of the invasion. Our results also showed that Principal Component Analysis was useful in reducing the number of model input variables for the models that performed poorly with 19 input variables. DOMAIN, Environmental Distance, MAXENT, and Envelope Score were the most accurate models, but all the models tested in this study led to a substantial rate of misclassification.
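The models above were ranked by the area under the Receiver Operating Characteristic curve (AUC). As background for readers, the AUC equals the probability that a randomly chosen presence record is scored higher than a randomly chosen absence record; a minimal sketch can compute it directly from that definition (this is illustrative, not the evaluation code used in the study):

```python
def auc(labels, scores):
    """AUC via the rank-sum formulation:
    P(score of a random presence > score of a random absence)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0   # presence ranked above absence
            elif p == n:
                wins += 0.5   # ties count as half
    return wins / (len(pos) * len(neg))
```

A model that always scores presences above absences gives an AUC of 1.0; random scoring gives about 0.5.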
Authors: Damien O'Halloran.
Published: 02-05-2014
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research questions. However, many of them are new to the topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. The vision for this article is that it can serve as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that can be incorporated into a classroom or teaching lab.
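As a taste of the kind of computation such a pipeline rests on, and not a substitute for the model-based distances the protocols cover, the uncorrected p-distance between two aligned sequences can be sketched in a few lines:

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences,
    skipping columns where either sequence has a gap."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    return sum(a != b for a, b in pairs) / len(pairs)
```

Real pipelines replace this with distances corrected under a best-fit substitution model, which is exactly why the model-selection step in the protocol matters.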
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Authors: Mackenzie J. Denyes, Michèle A. Parisien, Allison Rutter, Barbara A. Zeeb.
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g. carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available their Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for the physical and chemical characteristics of biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals and mercury, as well as nutrients (phosphorus, nitrite, nitrate, and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance / quality control (QA/QC) results of blanks, duplicates, standards and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays.
Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
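The ~10-30 nm localization precision quoted above comes from estimating the center of each single-molecule intensity spot with sub-pixel accuracy. A minimal sketch, using an intensity-weighted centroid rather than the maximum-likelihood Gaussian fitting typically used in FPALM analysis software, looks like this:

```python
import numpy as np

def localize(spot):
    """Sub-pixel (x, y) position of a single-molecule spot
    via its intensity-weighted centroid."""
    spot = spot - spot.min()          # crude background subtraction
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (xs * spot).sum() / total, (ys * spot).sum() / total

# Synthetic symmetric spot centered on pixel (2, 2) of a 5x5 region of interest
grid = np.arange(5)
spot = np.exp(-((grid[None, :] - 2) ** 2 + (grid[:, None] - 2) ** 2) / 2.0)
x, y = localize(spot)
```

Repeating this over thousands of detected spots, one per activated fluorophore, is what produces the rendered super-resolution image.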
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Computerized Dynamic Posturography for Postural Control Assessment in Patients with Intermittent Claudication
Authors: Natalie Vanicek, Stephanie A. King, Risha Gohil, Ian C. Chetter, Patrick A Coughlin.
Institutions: University of Sydney, University of Hull, Hull and East Yorkshire Hospitals, Addenbrookes Hospital.
Computerized dynamic posturography with the EquiTest is an objective technique for measuring postural strategies under challenging static and dynamic conditions. As part of a diagnostic assessment, the early detection of postural deficits is important so that appropriate and targeted interventions can be prescribed. The Sensory Organization Test (SOT) on the EquiTest determines an individual's use of the sensory systems (somatosensory, visual, and vestibular) that are responsible for postural control. Somatosensory and visual input are altered by the calibrated sway-referenced support surface and visual surround, which move in the anterior-posterior direction in response to the individual's postural sway. This creates a conflicting sensory experience. The Motor Control Test (MCT) challenges postural control by creating unexpected postural disturbances in the form of backward and forward translations. The translations are graded in magnitude and the time to recover from the perturbation is computed. Intermittent claudication, the most common symptom of peripheral arterial disease, is characterized by a cramping pain in the lower limbs and caused by muscle ischemia secondary to reduced blood flow to working muscles during physical exertion. Claudicants often display poor balance, making them susceptible to falls and activity avoidance. The Ankle Brachial Pressure Index (ABPI) is a noninvasive method for indicating the presence of peripheral arterial disease and intermittent claudication. ABPI is measured as the highest systolic pressure from either the dorsalis pedis or posterior tibial artery divided by the highest brachial artery systolic pressure from either arm. This paper will focus on the use of computerized dynamic posturography in the assessment of balance in claudicants.
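The ABPI calculation described above is a simple ratio and can be sketched directly (pressures in mmHg; the function and variable names are ours, not part of the protocol):

```python
def abpi(dorsalis_pedis, posterior_tibial, brachial_left, brachial_right):
    """Ankle Brachial Pressure Index: highest ankle systolic pressure
    divided by the highest brachial systolic pressure."""
    return max(dorsalis_pedis, posterior_tibial) / max(brachial_left, brachial_right)
```

For example, ankle pressures of 95 and 100 mmHg against brachial pressures of 120 and 125 mmHg give an ABPI of 0.8; values below about 0.9 are commonly taken to suggest peripheral arterial disease.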
Medicine, Issue 82, Posture, Computerized dynamic posturography, Ankle brachial pressure index, Peripheral arterial disease, Intermittent claudication, Balance, EquiTest, Sensory Organization Test, Motor Control Test
Determining the Contribution of the Energy Systems During Exercise
Authors: Guilherme G. Artioli, Rômulo C. Bertuzzi, Hamilton Roschel, Sandro H. Mendes, Antonio H. Lancha Jr., Emerson Franchini.
Institutions: University of Sao Paulo.
One of the most important aspects of the metabolic demand is the relative contribution of the energy systems to the total energy required for a given physical activity. Although some sports are relatively easy to reproduce in a laboratory (e.g., running and cycling), a number of sports are much more difficult to reproduce and study in controlled situations. This method presents how to assess the differential contribution of the energy systems in sports that are difficult to mimic in controlled laboratory conditions. The concepts shown here can be adapted to virtually any sport. The following physiologic variables will be needed: rest oxygen consumption, exercise oxygen consumption, post-exercise oxygen consumption, rest plasma lactate concentration and post-exercise peak plasma lactate. To calculate the contribution of the aerobic metabolism, you will need the oxygen consumption at rest and during the exercise. By using the trapezoidal method, calculate the area under the curve of oxygen consumption during exercise, subtracting the area corresponding to the rest oxygen consumption. To calculate the contribution of the alactic anaerobic metabolism, the post-exercise oxygen consumption curve has to be adjusted to a mono- or a bi-exponential model (whichever fits best). Then, use the terms of the fitted equation to calculate the anaerobic alactic metabolism, as follows: ATP-CP metabolism = A1 (mL . s-1) x t1 (s). Finally, to calculate the contribution of the lactic anaerobic system, multiply the peak plasma lactate by 3 and by the athlete's body mass (the result in mL is then converted to L and into kJ). The method can be used for both continuous and intermittent exercise. This is a very useful approach as it can be adapted to exercises and sports that are difficult to mimic in controlled environments. Also, this is the only available method capable of distinguishing the contribution of three different energy systems.
Thus, the method allows the study of sports with great similarity to real situations, providing desirable ecological validity to the study.
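The three calculations just described can be sketched as follows. The sketch assumes an energy equivalent of roughly 20.9 kJ per liter of O2 and applies the common 3 ml O2 per kg per mmol/L convention to the net (peak minus rest) lactate accumulation; A1 and t1 are assumed to come from a previously fitted mono- or bi-exponential EPOC curve, and all names are illustrative rather than the authors' own:

```python
def aerobic_kj(times_min, vo2_exercise_l_min, vo2_rest_l_min, kj_per_l_o2=20.9):
    """Aerobic contribution: trapezoidal area of exercise VO2 above resting VO2."""
    area_l = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        mean_net_vo2 = ((vo2_exercise_l_min[i] - vo2_rest_l_min)
                        + (vo2_exercise_l_min[i - 1] - vo2_rest_l_min)) / 2.0
        area_l += dt * mean_net_vo2
    return area_l * kj_per_l_o2

def alactic_kj(a1_ml_per_s, t1_s, kj_per_l_o2=20.9):
    """ATP-CP contribution from the fast EPOC component: A1 (mL/s) x t1 (s), mL -> L -> kJ."""
    return a1_ml_per_s * t1_s / 1000.0 * kj_per_l_o2

def lactic_kj(lactate_peak_mm, lactate_rest_mm, body_mass_kg, kj_per_l_o2=20.9):
    """Lactic contribution: 3 mL O2 per kg per mmol/L of net lactate accumulation."""
    o2_equivalent_ml = 3.0 * (lactate_peak_mm - lactate_rest_mm) * body_mass_kg
    return o2_equivalent_ml / 1000.0 * kj_per_l_o2
```

Summing the three terms gives the total energy expenditure, and each term divided by the total gives the relative contribution of its energy system.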
Physiology, Issue 61, aerobic metabolism, anaerobic alactic metabolism, anaerobic lactic metabolism, exercise, athletes, mathematical model
Assessment of Motor Balance and Coordination in Mice using the Balance Beam
Authors: Tinh N. Luong, Holly J. Carlisle, Amber Southwell, Paul H. Patterson.
Institutions: California Institute of Technology.
Brain injury, genetic manipulations, and pharmacological treatments can result in alterations of motor skills in mice. Fine motor coordination and balance can be assessed by the beam walking assay. The goal of this test is for the mouse to stay upright and walk across an elevated narrow beam to a safe platform. This test takes place over 3 consecutive days: 2 days of training and 1 day of testing. Performance on the beam is quantified by measuring the time it takes for the mouse to traverse the beam and the number of paw slips that occur in the process. Here we report the protocol used in our laboratory, and representative results from a cohort of C57BL/6 mice. This task is particularly useful for detecting subtle deficits in motor skills and balance that may not be detected by other motor tests, such as the Rotarod.
Neuroscience, Issue 49, motor skills, coordination, balance beam test, mouse behavior
Movement Retraining using Real-time Feedback of Performance
Authors: Michael Anthony Hunt.
Institutions: University of British Columbia.
Any modification of movement - especially movement patterns that have been honed over a number of years - requires re-organization of the neuromuscular patterns responsible for governing the movement performance. This motor learning can be enhanced through a number of methods that are utilized in research and clinical settings alike. In general, verbal feedback of performance in real-time or knowledge of results following movement is commonly used clinically as a preliminary means of instilling motor learning. Depending on patient preference and learning style, visual feedback (e.g. through use of a mirror or different types of video) or proprioceptive guidance utilizing therapist touch, are used to supplement verbal instructions from the therapist. Indeed, a combination of these forms of feedback is commonplace in the clinical setting to facilitate motor learning and optimize outcomes. Laboratory-based, quantitative motion analysis has been a mainstay in research settings to provide accurate and objective analysis of a variety of movements in healthy and injured populations. While the actual mechanisms of capturing the movements may differ, all current motion analysis systems rely on the ability to track the movement of body segments and joints and to use established equations of motion to quantify key movement patterns. Due to limitations in acquisition and processing speed, analysis and description of the movements has traditionally occurred offline after completion of a given testing session. This paper will highlight a new supplement to standard motion analysis techniques that relies on the near instantaneous assessment and quantification of movement patterns and the display of specific movement characteristics to the patient during a movement analysis session. As a result, this novel technique can provide a new method of feedback delivery that has advantages over currently used feedback methods.
Medicine, Issue 71, Biophysics, Anatomy, Physiology, Physics, Biomedical Engineering, Behavior, Psychology, Kinesiology, Physical Therapy, Musculoskeletal System, Biofeedback, biomechanics, gait, movement, walking, rehabilitation, clinical, training
The 5-Choice Serial Reaction Time Task: A Task of Attention and Impulse Control for Rodents
Authors: Samuel K. Asinof, Tracie A. Paine.
Institutions: Oberlin College.
This protocol describes the 5-choice serial reaction time task (5CSRTT), which is an operant-based task used to study attention and impulse control in rodents. Test day challenges (modifications to the standard task) can be used to systematically tax the neural systems controlling either attention or impulse control. Importantly, these challenges have consistent effects on behavior across laboratories in intact animals and can reveal either enhancements or deficits in cognitive function that are not apparent when rats are only tested on the standard task. The variety of behavioral measures that are collected can be used to determine whether other factors (i.e., sedation, motivation deficits, locomotor impairments) are contributing to changes in performance. The versatility of the 5CSRTT is further enhanced because it is amenable to combination with pharmacological, molecular, and genetic techniques.
Neuroscience, Issue 90, attention, impulse control, neuroscience, cognition, rodent
Quantification of Orofacial Phenotypes in Xenopus
Authors: Allyson E. Kennedy, Amanda J. Dickinson.
Institutions: Virginia Commonwealth University.
Xenopus has become an important tool for dissecting the mechanisms governing craniofacial development and defects. A method to quantify orofacial development will allow for more rigorous analysis of orofacial phenotypes upon perturbation with substances that genetically or molecularly manipulate gene expression or protein function. Using two-dimensional images of the embryonic heads, traditional size dimensions, such as orofacial width, height, and area, are measured. In addition, a roundness measure of the embryonic mouth opening is used to describe the shape of the mouth. Geometric morphometrics of these two-dimensional images is also performed to provide a more sophisticated view of changes in the shape of the orofacial region. Landmarks are assigned to specific points in the orofacial region and coordinates are created. A principal component analysis is used to reduce landmark coordinates to principal components that then discriminate the treatment groups. These results are displayed as a scatter plot in which individuals with similar orofacial shapes cluster together. It is also useful to perform a discriminant function analysis, which statistically compares the positions of the landmarks between two treatment groups. This analysis is displayed on a transformation grid where changes in landmark position are viewed as vectors. A grid is superimposed on these vectors so that a warping pattern is displayed to show where landmark positions have changed significantly. Shape changes in the discriminant function analysis are based on a statistical measure, and therefore can be evaluated by a p-value. This analysis is simple and accessible, requiring only a stereoscope and freeware software, and thus will be a valuable research and teaching resource.
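The principal component step described above can be sketched with a plain singular value decomposition. The sketch assumes the landmark coordinates have already been superimposed (e.g. by Procrustes alignment), which dedicated morphometrics software normally handles:

```python
import numpy as np

def landmark_pca(coords):
    """PCA of flattened landmark coordinates, one specimen per row.

    Returns per-specimen scores and the fraction of variance
    each principal component explains."""
    centered = coords - coords.mean(axis=0)
    # SVD of the centered matrix yields the principal axes
    # without explicitly forming a covariance matrix
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u * s
    variance = s ** 2
    return scores, variance / variance.sum()

# Toy data: four specimens whose two landmarks shift together along x only
coords = np.array([[0, 0, 1, 1],
                   [1, 0, 2, 1],
                   [2, 0, 3, 1],
                   [3, 0, 4, 1]], dtype=float)
scores, explained = landmark_pca(coords)
```

In this toy example the first component captures essentially all of the shape variation, so the specimens would fall along a single axis of the scatter plot described above.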
Developmental Biology, Issue 93, Orofacial quantification, geometric morphometrics, Xenopus, orofacial development, orofacial defects, shape changes, facial dimensions
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
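For orientation analysis of the kind described, an even-symmetric (real) Gabor kernel is the basic building block; a minimal sketch of a small oriented filter bank, with illustrative parameter values rather than the authors' settings, is:

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=1.0):
    """Real, even-symmetric Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate into the filter frame
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()   # zero mean: flat image regions give no response

# A bank of 8 orientations spanning 0..pi for oriented-texture analysis
bank = [gabor_kernel(15, np.pi * k / 8, wavelength=6.0, sigma=3.0) for k in range(8)]
```

Convolving a mammogram with such a bank and taking, per pixel, the orientation of the maximal response yields the orientation field that the phase-portrait analysis then operates on.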
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of stained, resin-embedded samples, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
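To make the "software-guided setup of optimal experiment combinations" concrete, the combinatorial starting point is simple to sketch: a full factorial enumeration of factor levels, which DoE software would then reduce to an optimal subset (e.g. a D-optimal design). Factor names and levels here are hypothetical, not those used in the study:

```python
from itertools import product

def full_factorial(factors):
    """All combinations of factor levels, one run per dict (a full factorial design)."""
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

runs = full_factorial({
    "promoter": ["35S", "nos"],        # hypothetical regulatory elements
    "plant_age_days": [35, 42, 49],    # hypothetical growth stages
    "incubation_temp_c": [22, 25],     # hypothetical incubation conditions
})
```

Even this tiny example yields 2 x 3 x 2 = 12 runs, which illustrates why splitting the problem into modules and pruning the design, as described above, is essential once more factors are involved.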
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
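For readers unfamiliar with the source-analysis step, the regularized minimum-norm estimate has a compact closed form, x = L^T (L L^T + lambda*I)^-1 y, where L is the lead-field matrix linking sources to channels and y is the sensor data. A minimal sketch (not the London Baby Lab pipeline, which uses dedicated EEG software and realistic head models) is:

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    """Regularized minimum-norm source estimate: x = L^T (L L^T + lam*I)^-1 y."""
    n_channels = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_channels)
    return leadfield.T @ np.linalg.solve(gram, data)

# Toy check: with a square, well-conditioned lead field and tiny regularization,
# the estimate recovers the true sources
leadfield = np.array([[2.0, 0.0], [0.0, 1.0]])
sources = minimum_norm_estimate(leadfield, np.array([2.0, 1.0]), lam=1e-9)
```

In real EEG source analysis the lead field has far more sources than channels, so the problem is underdetermined and the regularization term (lambda) controls the trade-off between fitting the data and suppressing noise.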
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes, a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards a better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open-access software tools.
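The image-processing workflow above begins with binarization of the stained TATS images. A common choice for such a step is Otsu's global threshold; the sketch below is a library-free illustration of that technique on synthetic intensities, not the authors' actual pipeline:

```python
def otsu_threshold(pixels, levels=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    weight_bg = 0
    sum_bg = 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_bg += hist[t]          # pixels at or below threshold t
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (total_sum - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal intensities: dark background plus bright "tubule" pixels
pixels = [10] * 500 + [12] * 300 + [200] * 150 + [220] * 50
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

For real confocal or superresolution stacks one would apply such a threshold slice-wise (or use adaptive variants) before skeletonization.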
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Getting to Compliance in Forced Exercise in Rodents: A Critical Standard to Evaluate Exercise Impact in Aging-related Disorders and Disease
Authors: Jennifer C. Arnold, Michael F. Salvatore.
Institutions: Louisiana State University Health Sciences Center.
Awareness of the positive impact of exercise on several disease states with a neurobiological basis, including cognitive function and physical performance, has increased markedly. As a result, there is an increase in the number of animal studies employing exercise. It is argued that one intrinsic value of forced exercise is that the investigator has control over the factors that can influence the impact of exercise on behavioral outcomes, notably the frequency, duration, and intensity of the exercise regimen. However, compliance in forced exercise regimens may be an issue, particularly if potential confounds of employing foot-shock are to be avoided. It is also important to consider that, since most cognitive and locomotor impairments strike the aged individual, studies of the impact of exercise on these impairments should consider using aged rodents with the highest possible level of compliance, to ensure a minimal need for test subjects. Here, the pertinent steps and considerations necessary to achieve nearly 100% compliance to treadmill exercise in an aged rodent model will be presented and discussed. Notwithstanding the particular exercise regimen being employed by the investigator, our protocol should be of use to investigators who are particularly interested in the potential impact of forced exercise on aging-related impairments, including aging-related Parkinsonism and Parkinson's disease.
Behavior, Issue 90, Exercise, locomotor, Parkinson’s disease, aging, treadmill, bradykinesia, Parkinsonism
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside, University of California, Riverside, University of São Paulo - USP, ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach makes it possible to efficiently learn classification models that are very robust to over-fitting; and that a general classification framework allows an arbitrary number of features to be incorporated easily. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
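The Bayesian approach described above scores each candidate species by combining a prior with the likelihood of the observed flight features. A toy sketch of that idea using a hand-rolled Gaussian naive Bayes on a single hypothetical wingbeat-frequency feature follows; the species names and training values are invented for illustration:

```python
import math

def fit_gaussians(training):
    """Per class, estimate the mean and variance of the feature."""
    params = {}
    for label, values in training.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        params[label] = (mean, var)
    return params

def classify(x, params, priors):
    """Pick the class maximizing log prior + Gaussian log-likelihood."""
    def score(label):
        mean, var = params[label]
        return (math.log(priors[label])
                - 0.5 * math.log(2 * math.pi * var)
                - (x - mean) ** 2 / (2 * var))
    return max(params, key=score)

# Hypothetical wingbeat frequencies (Hz) per species
training = {
    "Aedes aegypti": [450, 470, 460, 480],
    "Musca domestica": [190, 210, 200, 205],
}
params = fit_gaussians(training)
priors = {"Aedes aegypti": 0.5, "Musca domestica": 0.5}
print(classify(465, params, priors))  # → Aedes aegypti
```

Extrinsic features such as time of capture (circadian rhythm) would enter the same framework as additional likelihood terms, which is what makes the feature count easy to extend.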
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
Morris Water Maze Experiment
Authors: Joseph Nunez.
Institutions: Michigan State University (MSU).
The Morris water maze is widely used to study spatial memory and learning. Animals are placed in a pool of water that is colored opaque with powdered non-fat milk or non-toxic tempera paint, where they must swim to a hidden escape platform. Because they are in opaque water, the animals cannot see the platform, and cannot rely on scent to find the escape route. Instead, they must rely on external/extra-maze cues. As the animals become more familiar with the task, they are able to find the platform more quickly. Developed by Richard G. Morris in 1984, this paradigm has become one of the "gold standards" of behavioral neuroscience.
Behavior, Issue 19, Declarative, Hippocampus, Memory, Procedural, Rodent, Spatial Learning
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques [1,5-9]. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, which has prevented their more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction to multivariate techniques for the novice.
A conceptual introduction is followed by a very simple application to a diagnostic dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
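Extracting a covariance pattern of the kind described above amounts to finding the leading eigenvector of the region-by-region covariance matrix. A minimal sketch with synthetic data follows; the "subjects x regions" values are invented, and power iteration stands in for a full eigendecomposition:

```python
def covariance(data):
    """data: list of samples, each a list of region values; returns cov matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in data:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (row[i] - means[i]) * (row[j] - means[j]) / (n - 1)
    return cov

def leading_pattern(cov, iters=100):
    """Power iteration: the dominant eigenvector is the strongest covariance pattern."""
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Synthetic subjects x regions: regions 0 and 1 co-vary strongly, region 2 barely varies
data = [[1.0, 1.1, 0.2], [2.0, 2.1, 0.1], [3.0, 2.9, 0.3], [4.0, 4.2, 0.2]]
pattern = leading_pattern(covariance(data))
```

The recovered pattern loads heavily on the two co-varying regions and weakly on the third, which is exactly the "network signature" interpretation the abstract contrasts with voxel-wise analysis.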
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Authors: David C. Shih, Kevin C. Ho, Kyle M. Melnick, Ronald A. Rensink, Tobias R. Kollmann, Edgardo S. Fortuno III.
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software Tableau after evaluating several VA tools. Two types of analysis tasks (analysis within and between datasets) were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained based on the symmetry of the skull and anatomical features in the brain CT scan. Then, the ventricles are segmented from the CT scan and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP, such as texture information and the amount of blood in the CT scans, are extracted; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. Evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
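The ideal-midline step above exploits skull symmetry. A toy sketch of that idea scores each candidate column of a 2D intensity grid by its left-right mirror mismatch and picks the most symmetric one; the grid here is synthetic, not a real CT slice, and the authors' actual method also uses anatomical features:

```python
def ideal_midline(img, margin=2):
    """Return the column index minimizing summed left/right intensity mismatch."""
    rows, cols = len(img), len(img[0])
    best_col, best_score = None, float("inf")
    for c in range(margin, cols - margin):
        reach = min(c, cols - 1 - c)   # how far we can mirror around column c
        score = sum(abs(img[r][c - k] - img[r][c + k])
                    for r in range(rows) for k in range(1, reach + 1))
        score /= reach                 # normalize so wide and narrow reaches compare fairly
        if score < best_score:
            best_col, best_score = c, score
    return best_col

# Synthetic slice: bright "skull" columns symmetric about column 3
img = [
    [0, 9, 5, 1, 5, 9, 0],
    [0, 9, 4, 1, 4, 9, 0],
    [0, 9, 5, 2, 5, 9, 0],
]
print(ideal_midline(img))  # → 3
```

The actual midline is then found separately (via ventricle shape matching), and the shift is the displacement between the two lines.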
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms display the most relevant videos available, which can sometimes result in matched videos with only a slight relation.