Pubmed Article
Verbal Autopsy: Evaluation of Methods to Certify Causes of Death in Uganda.
PUBLISHED: 06-19-2015
To assess different methods for determining cause of death from verbal autopsy (VA) questionnaire data, the intra-rater reliability of Physician-Certified Verbal Autopsy (PCVA) and the accuracy of PCVA, expert-derived (non-hierarchical) and data-driven (hierarchical) algorithms were assessed for determining common causes of death in Ugandan children. A verbal autopsy validation study was conducted from 2008-2009 in three different sites in Uganda. The dataset included 104 neonatal deaths (0-27 days) and 615 childhood deaths (1-59 months) with the cause(s) of death classified by PCVA and physician review of hospital medical records (the 'reference standard'). Of the original 719 questionnaires, 141 (20%) were selected for a second review by the same physicians; the repeat cause(s) of death were compared to the original, and agreement was assessed using the Kappa statistic. Physician reviewers refined non-hierarchical algorithms for common causes of death from existing expert algorithms, from which hierarchical algorithms were developed. The accuracy of PCVA, non-hierarchical, and hierarchical algorithms for determining cause(s) of death from all 719 VA questionnaires was determined using the reference standard. Overall, intra-rater repeatability was high (83% agreement, Kappa 0.79 [95% CI 0.76-0.82]). PCVA performed well, with high specificity for determining cause of neonatal (>67%) and childhood (>83%) deaths, resulting in fairly accurate cause-specific mortality fraction (CSMF) estimates. For most causes of death in children, non-hierarchical algorithms had higher sensitivity, but correspondingly lower specificity, than PCVA and hierarchical algorithms, resulting in inaccurate CSMF estimates. Hierarchical algorithms were specific for most causes of death, and CSMF estimates were comparable to the reference standard and PCVA. Intra-rater reliability of PCVA was high, and overall PCVA performed well. Hierarchical algorithms performed better than non-hierarchical algorithms due to higher specificity and more accurate CSMF estimates. Use of PCVA to determine cause of death from VA questionnaire data is reasonable while automated data-driven algorithms are improved.
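As a rough illustration of the agreement and CSMF metrics reported above, the following Python sketch computes Cohen's Kappa for a repeat review and a cause-specific mortality fraction from toy cause-of-death assignments; the data, cause labels, and function names are invented for illustration and are not taken from the study.

    # Illustrative sketch only (invented data, not study results): Cohen's Kappa for a
    # repeat cause-of-death review, and a cause-specific mortality fraction (CSMF).
    from collections import Counter

    def cohens_kappa(first, second):
        # Chance-corrected agreement between two assignments over the same deaths.
        n = len(first)
        observed = sum(a == b for a, b in zip(first, second)) / n
        count_first, count_second = Counter(first), Counter(second)
        causes = set(first) | set(second)
        expected = sum((count_first[c] / n) * (count_second[c] / n) for c in causes)
        return (observed - expected) / (1 - expected)

    def csmf(assignments):
        # Fraction of all deaths attributed to each cause.
        counts = Counter(assignments)
        return {cause: k / len(assignments) for cause, k in counts.items()}

    original_review = ["malaria", "pneumonia", "malaria", "sepsis", "pneumonia"]
    repeat_review   = ["malaria", "pneumonia", "sepsis",  "sepsis", "pneumonia"]
    print(cohens_kappa(original_review, repeat_review))  # about 0.71 for this toy data
    print(csmf(original_review))                         # {'malaria': 0.4, 'pneumonia': 0.4, 'sepsis': 0.2}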
Authors: Martin S Angst, Martha Tingle, Nicholas G Phillips, Brendan Carvalho.
Published: 01-14-2009
In a previous article in the Journal of Visualized Experiments we demonstrated skin microdialysis techniques for the collection of tissue-specific nociceptive and inflammatory biochemicals in humans. In this article we show pain-testing paradigms that are often used in tandem with microdialysis procedures. Combining pain tests with microdialysis provides the critical link between behavioral and biochemical data that allows identification of key biochemicals responsible for generating and propagating pain. Two models of evoking pain in inflamed skin of human study participants are shown. The first model evokes pain with the aid of heat stimuli. Heat-evoked pain as described here is predominantly mediated by small, non-myelinated peripheral nociceptive nerve fibers (C-fibers). The second model evokes pain via punctuated pressure stimuli. Punctuated pressure-evoked pain is predominantly mediated by small, myelinated peripheral nociceptive nerve fibers (A-delta fibers). The two models are mechanistically distinct and independently examine nociceptive processing by the two major peripheral nerve fiber populations involved in pain signaling. Heat pain is evoked with the aid of the TSA II, a commercially available thermo-sensory analyzer (Medoc Advanced Medical Systems, Durham, NC). Stimulus configuration and delivery are handled with dedicated software. Thermodes vary in size and shape but in principle consist of a metal plate that can be heated or cooled at various rates and for different periods of time. Algorithms assessing heat-evoked pain are manifold. In the experiments shown here, study participants are asked to indicate at what point they start experiencing pain while the thermode in contact with the skin is heated at a predetermined rate, starting at a temperature that does not evoke pain. The thermode temperature at which a subject starts experiencing pain constitutes the heat pain threshold. Mechanical pain is evoked with punctuated probes. Such probes are commercially available from several manufacturers (von Frey hairs). However, the accuracy of von Frey hairs has been criticized, and many investigators use custom-made punctuated pressure probes. In the experiments shown here, eight custom-made punctuated probes of different weights are applied in consecutive order, a procedure called the up-down algorithm, to identify perceptional deflection points, i.e., a change from feeling no pain to feeling pain or vice versa. The average weight causing a perceptional deflection constitutes the mechanical pain threshold.
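The up-down procedure described above lends itself to a short sketch. The following Python fragment is a simplified, hypothetical staircase over eight probe weights, where the threshold is taken as the mean weight at the perceptional deflection points; the probe weights, number of reversals, and simulated subject are illustrative assumptions, not the published protocol.

    # Simplified, hypothetical sketch of an up-down (staircase) procedure over eight
    # probe weights; the weights, reversal count, and simulated subject are invented.
    def up_down_threshold(weights_g, feels_pain, n_reversals=6):
        i = 0                       # start at the lightest probe
        last_response = None
        reversal_weights = []       # weights at perceptional deflection points
        while len(reversal_weights) < n_reversals:
            painful = feels_pain(weights_g[i])
            if last_response is not None and painful != last_response:
                reversal_weights.append(weights_g[i])
            last_response = painful
            if painful and i > 0:
                i -= 1              # step down to a lighter probe after pain
            elif not painful and i < len(weights_g) - 1:
                i += 1              # step up to a heavier probe after no pain
        return sum(reversal_weights) / len(reversal_weights)

    probes_g = [8, 16, 32, 64, 128, 256, 512, 1024]   # eight probe weights in grams (assumed)
    toy_subject = lambda w: w >= 128                  # simulated threshold near 128 g
    print(up_down_threshold(probes_g, toy_subject))   # averages the deflection-point weights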
28 Related JoVE Articles!
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g., leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Authors: Bianca DeBenedictis, J. Bruce Morton.
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
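As a minimal illustration of the switch cost mentioned in the entry above, the sketch below computes the difference between mean reaction times on switch and repeat trials from invented trial data; it does not reproduce the fMRI connectivity analysis itself.

    # Toy illustration with invented reaction times: the switch cost is the mean RT on
    # switch trials minus the mean RT on repeat trials.
    trials = [
        {"type": "repeat", "rt_ms": 620}, {"type": "repeat", "rt_ms": 655},
        {"type": "switch", "rt_ms": 840}, {"type": "repeat", "rt_ms": 600},
        {"type": "switch", "rt_ms": 790},
    ]
    mean = lambda values: sum(values) / len(values)
    switch_rt = mean([t["rt_ms"] for t in trials if t["type"] == "switch"])
    repeat_rt = mean([t["rt_ms"] for t in trials if t["type"] == "repeat"])
    print(switch_rt - repeat_rt)   # a positive switch cost: slower on switch trials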
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
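The analysis described above operates on time-stamped behavioral event records. As a toy illustration only (the actual system uses a MATLAB-based language and its own event codes), the following Python snippet counts head entries per hour from a hypothetical event log.

    # Toy illustration (hypothetical event log, not the system's own format or code):
    # count hopper head entries per hour from time-stamped behavioral events.
    from collections import defaultdict

    events = [                       # (seconds from session start, event label)
        (12.4, "head_entry_hopper1"), (13.1, "pellet_delivered"),
        (3605.2, "head_entry_hopper2"), (3700.8, "head_entry_hopper1"),
    ]
    entries_per_hour = defaultdict(int)
    for t, label in events:
        if label.startswith("head_entry"):
            entries_per_hour[int(t // 3600)] += 1
    print(dict(entries_per_hour))    # {0: 1, 1: 2}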
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
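For readers unfamiliar with how a factorial DoE screen is laid out, the following Python sketch enumerates a small full-factorial design matrix; the factor names and levels are purely illustrative and are not the factors or software used in the study.

    # Illustrative sketch: enumerating a small full-factorial design. Factor names and
    # levels are invented and are not the factors or software used in the study.
    from itertools import product

    factors = {
        "promoter":        ["35S", "nos"],
        "incubation_temp": [22, 25, 28],   # degrees Celsius
        "plant_age_days":  [35, 42],
    }
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    print(len(runs))   # 2 x 3 x 2 = 12 experimental runs
    print(runs[0])     # {'promoter': '35S', 'incubation_temp': 22, 'plant_age_days': 35}

Dedicated DoE software would typically replace this exhaustive grid with a fractional or augmented design to keep the number of runs manageable.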
Tissue Triage and Freezing for Models of Skeletal Muscle Disease
Authors: Hui Meng, Paul M.L. Janssen, Robert W. Grange, Lin Yang, Alan H. Beggs, Lindsay C. Swanson, Stacy A. Cossette, Alison Frase, Martin K. Childers, Henk Granzier, Emanuela Gussoni, Michael W. Lawlor.
Institutions: Medical College of Wisconsin, The Ohio State University, Virginia Tech, University of Kentucky, Boston Children's Hospital, Harvard Medical School, Cure Congenital Muscular Dystrophy, Joshua Frase Foundation, University of Washington, University of Arizona.
Skeletal muscle is a unique tissue because of its structure and function, which requires specific protocols for tissue collection to obtain optimal results from functional, cellular, molecular, and pathological evaluations. Due to the subtlety of some pathological abnormalities seen in congenital muscle disorders and the potential for fixation to interfere with the recognition of these features, pathological evaluation of frozen muscle is preferable to fixed muscle when evaluating skeletal muscle for congenital muscle disease. Additionally, the potential to produce severe freezing artifacts in muscle requires specific precautions when freezing skeletal muscle for histological examination that are not commonly used when freezing other tissues. This manuscript describes a protocol for rapid freezing of skeletal muscle using isopentane (2-methylbutane) cooled with liquid nitrogen to preserve optimal skeletal muscle morphology. This procedure is also effective for freezing tissue intended for genetic or protein expression studies. Furthermore, we have integrated our freezing protocol into a broader procedure that also describes preferred methods for the short term triage of tissue for (1) single fiber functional studies and (2) myoblast cell culture, with a focus on the minimum effort necessary to collect tissue and transport it to specialized research or reference labs to complete these studies. Overall, this manuscript provides an outline of how fresh tissue can be effectively distributed for a variety of phenotypic studies and thereby provides standard operating procedures (SOPs) for pathological studies related to congenital muscle disease.
Basic Protocol, Issue 89, Tissue, Freezing, Muscle, Isopentane, Pathology, Functional Testing, Cell Culture
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Authors: Noah S. Philip, S. Louisa Carpenter, Lawrence H. Sweet.
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD). Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions. Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
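As a minimal, hypothetical illustration of the resting-state connectivity measure referred to above, the snippet below correlates two synthetic region-of-interest time courses (standing in for medial prefrontal and posterior cingulate signals); real analyses add preprocessing, nuisance regression, and group statistics.

    # Minimal, hypothetical illustration: resting-state connectivity summarized as the
    # Pearson correlation between two synthetic ROI time courses.
    import numpy as np

    rng = np.random.default_rng(0)
    n_volumes = 200
    shared = rng.standard_normal(n_volumes)                # common fluctuation
    mpfc = shared + 0.5 * rng.standard_normal(n_volumes)   # stand-in medial prefrontal signal
    pcc  = shared + 0.5 * rng.standard_normal(n_volumes)   # stand-in posterior cingulate signal
    connectivity = np.corrcoef(mpfc, pcc)[0, 1]
    print(round(float(connectivity), 2))                   # strong positive coupling, by construction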
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
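To make the semi-automated end of the segmentation spectrum concrete, here is a deliberately simple Python sketch that thresholds a random stand-in volume and labels connected components; actual EM segmentation as described above involves denoising, parameter tuning, and manual verification, and the threshold used here is arbitrary.

    # Deliberately simple sketch: global thresholding plus 3D connected-component
    # labeling on a random stand-in volume; the threshold is arbitrary.
    import numpy as np
    from scipy import ndimage

    volume = np.random.default_rng(1).normal(size=(64, 64, 64))   # stand-in for an EM volume
    foreground = volume > 2.0                                     # global intensity threshold
    labels, n_objects = ndimage.label(foreground)                 # 3D connected components
    sizes = ndimage.sum(foreground, labels, index=range(1, n_objects + 1))
    print(n_objects, sizes.max())                                 # object count and largest object size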
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
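The source analysis mentioned above relies on minimum-norm estimation. The following sketch shows only the core regularized minimum-norm algebra with random stand-in matrices; the lead field G, the regularization choice, and the data are placeholders, and real pipelines built on individual or age-specific head models add noise whitening, depth weighting, and anatomical constraints.

    # Core regularized minimum-norm algebra with random stand-in matrices; the lead
    # field, regularization, and data are placeholders, not a full EEG pipeline.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_sources, n_times = 64, 500, 100
    G = rng.standard_normal((n_sensors, n_sources))   # forward model (lead field)
    Y = rng.standard_normal((n_sensors, n_times))     # sensor-level EEG data

    lam = 0.1 * np.trace(G @ G.T) / n_sensors         # simple regularization choice (assumed)
    W = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))   # minimum-norm inverse operator
    X_hat = W @ Y                                     # estimated source time courses
    print(X_hat.shape)                                # (500, 100)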
A Neuroscientific Approach to the Examination of Concussions in Student-Athletes
Authors: Caroline J. Ketcham, Eric Hall, Walter R. Bixby, Srikant Vallabhajosula, Stephen E. Folger, Matthew C. Kostek, Paul C. Miller, Kenneth P. Barnes, Kirtida Patel.
Institutions: Elon University, Duquesne University.
Concussions are occurring at alarming rates in the United States and have become a serious public health concern. The CDC estimates that 1.6 to 3.8 million concussions occur in sports and recreational activities annually. Concussion as defined by the 2013 Concussion Consensus Statement “may be caused either by a direct blow to the head, face, neck or elsewhere on the body with an ‘impulsive’ force transmitted to the head.” Concussions leave the individual with both short- and long-term effects. The short-term effects of sport-related concussions may include changes in playing ability, confusion, memory disturbance, the loss of consciousness, slowing of reaction time, loss of coordination, headaches, dizziness, vomiting, changes in sleep patterns and mood changes. These symptoms typically resolve in a matter of days. However, while some individuals recover from a single concussion rather quickly, many experience lingering effects that can last for weeks or months. The factors related to concussion susceptibility and the subsequent recovery times are not well known or understood at this time. Several factors have been suggested and they include the individual’s concussion history, the severity of the initial injury, history of migraines, history of learning disabilities, history of psychiatric comorbidities, and possibly, genetic factors. Many studies have individually investigated certain factors, including the short- and long-term effects of concussions, recovery time course, and susceptibility. What has not been clearly established is an effective multifaceted approach to concussion evaluation that would yield valuable information related to the etiology, functional changes, and recovery. The purpose of this manuscript is to show one such multifaceted approach, which examines concussions using computerized neurocognitive testing, event related potentials, somatosensory perceptual responses, balance assessment, gait assessment and genetic testing.
Medicine, Issue 94, Concussions, Student-Athletes, Mild Traumatic Brain Injury, Genetics, Cognitive Function, Balance, Gait, Somatosensory
Practical Methodology of Cognitive Tasks Within a Navigational Assessment
Authors: Manon Robillard, Chantal Mayer-Crittenden, Annie Roy-Charland, Michèle Minor-Corriveau, Roxanne Bélanger.
Institutions: Laurentian University.
This paper describes an approach for measuring navigation accuracy relative to cognitive skills. The methodology behind the assessment will thus be clearly outlined in a step-by-step manner. Navigational skills are important when trying to find symbols within a speech-generating device (SGD) that has a dynamic screen and taxonomical organization. The following skills have been found to impact children’s ability to find symbols when navigating within the levels of an SGD: sustained attention, categorization, cognitive flexibility, and fluid reasoning1,2. According to past studies, working memory was not correlated with navigation1,2. The materials needed for this method include a computerized tablet, an augmentative and alternative communication application, a booklet of symbols, and the Leiter International Performance Scale-Revised (Leiter-R)3. This method has been used in two previous studies. Robillard, Mayer-Crittenden, Roy-Charland, Minor-Corriveau and Bélanger1 assessed typically developing children, while Rondeau, Robillard and Roy-Charland2 assessed children and adolescents with a diagnosis of Autism Spectrum Disorder. The direct observation of this method will facilitate the replication of this study for researchers. It will also help clinicians that work with children who have complex communication needs to determine the children’s ability to navigate an SGD with taxonomical categorization.
Behavior, Issue 100, Augmentative and alternative communication, navigation, cognition, assessment, speech-language pathology, children
Modulating Cognition Using Transcranial Direct Current Stimulation of the Cerebellum
Authors: Paul A. Pope.
Institutions: University of Birmingham.
Numerous studies have emerged recently that demonstrate the possibility of modulating, and in some cases enhancing, cognitive processes by exciting brain regions involved in working memory and attention using transcranial electrical brain stimulation. Some researchers now believe the cerebellum supports cognition, possibly via a remote neuromodulatory effect on the prefrontal cortex. This paper describes a procedure for investigating a role for the cerebellum in cognition using transcranial direct current stimulation (tDCS), and a selection of information-processing tasks of varying difficulty, which have previously been shown to involve working memory, attention and cerebellar functioning. One task is called the Paced Auditory Serial Addition Task (PASAT) and the other a novel variant of this task called the Paced Auditory Serial Subtraction Task (PASST). A verb generation task and its two controls (noun and verb reading) were also investigated. All five tasks were performed by three separate groups of participants, before and after the modulation of cortico-cerebellar connectivity using anodal, cathodal or sham tDCS over the right cerebellar cortex. The procedure demonstrates how performance (accuracy, verbal response latency and variability) could be selectively improved after cathodal stimulation, but only during tasks that the participants rated as difficult, and not easy. Performance was unchanged by anodal or sham stimulation. These findings demonstrate a role for the cerebellum in cognition, whereby activity in the left prefrontal cortex is likely disinhibited by cathodal tDCS over the right cerebellar cortex. Transcranial brain stimulation is growing in popularity in various labs and clinics. However, the after-effects of tDCS are inconsistent between individuals and not always polarity-specific, and may even be task- or load-specific, all of which require further study. Future efforts might also be guided towards neuro-enhancement in cerebellar patients presenting with cognitive impairment once a better understanding of brain stimulation mechanisms has emerged.
Behavior, Issue 96, Cognition, working memory, tDCS, cerebellum, brain stimulation, neuro-modulation, neuro-enhancement
A Piglet Model of Neonatal Hypoxic-Ischemic Encephalopathy
Authors: Kasper J. Kyng, Torjus Skajaa, Sigrid Kerrn-Jespersen, Christer S. Andreassen, Kristine Bennedsgaard, Tine B. Henriksen.
Institutions: Institute of Clinical Medicine, Aarhus University Hospital.
Birth asphyxia, which causes hypoxic-ischemic encephalopathy (HIE), accounts for 0.66 million deaths worldwide each year, about a quarter of the world’s 2.9 million neonatal deaths. Animal models of HIE have contributed to the understanding of the pathophysiology in HIE, and have highlighted the dynamic processes that occur in brain injury due to perinatal asphyxia. Thus, animal studies have suggested a time-window for post-insult treatment strategies. Hypothermia has been tested as a treatment for HIE in piglet models and subsequently proven effective in clinical trials. Variations of the model have been applied in the study of adjunctive neuroprotective methods, and piglet studies of xenon and melatonin have led to clinical phase I and II trials1,2. The piglet HIE model is further used for neonatal resuscitation and hemodynamic studies as well as in investigations of cerebral hypoxia on a cellular level. However, it is a technically challenging model and variations in the protocol may result in either too mild or too severe brain injury. In this article, we demonstrate the technical procedures necessary for establishing a stable piglet model of neonatal HIE. First, the newborn piglet (< 24 hr old, median weight 1500 g) is anesthetized, intubated, and monitored in a setup comparable to that found in a neonatal intensive care unit. Global hypoxia-ischemia is induced by lowering the inspiratory oxygen fraction to achieve global hypoxia, ischemia through hypotension and a flat trace amplitude integrated EEG (aEEG) indicative of cerebral hypoxia. Survival is promoted by adjusting oxygenation according to the aEEG response and blood pressure. Brain injury is quantified by histopathology and magnetic resonance imaging after 72 hr.
Medicine, Issue 99, Piglet, swine, neonatal, hypoxic-ischemic encephalopathy (HIE), asphyxia, hypoxia, amplitude integrated EEG (aEEG), neuroscience, brain injury
An Orthotopic Murine Model of Human Prostate Cancer Metastasis
Authors: Janet Pavese, Irene M. Ogden, Raymond C. Bergan.
Institutions: Northwestern University.
Our laboratory has developed a novel orthotopic implantation model of human prostate cancer (PCa). As PCa death is not due to the primary tumor, but rather the formation of distant metastases, the ability to effectively model this progression pre-clinically is of high value. In this model, cells are directly implanted into the ventral lobe of the prostate in Balb/c athymic mice, and allowed to progress for 4-6 weeks. At experiment termination, several distinct endpoints can be measured, such as size and molecular characterization of the primary tumor, the presence and quantification of circulating tumor cells in the blood and bone marrow, and formation of metastasis to the lung. In addition to a variety of endpoints, this model provides a picture of a cell's ability to invade and escape the primary organ, enter and survive in the circulatory system, and implant and grow in a secondary site. This model has been used effectively to measure metastatic response both to changes in protein expression and to small molecule therapeutics, in a short turnaround time.
Medicine, Issue 79, Urogenital System, Male Urogenital Diseases, Surgical Procedures, Operative, Life Sciences (General), Prostate Cancer, Metastasis, Mouse Model, Drug Discovery, Molecular Biology
Pulse Wave Velocity Testing in the Baltimore Longitudinal Study of Aging
Authors: Melissa David, Omar Malti, Majd AlGhatrif, Jeanette Wright, Marco Canepa, James B. Strait.
Institutions: National Institute on Aging.
Carotid-femoral pulse wave velocity is considered the gold standard for measurements of central arterial stiffness obtained through noninvasive methods1. Subjects are placed in the supine position and allowed to rest quietly for at least 10 min prior to the start of the exam. The proper cuff size is selected and a blood pressure is obtained using an oscillometric device. Once a resting blood pressure has been obtained, pressure waveforms are acquired from the right femoral and right common carotid arteries. The system then automatically calculates the pulse transit time between these two sites (using the carotid artery as a surrogate for the descending aorta). Body surface measurements are used to determine the distance traveled by the pulse wave between the two sampling sites. This distance is then divided by the pulse transit time resulting in the pulse wave velocity. The measurements are performed in triplicate and the average is used for analysis.
Medicine, Issue 84, Pulse Wave Velocity (PWV), Pulse Wave Analysis (PWA), Arterial stiffness, Aging, Cardiovascular, Carotid-femoral pulse
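The calculation itself is simple division, as in this illustrative sketch with made-up numbers (surface distance divided by transit time, averaged over the triplicate measurements):

    # Illustrative numbers only: PWV = surface distance / pulse transit time,
    # averaged over triplicate measurements.
    distance_m = 0.52                          # carotid-to-femoral surface distance (example)
    transit_times_s = [0.062, 0.064, 0.063]    # three repeated transit-time readings (example)
    pwv_trials = [distance_m / t for t in transit_times_s]
    pwv = sum(pwv_trials) / len(pwv_trials)
    print(round(pwv, 1), "m/s")                # 8.3 m/s for these numbers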
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
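One of the problems mentioned above, lateral drift, is commonly corrected by tracking fiducial markers. The following Python sketch illustrates the idea on synthetic localizations; the numbers, noise levels, and correction scheme are invented for illustration and are much simpler than what rainSTORM or other STORM software actually does.

    # Invented example of fiducial-based drift correction, far simpler than rainSTORM:
    # subtract the fiducial marker's trajectory from the single-molecule localizations.
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames = 1000
    drift = np.cumsum(rng.normal(0, 1.0, size=(n_frames, 2)), axis=0)   # slow lateral drift (nm)

    frames = rng.integers(0, n_frames, size=200)                        # frames with localizations
    true_xy = np.array([500.0, 750.0])                                  # one molecule's true position (nm)
    locs = true_xy + drift[frames] + rng.normal(0, 15, size=(200, 2))   # ~15 nm localization precision

    fiducial = drift + rng.normal(0, 2, size=(n_frames, 2))             # bright bead, tracked precisely
    corrected = locs - (fiducial[frames] - fiducial[0])
    print(locs.std(axis=0).round(1), corrected.std(axis=0).round(1))    # corrected spread is tighter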
An Investigation of the Effects of Sports-related Concussion in Youth Using Functional Magnetic Resonance Imaging and the Head Impact Telemetry System
Authors: Michelle Keightley, Stephanie Green, Nick Reed, Sabrina Agnihotri, Amy Wilkinson, Nancy Lobaugh.
Institutions: University of Toronto, Bloorview Kids Rehab, Toronto Rehab, Sunnybrook Health Sciences Centre.
One of the most commonly reported injuries in children who participate in sports is concussion or mild traumatic brain injury (mTBI)1. Children and youth involved in organized sports such as competitive hockey are nearly six times more likely to suffer a severe concussion compared to children involved in other leisure physical activities2. While the most common cognitive sequelae of mTBI appear similar for children and adults, the recovery profile and breadth of consequences in children remains largely unknown2, as does the influence of pre-injury characteristics (e.g. gender) and injury details (e.g. magnitude and direction of impact) on long-term outcomes. Competitive sports, such as hockey, allow the rare opportunity to utilize a pre-post design to obtain pre-injury data on youth characteristics and functioning before concussion occurs, and to relate this to outcome following injury. Our primary goals are to refine pediatric concussion diagnosis and management based on research evidence that is specific to children and youth. To do this we use new, multi-modal and integrative approaches that will: (1) evaluate the immediate effects of head trauma in youth; (2) monitor the resolution of post-concussion symptoms (PCS) and cognitive performance during recovery; and (3) utilize new methods to verify brain injury and recovery. To achieve our goals, we have implemented the Head Impact Telemetry (HIT) System (Simbex; Lebanon, NH, USA). This system equips commercially available Easton S9 hockey helmets (Easton-Bell Sports; Van Nuys, CA, USA) with single-axis accelerometers designed to measure real-time head accelerations during contact sport participation3-5. By using telemetric technology, the magnitude of acceleration and location of all head impacts during sport participation can be objectively detected and recorded. We also use functional magnetic resonance imaging (fMRI) to localize and assess changes in neural activity specifically in the medial temporal and frontal lobes during the performance of cognitive tasks, since those are the cerebral regions most sensitive to concussive head injury6. Finally, we are acquiring structural imaging data sensitive to damage in brain white matter.
Medicine, Issue 47, Mild traumatic brain injury, concussion, fMRI, youth, Head Impact Telemetry System
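As a toy illustration of the telemetry-based impact detection described above, the snippet below flags samples of a synthetic acceleration trace that exceed a recording threshold and reports the peak; the sampling rate, threshold, and trace are assumptions for illustration and do not describe the HIT System's internal processing.

    # Toy sketch (synthetic trace, assumed sampling rate and threshold): flag samples
    # above a recording threshold and report the peak head acceleration.
    import numpy as np

    fs = 1000                                             # samples per second (assumed)
    t = np.arange(0, 2, 1 / fs)
    trace = 1.0 + 0.2 * np.random.default_rng(0).standard_normal(t.size)   # baseline near 1 g
    trace[800:815] += 35.0                                # one injected synthetic impact

    threshold_g = 10.0                                    # illustrative recording threshold
    impact_samples = np.flatnonzero(trace > threshold_g)
    print("peak acceleration (g):", round(float(trace[impact_samples].max()), 1))
    print("impact onset (s):", round(float(t[impact_samples[0]]), 3))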
Manual Muscle Testing: A Method of Measuring Extremity Muscle Strength Applied to Critically Ill Patients
Authors: Nancy Ciesla, Victor Dinglas, Eddy Fan, Michelle Kho, Jill Kuramoto, Dale Needham.
Institutions: Johns Hopkins University, Johns Hopkins Hospital, University of Maryland Medical System.
Survivors of acute respiratory distress syndrome (ARDS) and other causes of critical illness often have generalized weakness, reduced exercise tolerance, and persistent nerve and muscle impairments after hospital discharge.1-6 Using an explicit protocol with a structured approach to training and quality assurance of research staff, manual muscle testing (MMT) is a highly reliable method for assessing strength, using a standardized clinical examination, for patients following ARDS, and can be completed with mechanically ventilated patients who can tolerate sitting upright in bed and are able to follow two-step commands. 7, 8 This video demonstrates a protocol for MMT, which has been taught to ≥43 research staff who have performed >800 assessments on >280 ARDS survivors. Modifications for the bedridden patient are included. Each muscle is tested with specific techniques for positioning, stabilization, resistance, and palpation for each score of the 6-point ordinal Medical Research Council scale.7,9-11 Three upper and three lower extremity muscles are graded in this protocol: shoulder abduction, elbow flexion, wrist extension, hip flexion, knee extension, and ankle dorsiflexion. These muscles were chosen based on the standard approach for evaluating patients for ICU-acquired weakness used in prior publications. 1,2.
Medicine, Issue 50, Muscle Strength, Critical illness, Intensive Care Units, Reproducibility of Results, Clinical Protocols.
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run. Scope has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
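The over-representation idea that underlies SCOPE's scoring can be illustrated with a few lines of Python: count how many promoters in a gene set contain a candidate motif and compare against a background set. The sequences, motif, and scoring here are toy simplifications and not SCOPE's actual algorithms.

    # Toy simplification of the over-representation idea (not SCOPE's actual scoring):
    # compare the fraction of gene-set promoters containing a candidate motif with a
    # background set.
    import re

    def genes_with_motif(promoters, motif):
        pattern = re.compile(motif)        # degenerate motifs would need IUPAC-to-regex translation
        return sum(1 for seq in promoters if pattern.search(seq))

    gene_set   = ["TTACCGGTAA", "CCACCGGTTT", "GGGACCGGTC"]   # toy promoter sequences
    background = ["TTTTAAAACC", "ACCGGTACGT", "GGGGCCCCAA"]   # toy background sequences
    motif = "ACCGGT"                                          # non-degenerate candidate motif
    print(genes_with_motif(gene_set, motif) / len(gene_set))          # 1.0
    print(genes_with_motif(background, motif) / len(background))      # about 0.33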
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Authors: Karin Hauffen, Eugene Bart, Mark Brady, Daniel Kersten, Jay Hegdé.
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Improving IV Insulin Administration in a Community Hospital
Authors: Michael C. Magee.
Institutions: Wyoming Medical Center.
Diabetes mellitus is a major independent risk factor for increased morbidity and mortality in the hospitalized patient, and elevated blood glucose concentrations, even in non-diabetic patients, predict poor outcomes.1-4 The 2008 consensus statement by the American Association of Clinical Endocrinologists (AACE) and the American Diabetes Association (ADA) states that "hyperglycemia in hospitalized patients, irrespective of its cause, is unequivocally associated with adverse outcomes."5 It is important to recognize that hyperglycemia occurs in patients with known or undiagnosed diabetes as well as during acute illness in those with previously normal glucose tolerance. The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study involved over six thousand adult intensive care unit (ICU) patients who were randomized to intensive glucose control or conventional glucose control.6 Surprisingly, this trial found that intensive glucose control increased the risk of mortality by 14% (odds ratio, 1.14; p=0.02). In addition, there was an increased prevalence of severe hypoglycemia in the intensive control group compared with the conventional control group (6.8% vs. 0.5%, respectively; p<0.001). From this pivotal trial and two others,7,8 Wyoming Medical Center (WMC) realized the importance of controlling hyperglycemia in the hospitalized patient while avoiding the negative impact of resultant hypoglycemia. Despite multiple revisions of an IV insulin paper protocol, analysis of data from usage of the paper protocol at WMC shows that, in terms of achieving normoglycemia while minimizing hypoglycemia, results were suboptimal. Therefore, through a systematic implementation plan, monitoring of patient blood glucose levels was switched from using a paper IV insulin protocol to a computerized glucose management system. By comparing blood glucose levels using the paper protocol to those of the computerized system, it was determined that, overall, the computerized glucose management system resulted in more rapid and tighter glucose control than the traditional paper protocol. Specifically, a substantial increase in the time spent within the target blood glucose concentration range, as well as a decrease in the prevalence of severe hypoglycemia (BG < 40 mg/dL), clinical hypoglycemia (BG < 70 mg/dL), and hyperglycemia (BG > 180 mg/dL), was witnessed in the first five months after implementation of the computerized glucose management system. The computerized system achieved target concentrations in greater than 75% of all readings while minimizing the risk of hypoglycemia. The prevalence of hypoglycemia (BG < 70 mg/dL) with the use of the computerized glucose management system was well under 1%.
Medicine, Issue 64, Physiology, Computerized glucose management, Endotool, hypoglycemia, hyperglycemia, diabetes, IV insulin, paper protocol, glucose control
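The outcome metrics described above (time in the target range and prevalence of hypo- and hyperglycemia) reduce to simple proportions over the recorded readings, as in this illustrative sketch with invented values; the target range shown is an example, not WMC's protocol setting.

    # Illustrative sketch with invented readings; the target range shown is an example,
    # not the hospital's protocol setting.
    readings = [154, 132, 68, 175, 190, 145, 39, 160, 120, 110]   # blood glucose values, mg/dL

    def fraction(values, predicate):
        return sum(predicate(v) for v in values) / len(values)

    in_target     = fraction(readings, lambda bg: 100 <= bg <= 150)   # example target range
    severe_hypo   = fraction(readings, lambda bg: bg < 40)
    clinical_hypo = fraction(readings, lambda bg: bg < 70)
    hyper         = fraction(readings, lambda bg: bg > 180)
    print(in_target, severe_hypo, clinical_hypo, hyper)               # 0.4 0.1 0.2 0.1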
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimation of the ideal midline is first performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP, such as texture information and blood amount from the CT scans, are extracted; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, for example to recommend for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
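As a hedged sketch of the second component, the snippet below builds a feature-selection-plus-SVM classifier on synthetic features using scikit-learn; the study itself used RapidMiner, and the feature values, labels, and parameters here are invented for illustration.

    # Hedged sketch with synthetic features and scikit-learn (the study used RapidMiner);
    # feature values, labels, and parameters are invented for illustration.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_patients, n_features = 60, 12                   # e.g., texture statistics, blood amount, age, ISS
    X = rng.standard_normal((n_patients, n_features))
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(n_patients) > 0).astype(int)  # 1 = elevated ICP

    model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=5), SVC(kernel="rbf"))
    model.fit(X, y)
    print(model.score(X, y))                          # training accuracy on the synthetic data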
Determining 3D Flow Fields via Multi-camera Light Field Imaging
Authors: Tadd T. Truscott, Jesse Belden, Joseph R. Nielson, David J. Daily, Scott L. Thomson.
Institutions: Brigham Young University, Naval Undersea Warfare Center, Newport, RI.
In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
Physics, Issue 73, Mechanical Engineering, Fluid Mechanics, Engineering, synthetic aperture imaging, light field, camera array, particle image velocimetry, three dimensional, vector fields, image processing, auto calibration, vocal chords, bubbles, flow, fluids
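The synthetic aperture refocusing step can be reduced to a shift-and-average operation for intuition. The sketch below does exactly that on random stand-in images; real implementations described above use calibrated homographies per camera and depth, so the per-camera pixel shifts here are purely illustrative.

    # Conceptual sketch on random stand-in images: shift each camera's image by its
    # parallax for one focal depth, then average; real implementations use calibrated
    # homographies, so these integer pixel shifts are purely illustrative.
    import numpy as np

    def refocus(images, pixel_shifts):
        shifted = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                   for img, (dy, dx) in zip(images, pixel_shifts)]
        return np.mean(shifted, axis=0)     # in-focus features reinforce; others blur out

    rng = np.random.default_rng(0)
    cameras = [rng.random((128, 128)) for _ in range(8)]   # stand-in images from 8 cameras
    shifts = [(i, -i) for i in range(8)]                   # toy per-camera shifts for one depth
    focal_slice = refocus(cameras, shifts)
    print(focal_slice.shape)                               # one slice of the synthetic focal stack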
Movement Retraining using Real-time Feedback of Performance
Authors: Michael Anthony Hunt.
Institutions: University of British Columbia .
Any modification of movement - especially movement patterns that have been honed over a number of years - requires re-organization of the neuromuscular patterns responsible for governing the movement performance. This motor learning can be enhanced through a number of methods that are utilized in research and clinical settings alike. In general, verbal feedback of performance in real-time or knowledge of results following movement is commonly used clinically as a preliminary means of instilling motor learning. Depending on patient preference and learning style, visual feedback (e.g. through use of a mirror or different types of video) or proprioceptive guidance utilizing therapist touch, are used to supplement verbal instructions from the therapist. Indeed, a combination of these forms of feedback is commonplace in the clinical setting to facilitate motor learning and optimize outcomes. Laboratory-based, quantitative motion analysis has been a mainstay in research settings to provide accurate and objective analysis of a variety of movements in healthy and injured populations. While the actual mechanisms of capturing the movements may differ, all current motion analysis systems rely on the ability to track the movement of body segments and joints and to use established equations of motion to quantify key movement patterns. Due to limitations in acquisition and processing speed, analysis and description of the movements has traditionally occurred offline after completion of a given testing session. This paper will highlight a new supplement to standard motion analysis techniques that relies on the near instantaneous assessment and quantification of movement patterns and the display of specific movement characteristics to the patient during a movement analysis session. As a result, this novel technique can provide a new method of feedback delivery that has advantages over currently used feedback methods.
Medicine, Issue 71, Biophysics, Anatomy, Physiology, Physics, Biomedical Engineering, Behavior, Psychology, Kinesiology, Physical Therapy, Musculoskeletal System, Biofeedback, biomechanics, gait, movement, walking, rehabilitation, clinical, training
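To make the idea of real-time quantitative feedback concrete, here is a hypothetical sketch of one metric such a system might compute and display each frame: a knee flexion angle derived from three 3D marker positions. The marker names, coordinates, and the choice of metric are illustrative assumptions, not the protocol used in the article.

```python
# Hypothetical real-time feedback metric: knee flexion angle from marker data.
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint`, formed by the two adjacent limb segments."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# One frame of made-up marker coordinates (meters): hip, knee, ankle.
hip, knee, ankle = (0.00, 0.90, 0.00), (0.05, 0.50, 0.00), (0.02, 0.05, 0.10)
flexion = 180.0 - joint_angle(hip, knee, ankle)   # 0 deg = fully extended knee
print(f"knee flexion: {flexion:.1f} deg")
```

In a live session the same calculation would be repeated for every incoming frame and the resulting value shown to the patient, for example as a number or a moving bar.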
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken on average 15 months before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient. An illustrative sketch of the Gabor-filter orientation analysis underlying the first step appears after the keyword list below.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
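The sketch below illustrates the kind of Gabor-filter orientation analysis named in the first step above: a small filter bank is applied and, at each pixel, the orientation of the strongest response is kept. The kernel size and filter parameters are illustrative assumptions rather than the values used in the article, and the subsequent phase-portrait and node-detection steps are not shown.

```python
# Illustrative Gabor-filter orientation field (parameters are assumptions).
import numpy as np
import cv2

def orientation_field(image, n_orientations=18, ksize=31, sigma=4.0, lambd=10.0):
    """Return per-pixel dominant orientation and response magnitude maps."""
    img = image.astype(np.float32)
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    responses = []
    for theta in thetas:
        # Real Gabor kernel oriented at `theta` (gamma=0.5, psi=0 are assumed).
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    responses = np.stack(responses)               # (n_orientations, H, W)
    best = np.argmax(np.abs(responses), axis=0)   # strongest filter per pixel
    return thetas[best], np.max(np.abs(responses), axis=0)
```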
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data are analyzed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. DTI-based results can be further improved during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis. A brief sketch of how FA is computed from the eigenvalues of the diffusion tensor follows the keyword list below.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
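For readers unfamiliar with the fractional anisotropy metric compared voxelwise above, the short sketch below computes FA from the eigenvalues of a fitted diffusion tensor using the standard definition; the example eigenvalues are made up.

```python
# Standard FA definition applied to illustrative diffusion-tensor eigenvalues.
import numpy as np

def fractional_anisotropy(eigenvalues):
    """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.asarray(eigenvalues, dtype=float)
    deviation = lam - lam.mean()
    denom = np.sqrt((lam ** 2).sum())
    if denom == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * np.sqrt((deviation ** 2).sum()) / denom)

# Example: a strongly anisotropic voxel (values in mm^2/s are made up).
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.2e-3]))  # ~0.84
```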
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods. A toy sketch illustrating, in the most general terms, the idea of searching sequence space for lower-energy sequences follows the keyword list below.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
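The following toy sketch is emphatically not the Protein WISDOM algorithm; it only illustrates, in the most general terms, what "searching sequence space for lower-energy sequences" can look like. The per-position scoring table and the Metropolis acceptance rule are illustrative stand-ins for the physics-based potentials and optimization used by the actual tool.

```python
# Toy sequence-space search (illustrative only; not the Protein WISDOM method).
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_energy(seq, score_table):
    """Sum a made-up per-position, per-residue score (lower is 'better')."""
    return sum(score_table[i][aa] for i, aa in enumerate(seq))

def metropolis_search(start_seq, score_table, steps=5000, temperature=1.0):
    """Single-point-mutation Metropolis search for a lower-energy sequence."""
    seq = list(start_seq)
    energy = toy_energy(seq, score_table)
    best_seq, best_energy = seq[:], energy
    for _ in range(steps):
        pos = random.randrange(len(seq))
        old_residue = seq[pos]
        seq[pos] = random.choice(AMINO_ACIDS)
        new_energy = toy_energy(seq, score_table)
        delta = new_energy - energy
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            energy = new_energy              # accept the mutation
            if energy < best_energy:
                best_seq, best_energy = seq[:], energy
        else:
            seq[pos] = old_residue           # reject and revert
    return "".join(best_seq), best_energy

# Example with random scores for a 10-residue toy problem.
random.seed(0)
table = [{aa: random.uniform(-1.0, 1.0) for aa in AMINO_ACIDS} for _ in range(10)]
print(metropolis_search("A" * 10, table))
```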
High-speed Particle Image Velocimetry Near Surfaces
Authors: Louise Lu, Volker Sick.
Institutions: University of Michigan.
Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame-rate (>1 kHz) measurements capable of resolving transient flows with high temporal resolution. These high-frame-rate measurements have enabled investigations of the evolution of the structure and dynamics of highly transient flows. Such investigations play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints, such as image and recording properties, the laser sheet properties, and processing algorithms, to adapt PIV to any flow of interest are included. A minimal sketch of the cross-correlation step at the heart of PIV processing follows the keyword list below.
Physics, Issue 76, Mechanical Engineering, Fluid Mechanics, flow measurement, fluid heat transfer, internal flow in turbomachinery (applications), boundary layer flow (general), flow visualization (instrumentation), laser instruments (design and operation), Boundary layer, micro-PIV, optical laser diagnostics, internal combustion engines, flow, fluids, particle, velocimetry, visualization
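As a minimal illustration of the core analysis step in PIV, the sketch below estimates the particle displacement in a single interrogation window by locating the peak of the FFT-based cross-correlation between two consecutive frames. The windowing strategy, sub-pixel refinement, and calibration details of a real PIV processor are omitted; the function names are illustrative.

```python
# Illustrative PIV cross-correlation sketch (not the authors' processing code).
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of the particle pattern from win_a to win_b."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation of the two windows via the FFT.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts larger than half the window wrap around; map them to negative values.
    dy, dx = [int(p) - s if p > s // 2 else int(p)
              for p, s in zip(peak, corr.shape)]
    return dx, dy

def to_velocity(displacement_px, meters_per_pixel, dt_seconds):
    """Convert a pixel displacement between two frames to a velocity in m/s."""
    return displacement_px * meters_per_pixel / dt_seconds
```

In practice each frame pair is divided into many (often overlapping) interrogation windows, the integer peak is refined with a sub-pixel fit, and the pixel displacements are converted to velocities using the magnification and the known time between laser pulses.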
Testing Sensory and Multisensory Function in Children with Autism Spectrum Disorder
Authors: Sarah H. Baum, Ryan A. Stevenson, Mark T. Wallace.
Institutions: Vanderbilt University Medical Center, University of Toronto, Vanderbilt University.
In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan. A hedged sketch of one common way such audiovisual timing judgments are summarized follows the keyword list below.
Behavior, Issue 98, Temporal processing, multisensory integration, psychophysics, computer based assessments, sensory deficits, autism spectrum disorder
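As a hedged example of how audiovisual temporal judgments of the sort described above are often summarized in the literature (not necessarily the analysis used in this battery), the sketch below fits a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and reads off a temporal binding window from its width. The SOAs and response proportions are made up.

```python
# Illustrative fit of simultaneity-judgment data (all numbers are made up).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, center, sigma):
    return amplitude * np.exp(-(soa - center) ** 2 / (2.0 * sigma ** 2))

# Hypothetical SOAs in ms (negative = auditory leading) and the proportion of
# trials judged "simultaneous" at each SOA.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simultaneous = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.60, 0.25, 0.10])

(amplitude, center, sigma), _ = curve_fit(gaussian, soas, p_simultaneous,
                                          p0=[1.0, 0.0, 150.0])
print(f"point of subjective simultaneity: {center:.0f} ms")
print(f"temporal binding window (FWHM):   {2.355 * abs(sigma):.0f} ms")
```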

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching its content to a JoVE video difficult. In other cases, our video library simply does not contain content relevant to the topic of a given abstract. In these cases, our algorithms display the most relevant videos available, which can sometimes result in matches that are only loosely related.