JoVE Visualize

Pubmed Article
Large-scale transportation network congestion evolution prediction using deep learning theory.
PUBLISHED: 03-18-2015
Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners seeking to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming increasingly ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among the available techniques, deep learning is considered one of the most promising for handling massive high-dimensional data. This study extends deep learning theory to large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is used to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that prediction accuracy reaches as high as 88% in less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.
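As a rough illustration of the data-driven pipeline described above, the following sketch learns features from binary link-congestion snapshots with a Restricted Boltzmann Machine and predicts one link's next-step state. It is a minimal stand-in, not the authors' implementation: the recurrent component is replaced by a simple logistic readout, and all data are synthetic.

# Minimal sketch of RBM-based congestion prediction (NOT the authors' code).
# One binary vector per time step: 1 = link congested. Synthetic data throughout.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
T, n_links = 2000, 50
snapshots = (rng.random((T, n_links)) < 0.2).astype(float)  # stand-in GPS-derived states

X, y = snapshots[:-1], snapshots[1:, 0]          # predict link 0 one step ahead
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X[:1500], y[:1500])
print("held-out accuracy:", model.score(X[1500:], y[1500:]))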
Related JoVE Video
Authors: Michael Zimmerman, Eunyoung Tak, Maria Kaplan, Mercedes Susan Mandell, Holger K. Eltzschig, Almut Grenz.
Published: 08-07-2012
Acute liver injury due to ischemia can occur during several clinical procedures, e.g., liver transplantation, hepatic tumor resection, or trauma repair, and can result in liver failure, which carries a high mortality rate1-2. Murine studies of hepatic ischemia have therefore become an important field of research, providing the opportunity for pharmacological and genetic studies3-9. Specifically, conditional mice with tissue-specific deletion of a gene (cre/flox system) provide insights into the role of proteins in particular tissues10-13. Because of the technical difficulty associated with manually clamping the portal triad in mice, we performed a systematic evaluation using a hanging-weight system for portal triad occlusion, which has been previously described3. Using a hanging-weight system, we place a suture around the left branch of the portal triad without causing any damage to the hepatic lobes; even the finest clamps available can damage hepatic tissue because of the close proximity of liver tissue to the vessels. Furthermore, the right branch of the hepatic triad remains perfused, so no intestinal congestion occurs with this technique, as blood flow to the right hepatic lobes is preserved. In addition, the portal triad is manipulated only once throughout the entire surgical procedure. As a result, procedures such as preconditioning, with short times of ischemia and reperfusion, can be easily performed. Systematic evaluation of this model with different ischemia and reperfusion times revealed a close correlation of hepatic ischemia time with liver damage, as measured by alanine (ALT) and aspartate (AST) aminotransferase serum levels3,9. Taken together, these studies confirm highly reproducible liver injury when using the hanging-weight system for hepatic ischemia and intermittent reperfusion. Thus, this technique may be useful for other investigators interested in liver ischemia studies in mice. The video clip therefore provides a detailed step-by-step description of this technique.
22 Related JoVE Articles!
The Double-H Maze: A Robust Behavioral Test for Learning and Memory in Rodents
Authors: Robert D. Kirch, Richard C. Pinnell, Ulrich G. Hofmann, Jean-Christophe Cassel.
Institutions: University Hospital Freiburg, UMR 7364 Université de Strasbourg, CNRS, Neuropôle de Strasbourg.
Spatial cognition research in rodents typically employs maze tasks, whose attributes vary from one maze to the next. These tasks vary by their behavioral flexibility and required memory duration, the number of goals and pathways, and the overall task complexity. A confounding feature in many of these tasks is the lack of control over the strategy employed by the rodents to reach the goal, e.g., allocentric (declarative-like) or egocentric (procedural) strategies. The double-H maze is a novel water-escape memory task that addresses this issue by allowing the experimenter to direct the type of strategy learned during the training period. The double-H maze is a transparent device, which consists of a central alleyway with three arms protruding on both sides, along with an escape platform submerged at the extremity of one of these arms. Rats can be trained using an allocentric strategy by alternating the start position in the maze in an unpredictable manner (see protocol 1; §4.7), thus requiring them to learn the location of the platform based on the available allothetic cues. Alternatively, an egocentric learning strategy (protocol 2; §4.8) can be employed by releasing the rats from the same position during each trial, until they learn the procedural pattern required to reach the goal. This task has been shown to allow the formation of stable memory traces. Memory can be probed following the training period in a misleading probe trial, in which the starting position for the rats alternates. Following an egocentric learning paradigm, rats typically resort to an allocentric-based strategy, but only when their initial view of the extra-maze cues differs markedly from their original position. This task is ideally suited to explore the effects of drugs/perturbations on allocentric/egocentric memory performance, as well as the interactions between these two memory systems.
Behavior, Issue 101, Double-H maze, spatial memory, procedural memory, consolidation, allocentric, egocentric, habits, rodents, video tracking system
Closed-loop Neuro-robotic Experiments to Test Computational Properties of Neuronal Networks
Authors: Jacopo Tessadori, Michela Chiappalone.
Institutions: Istituto Italiano di Tecnologia.
Information coding in the Central Nervous System (CNS) remains largely unexplored. There is mounting evidence that, even at a very low level, the representation of a given stimulus might depend on context and history. If this is actually the case, bi-directional interactions between the brain (or, if need be, a reduced model of it) and the sensory-motor system can shed light on how encoding and decoding of information are performed. Here an experimental system is introduced and described in which the activity of a neuronal element (i.e., a network of neurons extracted from embryonic mammalian hippocampi) is given context and used to control the movement of an artificial agent, while environmental information is fed back to the culture as a sequence of electrical stimuli. This architecture allows quick selection of diverse encoding, decoding, and learning algorithms to test different hypotheses on the computational properties of neuronal networks.
Neuroscience, Issue 97, Micro Electrode Arrays (MEA), in vitro cultures, coding, decoding, tetanic stimulation, spike, burst
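The loop described above can be sketched schematically. Everything here is a toy stand-in: the "culture" is a Poisson-rate stub rather than an MEA recording, and the encoding and decoding rules are invented for illustration.

# Toy closed loop (illustrative only): a stubbed "culture" whose firing rates
# are modulated by stimulation, a decoder mapping rates to wheel speeds, and
# an encoder mapping obstacle distance back to a stimulation frequency.
import numpy as np

rng = np.random.default_rng(1)

def culture_rates(stim_hz, n_units=8):
    # Poisson-like rates loosely driven by the stimulation frequency.
    return rng.poisson(lam=1.0 + 0.3 * stim_hz, size=n_units)

def decode(rates):
    # Split units into left/right populations; the rate difference steers the robot.
    half = len(rates) // 2
    left, right = rates[:half].mean(), rates[half:].mean()
    return 1.0 + 0.1 * right, 1.0 + 0.1 * left   # crossed connections

def encode(distance_to_obstacle):
    # Closer obstacle -> higher stimulation frequency.
    return np.clip(50.0 / max(distance_to_obstacle, 0.1), 0.0, 50.0)

distance, stim = 10.0, 0.0
for step in range(20):
    rates = culture_rates(stim)
    v_left, v_right = decode(rates)
    distance = max(distance - 0.5 * (v_left + v_right) * 0.1, 0.0)
    stim = encode(distance)
    print(f"step {step:2d}  distance {distance:5.2f}  stim {stim:5.1f} Hz")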
Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults
Authors: Madeleine E. Hackney, Kathleen McKee.
Institutions: Emory University School of Medicine, Brigham and Women's Hospital and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and in additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, and varied speeds and rhythms. Focus on foot placement, whole-body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango's demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology used to disseminate the adapted tango teaching methods to dance instructor trainees and to implement adapted tango in the community, through the trainees, for older adults and individuals with Parkinson's Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, tandem stance, Berg Balance Scale, gait speed, and 30 sec chair stand), safety, and fidelity of the program are maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls
Simultaneous Long-term Recordings at Two Neuronal Processing Stages in Behaving Honeybees
Authors: Martin Fritz Brill, Maren Reuter, Wolfgang Rössler, Martin Fritz Strube-Bloss.
Institutions: University of Würzburg.
In both mammals and insects, neuronal information is processed in different higher- and lower-order brain centers. These centers are coupled via convergent and divergent anatomical connections, including feed-forward and feedback wiring. Furthermore, information of the same origin is partially sent via parallel pathways to different, and sometimes into the same, brain areas. To understand the evolutionary benefits as well as the computational advantages of these wiring strategies, and especially their temporal dependencies on each other, it is necessary to have simultaneous access to single neurons of different tracts or neuropils in the same preparation at high temporal resolution. Here we concentrate on honeybees by demonstrating unique extracellular long-term access for recording multi-unit activity at two successive neuropils1, the antennal lobe (AL), the first olfactory processing stage, and the mushroom body (MB), a higher-order integration center involved in learning and memory formation, or at two parallel neuronal tracts2 connecting the AL with the MB. The latter was chosen as an example and will be described in full. The supporting video demonstrates the construction and permanent insertion of flexible multi-channel wire electrodes. Pairwise differential amplification of the micro-wire electrode channels drastically reduces noise and verifies that the source of the signal is closely related to the position of the electrode tip. The mechanical flexibility of the wire electrodes allows stable invasive long-term recordings over many hours and up to days, a clear advantage over conventional extra- and intracellular in vivo recording techniques.
Neuroscience, Issue 89, honeybee brain, olfaction, extracellular long term recordings, double recordings, differential wire electrodes, single unit, multi-unit recordings
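A minimal numerical illustration of the noise rejection provided by pairwise differential amplification, using synthetic channels that share a common-mode pickup:

# Two wire channels share pickup noise; their local spike signals differ,
# so subtracting the channels cancels the shared component.
import numpy as np

rng = np.random.default_rng(2)
n = 10000
common_noise = 50.0 * rng.standard_normal(n)          # shared pickup (e.g., mains, muscle)
spike_a = np.where(rng.random(n) < 0.01, 20.0, 0.0)   # unit near electrode tip A
spike_b = np.where(rng.random(n) < 0.01, 20.0, 0.0)   # unit near electrode tip B

ch_a = common_noise + spike_a + rng.standard_normal(n)
ch_b = common_noise + spike_b + rng.standard_normal(n)
differential = ch_a - ch_b                             # pairwise differential channel

print("single-ended std:", ch_a.std().round(1))
print("differential std:", differential.std().round(1))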
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
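A sketch of such a source reconstruction with MNE-Python is shown below; the file names are placeholders for a pediatric recording and an individual-MRI-based head model, and the parameter choices are illustrative rather than those used at the London Baby Lab.

# Sketch of a minimum-norm source reconstruction with MNE-Python.
# File names are placeholders; an individual (or age-matched) MRI-based
# head model replaces the adult template, as described above.
import mne

raw = mne.io.read_raw_fif("child_task_raw.fif", preload=True)  # hypothetical recording
raw.filter(1.0, 40.0)                                          # band-pass for ERP analysis
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"stimulus": 1},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
evoked = epochs.average()

# Forward model built from the child's structural MRI (trans/src/bem placeholders).
fwd = mne.make_forward_solution(evoked.info, trans="child-trans.fif",
                                src="child-src.fif", bem="child-bem-sol.fif")
noise_cov = mne.compute_covariance(epochs, tmax=0.0)           # pre-stimulus noise
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")
print(stc)  # source-space time course of the cortical generators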
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
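As a concrete instance of the simplest semi-automated route, in the spirit of the thresholding listed among the keywords, the sketch below runs a global threshold plus connected-component labeling on a synthetic volume with scikit-image; real EM volumes and parameters would of course differ.

# Threshold-based segmentation of a 3D volume on synthetic data.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label
from skimage.morphology import remove_small_objects

rng = np.random.default_rng(3)
volume = rng.normal(100, 10, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 60                # bright "organelle" stand-in

mask = volume > threshold_otsu(volume)           # global automatic threshold
mask = remove_small_objects(mask, min_size=64)   # suppress noise speckles
labels = label(mask)                             # connected components = features
print("features found:", labels.max())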
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Authors: Kwyn A. Meagher, Benjamin N. Doblack, Mercedes Ramirez, Lilian P. Davila.
Institutions: University of California, Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code written specifically to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value of both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning for general users through visualization of, and interaction with, the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
Physics, Issue 93, Helical atomistic models; open-source coding; graphical user interface; visualization software; molecular dynamics simulations; graphical processing unit accelerated simulations.
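The carving idea reduces to the parametric equations of a helix: x = R cos t, y = R sin t, z = p t / 2π. The sketch below keeps the points of a stand-in "bulk" model that lie within a ribbon radius of that centerline; it is a numpy analogue of the approach, not the published AWK/C++ tools.

# Carve a helical ribbon out of stand-in bulk coordinates.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
atoms = rng.uniform(-60, 60, size=(100000, 3))        # stand-in bulk coordinates

R, pitch, turns, ribbon_r = 30.0, 20.0, 2.0, 5.0      # illustrative helix parameters
t = np.linspace(0, 2 * np.pi * turns, 2000)
helix = np.stack([R * np.cos(t), R * np.sin(t),
                  pitch * t / (2 * np.pi)], axis=1)   # x = R cos t, y = R sin t, z = p t / 2pi

dist, _ = cKDTree(helix).query(atoms)                 # distance to the helix centerline
kept = atoms[dist < ribbon_r]
print("atoms kept for the nanoribbon model:", len(kept))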
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
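The flavor of the automated analysis can be illustrated on simulated time-stamped event records; the real system stores richer event codes and uses a MATLAB-based language, so this numpy sketch is only an analogy.

# Simulated (timestamp, hopper) head-entry records summarized per day.
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 3 * 86400, size=5000))   # three days of event times (s)
hopper = rng.integers(0, 3, size=t.size)            # which hopper was entered
day = (t // 86400).astype(int)

for d in range(3):
    entries = hopper[day == d]
    iri = np.diff(t[day == d])                      # inter-response intervals
    print(f"day {d}: {entries.size} head entries, "
          f"median IRI {np.median(iri):.1f} s, "
          f"hopper use {np.bincount(entries, minlength=3)}")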
Osteopathic Manipulative Treatment as a Useful Adjunctive Tool for Pneumonia
Authors: Sheldon Yao, John Hassani, Martin Gagne, Gebe George, Wolfgang Gilliar.
Institutions: New York Institute of Technology College of Osteopathic Medicine.
Pneumonia, the inflammatory state of lung tissue primarily due to microbial infection, claimed 52,306 lives in the United States in 20071 and resulted in the hospitalization of 1.1 million patients2. With an average length of in-patient hospital stay of five days2, pneumonia and influenza impose a significant financial burden, costing the United States $40.2 billion in 20053. Under the current Infectious Disease Society of America/American Thoracic Society guidelines, standard-of-care recommendations include the rapid administration of an appropriate antibiotic regimen, fluid replacement, and ventilation (if necessary). Non-standard therapies include the use of corticosteroids and statins; however, these therapies lack conclusive supporting evidence4. (Figure 1) Osteopathic Manipulative Treatment (OMT) is a cost-effective adjunctive treatment of pneumonia that has been shown to reduce patients' length of hospital stay, duration of intravenous antibiotics, and incidence of respiratory failure or death when compared to subjects who received conventional care alone5. The use of manual manipulation techniques for pneumonia was first recorded as early as the Spanish influenza pandemic of 1918, when patients treated with standard medical care had an estimated mortality rate of 33%, compared to a 10% mortality rate in patients treated by osteopathic physicians6. When applied to the management of pneumonia, manual manipulation techniques bolster lymphatic flow, respiratory function, and immunological defense by targeting anatomical structures involved in these systems7-10. The objective of this review video-article is three-fold: a) summarize the findings of randomized controlled studies on the efficacy of OMT in adult patients with diagnosed pneumonia, b) demonstrate established protocols utilized by osteopathic physicians treating pneumonia, c) elucidate the physiological mechanisms behind manual manipulation of the respiratory and lymphatic systems. Specifically, we will discuss and demonstrate four routine techniques that address autonomics, lymph drainage, and rib cage mobility: 1) Rib Raising, 2) Thoracic Pump, 3) Doming of the Thoracic Diaphragm, and 4) Muscle Energy for Rib 1.5,11
Medicine, Issue 87, Pneumonia, osteopathic manipulative medicine (OMM) and techniques (OMT), lymphatic, rib raising, thoracic pump, muscle energy, doming diaphragm, alternative treatment
How to Ignite an Atmospheric Pressure Microwave Plasma Torch without Any Additional Igniters
Authors: Martina Leins, Sandra Gaiser, Andreas Schulz, Matthias Walker, Uwe Schumacher, Thomas Hirth.
Institutions: University of Stuttgart.
This movie shows how an atmospheric pressure plasma torch can be ignited by microwave power with no additional igniters. After ignition of the plasma, stable and continuous operation of the plasma is possible and the plasma torch can be used for many different applications. On the one hand, the hot (3,600 K gas temperature) plasma can be used for chemical processes; on the other hand, the cold afterglow (temperatures down to almost room temperature) can be applied for surface processes. Chemical syntheses, for example, are interesting volume processes. Here the microwave plasma torch can be used for the decomposition of waste gases which are harmful and contribute to global warming but are needed as etching gases in growing industry sectors such as the semiconductor industry. Another application is the dissociation of CO2. Surplus electrical energy from renewable energy sources can be used to dissociate CO2 into CO and O2. The CO can be further processed to gaseous or liquid higher hydrocarbons, thereby providing chemical storage of the energy, synthetic fuels, or platform chemicals for the chemical industry. Applications of the afterglow of the plasma torch include the treatment of surfaces to increase the adhesion of lacquer, glue, or paint, and the sterilization or decontamination of different kinds of surfaces. The movie explains how to ignite the plasma solely by microwave power without any additional igniters, e.g., electric sparks. The microwave plasma torch is based on a combination of two resonators: a coaxial one, which provides the ignition of the plasma, and a cylindrical one, which guarantees continuous and stable operation of the plasma after ignition. The plasma can be operated in a long microwave-transparent tube for volume processes or shaped by orifices for surface treatment purposes.
Engineering, Issue 98, atmospheric pressure plasma, microwave plasma, plasma ignition, resonator structure, coaxial resonator, cylindrical resonator, plasma torch, stable plasma operation, continuous plasma operation, high speed camera
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimate of the ideal midline is first obtained based on the symmetry of the skull and anatomical features in the brain CT scan. Then, the ventricles are segmented from the CT scan and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP, such as texture information and blood amount, are extracted from the CT scans and combined with other recorded features, such as age and injury severity score, to estimate the ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. Evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, such as recommending for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
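A toy version of the two components is sketched below (not the authors' implementation): the ideal midline is found by maximizing left-right symmetry of a synthetic slice, and an SVM is trained on stand-in features for ICP pre-screening.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def symmetry_score(img, col, width=30):
    left = img[:, col - width:col]
    right = img[:, col:col + width][:, ::-1]   # mirror the right side
    return np.mean((left - right) ** 2)

# Synthetic slice that is symmetric about the boundary at column 70
half = rng.normal(0, 1, size=(128, 64))
img = np.zeros((128, 140))
img[:, 6:70] = half
img[:, 70:134] = half[:, ::-1]
img += 0.1 * rng.normal(size=img.shape)

ideal = min(range(50, 90), key=lambda c: symmetry_score(img, c))
print("estimated ideal midline column:", ideal)   # expect ~70

# ICP pre-screening from stand-in features (e.g., texture, blood amount, age, ISS)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=200)) > 0   # synthetic labels
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out screening accuracy:", clf.score(X[150:], y[150:]))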
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Non-invasive Assessment of Microvascular and Endothelial Function
Authors: Cynthia Cheng, Constantine Daskalakis, Bonita Falkner.
Institutions: Thomas Jefferson University.
The authors have utilized capillaroscopy and forearm blood flow techniques to investigate the role of microvascular dysfunction in the pathogenesis of cardiovascular disease. Capillaroscopy is a non-invasive, relatively inexpensive methodology for directly visualizing the microcirculation. Percent capillary recruitment is assessed by dividing the increase in capillary density induced by postocclusive reactive hyperemia (postocclusive reactive hyperemia capillary density minus baseline capillary density) by the maximal capillary density (observed during passive venous occlusion). Percent perfused capillaries represents the proportion of all capillaries present that are perfused (functionally active), and is calculated by dividing postocclusive reactive hyperemia capillary density by the maximal capillary density. Both percent capillary recruitment and percent perfused capillaries reflect the number of functional capillaries. The forearm blood flow (FBF) technique provides accepted non-invasive measures of endothelial function: the ratio FBFmax/FBFbase is computed as an estimate of vasodilation, by dividing the mean of the four FBFmax values by the mean of the four FBFbase values. Forearm vascular resistance at maximal vasodilation (FVRmax) is calculated as the mean arterial pressure (MAP) divided by FBFmax. Both the capillaroscopy and forearm techniques are readily acceptable to patients and can be learned quickly. The microvascular and endothelial function measures obtained using the methodologies described in this paper may have future utility in clinical patient cardiovascular risk-reduction strategies. As we have published reports demonstrating that microvascular and endothelial dysfunction are found in initial stages of hypertension, including prehypertension, microvascular and endothelial function measures may eventually aid in early identification, risk-stratification, and prevention of end-stage vascular pathology, with its potentially fatal consequences.
Medicine, Issue 71, Anatomy, Physiology, Immunology, Pharmacology, Hematology, Diseases, Health Care, Life sciences, Microcirculation, endothelial dysfunction, capillary density, microvascular function, blood vessels, capillaries, capillary, venous occlusion, circulation, experimental therapeutics, capillaroscopy
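The reported measures reduce to simple ratios, transcribed directly below; the example numbers are invented for illustration.

def percent_capillary_recruitment(baseline, porh, maximal):
    """(PORH density - baseline density) / maximal density * 100"""
    return 100.0 * (porh - baseline) / maximal

def percent_perfused_capillaries(porh, maximal):
    """PORH density / maximal density * 100"""
    return 100.0 * porh / maximal

def vasodilation_ratio(fbf_max_vals, fbf_base_vals):
    """Mean of the four FBFmax values / mean of the four FBFbase values."""
    return (sum(fbf_max_vals) / len(fbf_max_vals)) / (sum(fbf_base_vals) / len(fbf_base_vals))

def fvr_max(map_mmhg, fbf_max):
    """Forearm vascular resistance at maximal vasodilation = MAP / FBFmax."""
    return map_mmhg / fbf_max

# Example with made-up values (capillaries/mm^2, flow units, mmHg):
print(percent_capillary_recruitment(baseline=45, porh=60, maximal=75))  # 20.0
print(percent_perfused_capillaries(porh=60, maximal=75))                # 80.0
print(vasodilation_ratio([40, 42, 38, 41], [3.8, 4.1, 4.0, 3.9]))
print(fvr_max(map_mmhg=93, fbf_max=40.25))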
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their cooperation greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made the automatic collection of trajectory data possible. The data collected, however, are not ideal for modeling human space-time activities, being limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior studies. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical and visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
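One of the derived activity-space characteristics, the activity radius, can be computed from a cleaned track in a few lines; the coordinates below are invented, and the paper's own toolchain is ArcGIS-based rather than this library-free analogue.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

track = [(40.6762, -74.2394), (40.6770, -74.2401), (40.6801, -74.2357),
         (40.6755, -74.2312)]                       # cleaned (lat, lon) fixes
clat = sum(p[0] for p in track) / len(track)        # activity-space centroid
clon = sum(p[1] for p in track) / len(track)
radius = max(haversine_m(clat, clon, lat, lon) for lat, lon in track)
print(f"activity radius: {radius:.0f} m")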
Applications of EEG Neuroimaging Data: Event-related Potentials, Spectral Power, and Multiscale Entropy
Authors: Jennifer J. Heisz, Anthony R. McIntosh.
Institutions: Baycrest.
When considering human neuroimaging data, an appreciation of signal variability represents a fundamental innovation in the way we think about brain signal. Typically, researchers represent the brain's response as the mean across repeated experimental trials and disregard signal fluctuations over time as "noise". However, it is becoming clear that brain signal variability conveys meaningful functional information about neural network dynamics. This article describes the novel method of multiscale entropy (MSE) for quantifying brain signal variability. MSE may be particularly informative of neural network dynamics because it shows timescale dependence and sensitivity to linear and nonlinear dynamics in the data.
Neuroscience, Issue 76, Neurobiology, Anatomy, Physiology, Medicine, Biomedical Engineering, Electroencephalography, EEG, electroencephalogram, Multiscale entropy, sample entropy, MEG, neuroimaging, variability, noise, timescale, non-linear, brain signal, information theory, brain, imaging
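MSE has a compact standard definition: coarse-grain the signal at each timescale, then compute sample entropy on each coarse-grained series. The sketch below implements that definition with illustrative parameters (m = 2, r = 0.15 SD) on a synthetic signal.

# Minimal multiscale entropy implementation (illustrative parameters).
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between all template pairs, self-matches excluded
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        n = len(templates)
        return (np.sum(d <= r) - n) / 2
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=10):
    out = []
    for tau in range(1, max_scale + 1):          # coarse-grain at each scale
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(7)
white = rng.standard_normal(1000)
print([round(v, 2) for v in multiscale_entropy(white, max_scale=5)])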
Movement Retraining using Real-time Feedback of Performance
Authors: Michael Anthony Hunt.
Institutions: University of British Columbia.
Any modification of movement - especially movement patterns that have been honed over a number of years - requires re-organization of the neuromuscular patterns responsible for governing the movement performance. This motor learning can be enhanced through a number of methods that are utilized in research and clinical settings alike. In general, verbal feedback of performance in real-time or knowledge of results following movement is commonly used clinically as a preliminary means of instilling motor learning. Depending on patient preference and learning style, visual feedback (e.g. through use of a mirror or different types of video) or proprioceptive guidance utilizing therapist touch, are used to supplement verbal instructions from the therapist. Indeed, a combination of these forms of feedback is commonplace in the clinical setting to facilitate motor learning and optimize outcomes. Laboratory-based, quantitative motion analysis has been a mainstay in research settings to provide accurate and objective analysis of a variety of movements in healthy and injured populations. While the actual mechanisms of capturing the movements may differ, all current motion analysis systems rely on the ability to track the movement of body segments and joints and to use established equations of motion to quantify key movement patterns. Due to limitations in acquisition and processing speed, analysis and description of the movements has traditionally occurred offline after completion of a given testing session. This paper will highlight a new supplement to standard motion analysis techniques that relies on the near instantaneous assessment and quantification of movement patterns and the display of specific movement characteristics to the patient during a movement analysis session. As a result, this novel technique can provide a new method of feedback delivery that has advantages over currently used feedback methods.
Medicine, Issue 71, Biophysics, Anatomy, Physiology, Physics, Biomedical Engineering, Behavior, Psychology, Kinesiology, Physical Therapy, Musculoskeletal System, Biofeedback, biomechanics, gait, movement, walking, rehabilitation, clinical, training
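As a toy illustration of the idea, the sketch below computes a joint angle from three tracked marker positions and prints a feedback cue per frame; the markers, target range, and cue are all invented stand-ins for a live motion-capture stream and display.

import numpy as np

def joint_angle_deg(prox, joint, dist):
    """Angle at `joint` formed by the segments joint->prox and joint->dist."""
    u = np.asarray(prox) - np.asarray(joint)
    v = np.asarray(dist) - np.asarray(joint)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Simulated hip, knee, ankle marker positions for two frames (meters)
frames = [
    ((0.0, 0.9, 0.0), (0.05, 0.5, 0.0), (0.02, 0.1, 0.0)),
    ((0.0, 0.9, 0.0), (0.10, 0.5, 0.0), (0.30, 0.2, 0.0)),
]
for i, (hip, knee, ankle) in enumerate(frames):
    angle = joint_angle_deg(hip, knee, ankle)
    cue = "OK" if angle > 165 else "BEND LESS"   # hypothetical target range
    print(f"frame {i}: knee angle {angle:5.1f} deg -> {cue}")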
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
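The sequence-selection idea, minimizing energy in sequence space on a fixed template, can be caricatured with a greedy search under a random contact potential; this toy conveys only the concept, not the rigorous optimization used by Protein WISDOM.

import numpy as np

rng = np.random.default_rng(8)
n_res, n_aa = 20, 20
contacts = [(i, j) for i in range(n_res) for j in range(i + 2, n_res)
            if rng.random() < 0.15]                 # stand-in template contacts
pair_e = rng.normal(size=(n_aa, n_aa))
pair_e = (pair_e + pair_e.T) / 2                    # symmetric pairwise potential

def energy(seq):
    return sum(pair_e[seq[i], seq[j]] for i, j in contacts)

seq = rng.integers(0, n_aa, size=n_res)
for _ in range(5000):                               # greedy point mutations
    pos, aa = rng.integers(n_res), rng.integers(n_aa)
    trial = seq.copy()
    trial[pos] = aa
    if energy(trial) < energy(seq):
        seq = trial
print("final sequence energy:", round(energy(seq), 2))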
Disrupting Reconsolidation of Fear Memory in Humans by a Noradrenergic β-Blocker
Authors: Merel Kindt, Marieke Soeter, Dieuwke Sevenster.
Institutions: University of Amsterdam.
The basic design used in our human fear-conditioning studies on disrupting reconsolidation involves testing over different phases across three consecutive days. On day 1, the fear acquisition phase, healthy participants are exposed to a series of picture presentations. One picture stimulus (CS1+) is repeatedly paired with an aversive electric stimulus (US), resulting in the acquisition of a fear association, whereas another picture stimulus (CS2-) is never followed by a US. On day 2, the memory reactivation phase, the participants are re-exposed to the conditioned stimulus without the US (CS1-), which typically triggers a conditioned fear response. After the memory reactivation we administer an oral dose of 40 mg of propranolol HCl, a β-adrenergic receptor antagonist that indirectly targets the protein synthesis required for reconsolidation by inhibiting noradrenaline-stimulated CREB phosphorylation. On day 3, the test phase, the participants are again exposed to the unreinforced conditioned stimuli (CS1- and CS2-) in order to measure the fear-reducing effect of the manipulation. This retention test is followed by an extinction procedure and the presentation of situational triggers to test for the return of fear. Potentiation of the eye-blink startle reflex is measured as an index of conditioned fear responding. Declarative knowledge of the fear association is measured through online US expectancy ratings during each CS presentation. In contrast to extinction learning, disrupting reconsolidation targets the original fear memory, thereby preventing the return of fear. Although the clinical applications are still in their infancy, disrupting reconsolidation of fear memory seems to be a promising new technique with the prospect of persistently dampening the expression of fear memory in patients suffering from anxiety disorders and other psychiatric disorders.
Behavior, Issue 94, Fear memory, reconsolidation, noradrenergic β-blocker, human fear conditioning, startle potentiation, translational research.
Irrelevant Stimuli and Action Control: Analyzing the Influence of Ignored Stimuli via the Distractor-Response Binding Paradigm
Authors: Birte Moeller, Hartmut Schächinger, Christian Frings.
Institutions: Trier University.
Selection tasks, in which simple stimuli (e.g., letters) are presented and a target stimulus has to be selected against one or more distractor stimuli, are frequently used in research on human action control. One important question in these settings is how distractor stimuli, competing with the target stimulus for a response, influence actions. The distractor-response binding paradigm can be used to investigate this influence. It is particularly useful for separately analyzing response retrieval and distractor inhibition effects. Computer-based experiments are used to collect the data (reaction times and error rates). In a number of sequentially presented pairs of stimulus arrays (prime-probe design), participants respond to targets while ignoring distractor stimuli. Importantly, the factors response relation between the arrays of each pair (repetition vs. change) and distractor relation (repetition vs. change) are varied orthogonally. The repetition of the same distractor then has a different effect depending on the response relation (repetition vs. change) between arrays. This result pattern can be explained by response retrieval due to distractor repetition. In addition, distractor inhibition effects are indicated by a general advantage due to distractor repetition. The described paradigm has proven useful for determining relevant parameters of response retrieval effects on human action.
Behavior, Issue 87, stimulus-response binding, distractor-response binding, response retrieval, distractor inhibition, event file, action control, selection task
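The logic of the analysis is an interaction contrast: distractor repetition should help when the response repeats and hurt when it changes. A sketch with simulated reaction times:

import numpy as np

rng = np.random.default_rng(9)
means = {("resp_rep", "dist_rep"): 480, ("resp_rep", "dist_change"): 500,
         ("resp_change", "dist_rep"): 530, ("resp_change", "dist_change"): 510}
rts = {k: rng.normal(m, 30, size=40) for k, m in means.items()}  # 40 trials per cell

def cell(r, d):
    return rts[(r, d)].mean()

# Distractor-repetition benefit when the response repeats...
benefit_rep = cell("resp_rep", "dist_change") - cell("resp_rep", "dist_rep")
# ...and when the response changes; their difference is the binding effect.
benefit_change = cell("resp_change", "dist_change") - cell("resp_change", "dist_rep")
print("binding (interaction) effect:", round(benefit_rep - benefit_change, 1), "ms")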
Exploring the Effects of Atmospheric Forcings on Evaporation: Experimental Integration of the Atmospheric Boundary Layer and Shallow Subsurface
Authors: Kathleen Smits, Victoria Eagen, Andrew Trautz.
Institutions: Colorado School of Mines.
Evaporation is directly influenced by the interactions between the atmosphere, land surface, and soil subsurface. This work aims to experimentally study evaporation under various surface boundary conditions to improve our current understanding and characterization of this multiphase phenomenon, as well as to validate numerical heat and mass transfer theories that couple Navier-Stokes flow in the atmosphere with Darcian flow in the porous media. Experimental data were collected using a unique soil tank apparatus interfaced with a small climate-controlled wind tunnel. The experimental apparatus was instrumented with a suite of state-of-the-art sensor technologies for the continuous and autonomous collection of soil moisture, soil thermal properties, soil and air temperature, relative humidity, and wind speed. This experimental apparatus can be used to generate data under well-controlled boundary conditions, allowing for better control and gathering of accurate data at scales of interest not feasible in the field. Induced airflow at several distinct wind speeds over the soil surface resulted in unique behavior of heat and mass transfer during the different evaporative stages.
Environmental Sciences, Issue 100, Bare-soil evaporation, Land-atmosphere interactions, Heat and mass flux, Porous media, Wind tunnel, Soil thermal properties, Multiphase flow
Investigating the Function of Deep Cortical and Subcortical Structures Using Stereotactic Electroencephalography: Lessons from the Anterior Cingulate Cortex
Authors: Robert A. McGovern, Tarini Ratneswaren, Elliot H. Smith, Jennifer F. Russo, Amy C. Jongeling, Lisa M. Bateman, Catherine A. Schevon, Neil A. Feldstein, Guy M. McKhann, II, Sameer Sheth.
Institutions: Columbia University Medical Center, New York Presbyterian Hospital, King's College London.
Stereotactic Electroencephalography (SEEG) is a technique used to localize seizure foci in patients with medically intractable epilepsy. This procedure involves the chronic placement of multiple depth electrodes into regions of the brain typically inaccessible via subdural grid electrode placement. SEEG thus provides a unique opportunity to investigate brain function. In this paper we demonstrate how SEEG can be used to investigate the role of the dorsal anterior cingulate cortex (dACC) in cognitive control. We include a description of the SEEG procedure, demonstrating the surgical placement of the electrodes. We describe the components and process required to record local field potential (LFP) data from consenting subjects while they are engaged in a behavioral task. In the example provided, subjects play a cognitive interference task, and we demonstrate how signals are recorded and analyzed from electrodes in the dorsal anterior cingulate cortex, an area intimately involved in decision-making. We conclude with further suggestions of ways in which this method can be used for investigating human cognitive processes.
Neuroscience, Issue 98, epilepsy, stereotactic electroencephalography, anterior cingulate cortex, local field potential, electrode placement
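A basic analysis of LFP recorded during such a task can be sketched as follows: epoch the signal around behavioral events and average spectral power across trials. The signal, sampling rate, and event times below are synthetic stand-ins.

import numpy as np
from scipy.signal import spectrogram

fs = 1000                                     # Hz
rng = np.random.default_rng(10)
lfp = rng.standard_normal(120 * fs)           # 2 min of stand-in LFP
events = np.arange(5 * fs, 115 * fs, 3 * fs)  # task-event markers every 3 s

epochs = np.array([lfp[e - fs:e + fs] for e in events])   # -1 s to +1 s windows
f, t, sxx = spectrogram(epochs, fs=fs, nperseg=256, axis=-1)
mean_power = sxx.mean(axis=0)                 # trial-averaged time-frequency power
theta = mean_power[(f >= 4) & (f <= 8)].mean(axis=0)
print("theta-band power across the epoch:", np.round(theta, 3))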
Designing and Implementing Nervous System Simulations on LEGO Robots
Authors: Daniel Blustein, Nikolai Rosenthal, Joseph Ayers.
Institutions: Northeastern University, Bremen University of Applied Sciences.
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American Lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus as described by Braitenberg and are particularly well suited for investigation using the NXT platform.1 The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum.
Neuroscience, Issue 75, Neurobiology, Bioengineering, Behavior, Mechanical Engineering, Computer Science, Marine Biology, Biomimetics, Marine Science, Neurosciences, Synthetic Biology, Robotics, robots, Modeling, models, Sensory Fusion, nervous system, Educational Tools, programming, software, lobster, Homarus americanus, animal model
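The decussating wiring itself is easy to simulate: with crossed excitatory sensor-to-motor connections, a differential-drive agent turns toward the stimulus. A minimal kinematic sketch (not the LabVIEW/NXT implementation) follows; swap the connections to ipsilateral wiring and the agent turns away instead.

import math

x, y, heading = 0.0, 0.0, 0.0            # agent pose
src = (10.0, 5.0)                        # stimulus location
for step in range(60):
    # Two sensors offset +/- 0.5 rad from the heading
    readings = []
    for side in (+1, -1):
        sx = x + math.cos(heading + side * 0.5)
        sy = y + math.sin(heading + side * 0.5)
        d2 = (src[0] - sx) ** 2 + (src[1] - sy) ** 2
        readings.append(1.0 / (1.0 + d2))            # intensity falls with distance
    left, right = readings
    v_left, v_right = 0.2 + right, 0.2 + left        # decussating (crossed) drive
    heading += (v_right - v_left) * 0.5              # differential steering
    speed = 0.5 * (v_left + v_right)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)
print(f"final position: ({x:.1f}, {y:.1f})  target: {src}")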

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
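For readers curious what such matching can look like in principle, here is one common text-similarity recipe (TF-IDF plus cosine similarity); it is purely illustrative and not a description of JoVE's actual algorithms.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_texts = [
    "water maze spatial memory task in rodents",
    "EEG source analysis in children",
    "atmospheric pressure microwave plasma torch ignition",
]
abstract = "deep learning prediction of traffic congestion from taxi GPS data"

vec = TfidfVectorizer(stop_words="english")
n = len(video_texts)
m = vec.fit_transform(video_texts + [abstract])
scores = cosine_similarity(m[n], m[:n]).ravel()      # abstract vs. each video
for s, title in sorted(zip(scores, video_texts), reverse=True):
    print(f"{s:.2f}  {title}")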