Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Published: 08-06-2013
ABSTRACT
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Pre-processing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts the analysis to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
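The two model-specific pieces of the pipeline lend themselves to a brief sketch. As a hedged illustration only: lp-ntPET-style analyses commonly represent the transient dopamine response with a gamma-variate curve, and the statistical comparison against a conventional (nested) model is a standard F test. The function names, parameter values, and grid below are our own placeholders, not taken from the protocol.

```python
import numpy as np

def gamma_variate_response(t, t_d, t_p, alpha):
    """Time-varying activation curve of the gamma-variate form commonly
    used in lp-ntPET: zero before onset t_d, peaking (at value 1) at t_p,
    with sharpness alpha. Times in minutes."""
    t = np.asarray(t, dtype=float)
    h = np.zeros_like(t)
    rising = t >= t_d
    x = (t[rising] - t_d) / (t_p - t_d)
    h[rising] = x**alpha * np.exp(alpha * (1.0 - x))
    return h

def f_statistic(rss_conventional, rss_lpntpet, p_conv, p_lp, n_frames):
    """Nested-model F statistic comparing the conventional fit (fewer
    parameters, larger residual sum of squares) to the lp-ntPET fit."""
    num = (rss_conventional - rss_lpntpet) / (p_lp - p_conv)
    den = rss_lpntpet / (n_frames - p_lp)
    return num / den
```

In the masking step, voxels whose F statistic exceeds a chosen significance threshold would be retained for the movie; the threshold itself is a study design choice.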
22 Related JoVE Articles!
Handwriting Analysis Indicates Spontaneous Dyskinesias in Neuroleptic Naïve Adolescents at High Risk for Psychosis
Authors: Derek J. Dean, Hans-Leo Teulings, Michael Caligiuri, Vijay A. Mittal.
Institutions: University of Colorado Boulder, NeuroScript LLC, University of California, San Diego.
Growing evidence suggests that movement abnormalities are a core feature of psychosis. One marker of movement abnormality, dyskinesia, is a result of impaired neuromodulation of dopamine in fronto-striatal pathways. The traditional methods for identifying movement abnormalities include observer-based reports and force stability gauges. The drawbacks of these methods are long training times for raters, experimenter bias, large site differences in instrumental apparatus, and suboptimal reliability. These drawbacks have guided the development of better-standardized and more efficient procedures to examine movement abnormalities through handwriting analysis software and a digitizing tablet. Individuals at risk for psychosis showed significantly more dysfluent pen movements (a proximal measure for dyskinesia) in a handwriting task. Handwriting kinematics offers a marked advance over previous methods of assessing dyskinesia and could be beneficial for understanding the etiology of psychosis.
Behavior, Issue 81, Schizophrenia, Disorders with Psychotic Features, Psychology, Clinical, Psychopathology, behavioral sciences, Movement abnormalities, Ultra High Risk, psychosis, handwriting, computer tablet, dyskinesia
Fundus Photography as a Convenient Tool to Study Microvascular Responses to Cardiovascular Disease Risk Factors in Epidemiological Studies
Authors: Patrick De Boever, Tijs Louwies, Eline Provost, Luc Int Panis, Tim S. Nawrot.
Institutions: Flemish Institute for Technological Research (VITO), Hasselt University, Leuven University.
The microcirculation consists of blood vessels with diameters less than 150 µm. It makes up a large part of the circulatory system and plays an important role in maintaining cardiovascular health. The retina is a tissue that lines the interior of the eye and it is the only tissue that allows for a non-invasive analysis of the microvasculature. Nowadays, high-quality fundus images can be acquired using digital cameras. Retinal images can be collected in 5 min or less, even without dilatation of the pupils. This unobtrusive and fast procedure for visualizing the microcirculation is attractive for epidemiological studies and for monitoring cardiovascular health from early age up to old age. Systemic diseases that affect the circulation can result in progressive morphological changes in the retinal vasculature. For example, changes in the vessel calibers of retinal arteries and veins have been associated with hypertension, atherosclerosis, and increased risk of stroke and myocardial infarction. The vessel widths are derived using image analysis software, and the widths of the six largest arteries and veins are summarized in the Central Retinal Arteriolar Equivalent (CRAE) and the Central Retinal Venular Equivalent (CRVE). These features have proven useful for studying the impact of modifiable lifestyle and environmental cardiovascular disease risk factors. The procedures to acquire fundus images and the analysis steps to obtain CRAE and CRVE are described. Coefficients of variation of repeated measures of CRAE and CRVE are less than 2% and within-rater reliability is very high. Using a panel study, the rapid response of the retinal vessel calibers to short-term changes in particulate air pollution, a known risk factor for cardiovascular mortality and morbidity, is reported. In conclusion, retinal imaging is proposed as a convenient tool for epidemiological studies of microvascular responses to cardiovascular disease risk factors.
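Summarizing the six largest vessels into CRAE and CRVE is commonly done with the revised Parr-Hubbard formulas of Knudtson and colleagues, in which the widest and narrowest remaining vessels are iteratively combined into a single equivalent caliber. The sketch below assumes that convention; the branching coefficients (0.88 for arterioles, 0.95 for venules) follow that revision, and the function names are our own.

```python
import math

def summarize_calibers(widths, k):
    """Iteratively pair the largest with the smallest remaining width,
    combining each pair as k * sqrt(w_big^2 + w_small^2), until a single
    summary caliber remains."""
    w = sorted(widths, reverse=True)
    while len(w) > 1:
        big = w.pop(0)
        small = w.pop(-1)
        w.append(k * math.sqrt(big**2 + small**2))
        w.sort(reverse=True)
    return w[0]

def crae(arteriole_widths):
    """Central Retinal Arteriolar Equivalent from the six largest arterioles."""
    return summarize_calibers(arteriole_widths, 0.88)

def crve(venule_widths):
    """Central Retinal Venular Equivalent from the six largest venules."""
    return summarize_calibers(venule_widths, 0.95)
```

Widths are typically measured in micrometers within a fixed annulus around the optic disc; the pairing rule is independent of the unit.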
Medicine, Issue 92, retina, microvasculature, image analysis, Central Retinal Arteriolar Equivalent, Central Retinal Venular Equivalent, air pollution, particulate matter, black carbon
Fast and Accurate Exhaled Breath Ammonia Measurement
Authors: Steven F. Solga, Matthew L. Mudalel, Lisa A. Spacek, Terence H. Risby.
Institutions: St. Luke's University Hospital, Johns Hopkins School of Medicine, Johns Hopkins University.
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.
Medicine, Issue 88, Breath, ammonia, breath measurement, breath analysis, QEPAS, volatile organic compound
A Simplified Technique for In situ Excision of Cornea and Evisceration of Retinal Tissue from Human Ocular Globe
Authors: Mohit Parekh, Stefano Ferrari, Enzo Di Iorio, Vanessa Barbaro, Davide Camposampiero, Marianthi Karali, Diego Ponzin, Gianni Salvalaio.
Institutions: Fondazione Banca Degli Occhi del Veneto O.N.L.U.S. , Telethon Institute for Genetics & Medicine (T.I.G.E.M.).
Enucleation is the process of retrieving the ocular globe from a cadaveric donor while leaving the rest of the orbit undisturbed. Excision refers to the retrieval of ocular tissues, especially the cornea, by cutting it free from the ocular globe. Evisceration is the process of removing the internal contents, referred to here as the retina. The ocular globe consists of the cornea, the sclera, the vitreous body, the lens, the iris, the retina, the choroid, the muscles, etc. (Suppl. Figure 1). When a patient is suffering from corneal damage, the damaged cornea needs to be removed and a healthy one must be transplanted by keratoplastic surgery. Genetic disorders or defects in retinal function can compromise vision. Human ocular globes can be used for various purposes such as eye banking, transplantation of human cornea or sclera, and research on ocular tissues. However, there is little information available on human corneal and retinal excision, probably due to the limited accessibility of human tissues. Most of the studies describing similar procedures are performed on animal models. Research scientists rely on the availability of properly dissected and well-conserved ocular tissues in order to extend our knowledge of human eye development, homeostasis and function. Because we receive a large number of ocular globes, approximately 40% of which (Table 1) are used for research purposes, we are able to perform a large number of experiments on these tissues and have defined techniques to excise and preserve them routinely. The cornea is an avascular tissue which transmits light onto the retina and for this purpose should always maintain a good degree of transparency. Within the cornea, the limbus region, a reservoir of stem cells, supports the renewal of the corneal epithelium and restricts overgrowth of the conjunctiva, maintaining corneal transparency and clarity.
The size and thickness of the cornea are critical for clear vision, as changes in either could lead to distorted, unclear vision. The cornea comprises five layers: a) epithelium, b) Bowman's layer, c) stroma, d) Descemet's membrane and e) endothelium. All layers should function properly to ensure clear vision4,5,6. The choroid is the intermediate tunic between the sclera and retina, bounded on the interior by Bruch's membrane, and is responsible for blood flow in the eye. The choroid also helps to regulate temperature and supplies nourishment to the outer layers of the retina5,6. The retina is a layer of nervous tissue that covers the back of the ocular globe (Suppl. Figure 1) and consists of two parts: a photoreceptive part and a non-receptive part. The retina receives light focused by the cornea and lens and converts it into electrochemical signals that are eventually transmitted to the brain via the optic nerve5,6. The aim of this paper is to provide a protocol for the dissection of corneal and retinal tissues from human ocular globes. Avoiding cross-contamination with adjacent tissues and preserving RNA integrity are of fundamental importance, as such tissues are indispensable for research purposes aimed at (i) characterizing the transcriptome of the ocular tissues, (ii) isolating stem cells for regenerative medicine projects, and (iii) evaluating histological differences between tissues from normal/affected subjects. In this paper we describe the technique we currently use to remove the cornea, the choroid and retinal tissues from an ocular globe, and we provide a detailed protocol for the dissection of the human ocular globe and the excision of corneal and retinal tissues. The accompanying video will help researchers learn an appropriate technique for the retrieval of precious human tissues that are difficult to obtain regularly.
Medicine, Issue 64, Physiology, Human cadaver ocular globe, in situ excision, corneal tissue, in situ evisceration, retinal tissue
Simultaneous EEG Monitoring During Transcranial Direct Current Stimulation
Authors: Pedro Schestatsky, Leon Morales-Quezada, Felipe Fregni.
Institutions: Universidade Federal do Rio Grande do Sul, Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), Harvard Medical School, De Montfort University.
Transcranial direct current stimulation (tDCS) is a technique that delivers weak electric currents through the scalp. This constant electric current induces shifts in neuronal membrane excitability, resulting in secondary changes in cortical activity. Although tDCS has most of its neuromodulatory effects on the underlying cortex, tDCS effects can also be observed in distant neural networks. Therefore, concomitant EEG monitoring of the effects of tDCS can provide valuable information on the mechanisms of tDCS. In addition, EEG findings can be an important surrogate marker for the effects of tDCS and thus can be used to optimize its parameters. This combined EEG-tDCS system could also be used for preventive treatment of neurological conditions characterized by abnormal peaks of cortical excitability, such as seizures. Such a system would be the basis of a non-invasive closed-loop device. In this article, we present a novel device capable of delivering tDCS and recording EEG simultaneously. To that end, we describe in a step-by-step fashion the main procedures for the application of this device using schematic figures, tables and video demonstrations. Additionally, we provide a literature review on clinical uses of tDCS and its cortical effects as measured by EEG techniques.
Behavior, Issue 76, Medicine, Neuroscience, Neurobiology, Anatomy, Physiology, Biomedical Engineering, Psychology, electroencephalography, electroencephalogram, EEG, transcranial direct current stimulation, tDCS, noninvasive brain stimulation, neuromodulation, closed-loop system, brain, imaging, clinical techniques
Eye Tracking, Cortisol, and a Sleep vs. Wake Consolidation Delay: Combining Methods to Uncover an Interactive Effect of Sleep and Cortisol on Memory
Authors: Kelly A. Bennion, Katherine R. Mickley Steinmetz, Elizabeth A. Kensinger, Jessica D. Payne.
Institutions: Boston College, Wofford College, University of Notre Dame.
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, little is currently known about how these two factors interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants’ eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects. Together, results showed that there is a direct relation between resting cortisol at encoding and subsequent memory, but only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.
Behavior, Issue 88, attention, consolidation, cortisol, emotion, encoding, glucocorticoids, memory, sleep, stress
Assessment and Evaluation of the High Risk Neonate: The NICU Network Neurobehavioral Scale
Authors: Barry M. Lester, Lynne Andreozzi-Fontaine, Edward Tronick, Rosemarie Bigsby.
Institutions: Brown University, Women & Infants Hospital of Rhode Island, University of Massachusetts, Boston.
There has been a long-standing interest in the assessment of the neurobehavioral integrity of the newborn infant. The NICU Network Neurobehavioral Scale (NNNS) was developed as an assessment for the at-risk infant. These are infants who are at increased risk for poor developmental outcome because of insults during prenatal development, such as substance exposure or prematurity, or factors such as poverty, poor nutrition or lack of prenatal care that can have adverse effects on the intrauterine environment and affect the developing fetus. The NNNS assesses the full range of infant neurobehavioral performance including neurological integrity, behavioral functioning, and signs of stress/abstinence. The NNNS is a noninvasive neonatal assessment tool with demonstrated validity as a predictor, not only of medical outcomes such as cerebral palsy diagnosis, neurological abnormalities, and diseases with risks to the brain, but also of developmental outcomes such as mental and motor functioning, behavior problems, school readiness, and IQ. The NNNS can identify infants at high risk for abnormal developmental outcome and is an important clinical tool that enables medical researchers and health practitioners to identify these infants and develop intervention programs to optimize their development as early as possible. The video demonstrates the NNNS procedures and shows examples of normal and abnormal performance, as well as the various clinical populations in which the exam can be used.
Behavior, Issue 90, NICU Network Neurobehavioral Scale, NNNS, High risk infant, Assessment, Evaluation, Prediction, Long term outcome
Isolation of Mouse Respiratory Epithelial Cells and Exposure to Experimental Cigarette Smoke at Air Liquid Interface
Authors: Hilaire C. Lam, Augustine M.K. Choi, Stefan W. Ryter.
Institutions: Harvard Medical School, University of Pittsburgh.
Pulmonary epithelial cells can be isolated from the respiratory tract of mice and cultured at air-liquid interface (ALI) as a model of differentiated respiratory epithelium. A protocol is described for isolating and exposing these cells to mainstream cigarette smoke (CS), in order to study epithelial cell responses to CS exposure. The protocol consists of three parts: the isolation of airway epithelial cells from mouse trachea, the culturing of these cells at ALI as fully differentiated epithelial cells, and the delivery of calibrated mainstream CS to these cells in culture. The ALI culture system allows the culture of respiratory epithelia under conditions that more closely resemble their physiological setting than ordinary liquid culture systems. The study of molecular and cellular lung responses to CS exposure is a critical component of understanding the impact of environmental air pollution on human health. Research findings in this area may ultimately contribute towards understanding the etiology of chronic obstructive pulmonary disease (COPD) and other tobacco-related diseases, which represent major global health problems.
Medicine, Issue 48, Air-Liquid Interface, Cell isolation, Cigarette smoke, Epithelial cells
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
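A triage of this kind can be pictured as a simple decision function over data-set characteristics. The sketch below is purely illustrative: the ordering of the criteria and the threshold on the feature fraction are hypothetical placeholders, not the authors' published scheme.

```python
def choose_segmentation_approach(snr_high, characteristic_shapes,
                                 feature_fraction, few_objects):
    """Map data-set characteristics to one of the four categorical
    approaches named in the text. All thresholds and the ordering of
    the rules are hypothetical, for illustration only."""
    if snr_high and characteristic_shapes:
        # Clean data with recognizable shapes: automation is feasible.
        return "automated custom algorithm + surface rendering"
    if snr_high:
        return "semi-automated + surface rendering"
    if few_objects and feature_fraction < 0.05:
        # A handful of small features: manual model building is tractable.
        return "fully manual model building + visualization"
    return "manual tracing + surface rendering"
```

In practice, as the text notes, more than one approach may succeed for a given data set; a decision function like this only encodes a preferred first attempt.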
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
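For reference, fractional anisotropy is computed per voxel from the three eigenvalues of the diffusion tensor using its standard definition; a minimal sketch (function name ours):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA definition from the diffusion-tensor eigenvalues:
    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||.
    Returns 0 for an isotropic tensor and approaches 1 for a
    maximally anisotropic one."""
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean)**2 + (l2 - mean)**2 + (l3 - mean)**2
    den = l1**2 + l2**2 + l3**2
    return math.sqrt(1.5 * num / den) if den > 0 else 0.0
```

Voxelwise FA maps built this way are the inputs both to the whole-brain comparisons and to the tractwise (TFAS) statistics described above.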
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
A Multi-Modal Approach to Assessing Recovery in Youth Athletes Following Concussion
Authors: Nick Reed, James Murphy, Talia Dick, Katie Mah, Melissa Paniccia, Lee Verweel, Danielle Dobney, Michelle Keightley.
Institutions: Holland Bloorview Kids Rehabilitation Hospital, University of Toronto.
Concussion is one of the most commonly reported injuries amongst children and youth involved in sport participation. Following a concussion, youth can experience a range of short and long term neurobehavioral symptoms (somatic, cognitive and emotional/behavioral) that can have a significant impact on one’s participation in daily activities and pursuits of interest (e.g., school, sports, work, family/social life, etc.). Despite this, there remains a paucity of clinically driven research aimed specifically at exploring concussion within the youth sport population and, more specifically, at multi-modal approaches to measuring recovery. This article provides an overview of a novel, multi-modal approach to measuring recovery amongst youth athletes following concussion. The presented approach involves the use of both pre-injury/baseline testing and post-injury/follow-up testing to assess performance across a wide variety of domains (post-concussion symptoms, cognition, balance, strength, agility/motor skills and resting state heart rate variability). The goal of this research is to gain a more objective and accurate understanding of recovery following concussion in youth athletes (ages 10-18 years). Findings from this research can help to inform the development and use of improved approaches to concussion management and rehabilitation specific to the youth sport community.
Medicine, Issue 91, concussion, children, youth, athletes, assessment, management, rehabilitation
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Authors: Erin C. Connors, Lindsay A. Yazzolino, Jaime Sánchez, Lotfi B. Merabet.
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile .
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
Development of a Virtual Reality Assessment of Everyday Living Skills
Authors: Stacy A. Ruse, Vicki G. Davis, Alexandra S. Atkins, K. Ranga R. Krishnan, Kolleen H. Fox, Philip D. Harvey, Richard S.E. Keefe.
Institutions: NeuroCog Trials, Inc., Duke-NUS Graduate Medical Center, Duke University Medical Center, Fox Evaluation and Consulting, PLLC, University of Miami Miller School of Medicine.
Cognitive impairments affect the majority of patients with schizophrenia and these impairments predict poor long term psychosocial outcomes.  Treatment studies aimed at cognitive impairment in patients with schizophrenia not only require demonstration of improvements on cognitive tests, but also evidence that any cognitive changes lead to clinically meaningful improvements.  Measures of “functional capacity” index the extent to which individuals have the potential to perform skills required for real world functioning.  Current data do not support the recommendation of any single instrument for measurement of functional capacity.  The Virtual Reality Functional Capacity Assessment Tool (VRFCAT) is a novel, interactive gaming based measure of functional capacity that uses a realistic simulated environment to recreate routine activities of daily living. Studies are currently underway to evaluate and establish the VRFCAT’s sensitivity, reliability, validity, and practicality. This new measure of functional capacity is practical, relevant, easy to use, and has several features that improve validity and sensitivity of measurement of function in clinical trials of patients with CNS disorders.
Behavior, Issue 86, Virtual Reality, Cognitive Assessment, Functional Capacity, Computer Based Assessment, Schizophrenia, Neuropsychology, Aging, Dementia
Ultrasound Assessment of Endothelial-Dependent Flow-Mediated Vasodilation of the Brachial Artery in Clinical Research
Authors: Hugh Alley, Christopher D. Owens, Warren J. Gasper, S. Marlene Grenon.
Institutions: University of California, San Francisco, Veterans Affairs Medical Center, San Francisco.
The vascular endothelium is a monolayer of cells that cover the interior of blood vessels and provide both structural and functional roles. The endothelium acts as a barrier, preventing leukocyte adhesion and aggregation, as well as controlling permeability to plasma components. Functionally, the endothelium affects vessel tone. Endothelial dysfunction is an imbalance between the chemical species which regulate vessel tone, thromboresistance, cellular proliferation and mitosis. It is the first step in atherosclerosis and is associated with coronary artery disease, peripheral artery disease, heart failure, hypertension, and hyperlipidemia. The first demonstration of endothelial dysfunction involved direct infusion of acetylcholine and quantitative coronary angiography. Acetylcholine binds to muscarinic receptors on the endothelial cell surface, leading to an increase of intracellular calcium and increased nitric oxide (NO) production. In subjects with an intact endothelium, vasodilation was observed while subjects with endothelial damage experienced paradoxical vasoconstriction. There exists a non-invasive, in vivo method for measuring endothelial function in peripheral arteries using high-resolution B-mode ultrasound. The endothelial function of peripheral arteries is closely related to coronary artery function. This technique measures the percent diameter change in the brachial artery during a period of reactive hyperemia following limb ischemia. This technique, known as endothelium-dependent, flow-mediated vasodilation (FMD), has value in clinical research settings. However, a number of physiological and technical issues can affect the accuracy of the results, and appropriate guidelines for the technique have been published. Despite the guidelines, FMD remains heavily operator dependent and presents a steep learning curve.
This article presents a standardized method for measuring FMD in the brachial artery on the upper arm and offers suggestions to reduce intra-operator variability.
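The FMD metric itself is a simple percent change from the resting baseline diameter. A minimal sketch of the calculation (function name and example diameters are illustrative, not taken from the protocol):

```python
def fmd_percent(baseline_mm, peak_mm):
    """Flow-mediated dilation: percent diameter change from the resting
    baseline to the peak diameter during reactive hyperemia."""
    return (peak_mm - baseline_mm) / baseline_mm * 100.0

# e.g. a brachial artery dilating from 4.0 mm to 4.3 mm gives 7.5% FMD
```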
Medicine, Issue 92, endothelial function, endothelial dysfunction, brachial artery, peripheral artery disease, ultrasound, vascular, endothelium, cardiovascular disease.
52070
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
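The core computation in such incubations is a flux: the slope of water-column P mass over time, normalized to sediment area. A hedged sketch (function name, units, and the least-squares approach are illustrative assumptions, not the authors' code):

```python
def p_flux_mg_m2_d(conc_mg_l, times_d, volume_l, area_m2):
    """Areal P release rate (mg P m^-2 d^-1): least-squares slope of
    water-column P mass (mg) vs. time (d), divided by sediment area."""
    masses = [c * volume_l for c in conc_mg_l]
    n = len(times_d)
    t_bar = sum(times_d) / n
    m_bar = sum(masses) / n
    slope = (sum((t - t_bar) * (m - m_bar) for t, m in zip(times_d, masses))
             / sum((t - t_bar) ** 2 for t in times_d))
    return slope / area_m2
```

With a typical small core tube (here ~0.005 m² of sediment under 1 L of overlying water), a linear rise of 0.1 mg/L per day corresponds to a flux of 20 mg P m⁻² d⁻¹.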
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
51617
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
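The ~10-30 nm precision quoted above follows from photon statistics. A common back-of-the-envelope estimate is the Thompson-style formula below (a standard approximation in the localization-microscopy literature, not necessarily the exact estimator used in this protocol):

```python
import math

def localization_precision_nm(s_nm, a_nm, n_photons, bg):
    """Approximate 2D localization precision: PSF standard deviation s,
    pixel size a, N detected photons, background bg (photons per pixel)."""
    var = ((s_nm ** 2 + a_nm ** 2 / 12) / n_photons
           + 8 * math.pi * s_nm ** 4 * bg ** 2 / (a_nm ** 2 * n_photons ** 2))
    return math.sqrt(var)
```

With ~1,000 detected photons, a 125 nm PSF, 100 nm pixels and negligible background this gives roughly 4 nm; dimmer single-frame localizations with realistic background land in the quoted 10-30 nm range.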
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
50680
The use of Biofeedback in Clinical Virtual Reality: The INTREPID Project
Authors: Claudia Repetto, Alessandra Gorini, Cinzia Vigna, Davide Algeri, Federica Pallavicini, Giuseppe Riva.
Institutions: Istituto Auxologico Italiano, Università Cattolica del Sacro Cuore.
Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by constant, unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes underscore the need for new, efficient strategies to treat it. Together with cognitive-behavioral treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation of being difficult to learn. The INTREPID project aims to implement a new instrument to treat anxiety-related disorders and to test its clinical efficacy in reducing anxiety-related symptoms. The innovation of this approach is the combination of virtual reality and biofeedback, such that the virtual environment is directly modified by the biofeedback output. In this way, the patient is made aware of his or her reactions through the real-time modification of features of the VR environment. Using mental exercises, the patient learns to control these physiological parameters, and using the feedback provided by the virtual environment is able to gauge his or her success. The supplemental use of portable devices, such as PDAs or smartphones, allows the patient to perform the same exercises experienced in the therapist's office at home, individually and autonomously. The goal is to anchor the learned protocol in a real-life context, thus enhancing the patients' ability to deal with their symptoms. The expected result is better and faster learning of relaxation techniques, and thus increased effectiveness of the treatment compared with traditional clinical protocols.
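The biofeedback-to-VR coupling can be sketched as a simple mapping from a physiological signal to a visual parameter. The following toy example is entirely illustrative (it is not the INTREPID implementation): it drives a fog-density value in the virtual scene from heart rate, so that a calmer patient sees a clearer scene.

```python
def fog_density(heart_rate_bpm, hr_calm=60.0, hr_aroused=120.0):
    """Map heart rate to a fog density in [0, 1]: the calmer the
    patient, the clearer the virtual scene becomes."""
    arousal = (heart_rate_bpm - hr_calm) / (hr_aroused - hr_calm)
    return min(1.0, max(0.0, arousal))
```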
Neuroscience, Issue 33, virtual reality, biofeedback, generalized anxiety disorder, Intrepid, cybertherapy, cyberpsychology
1554
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University; Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and intracranial pressure (ICP) pre-screening. To estimate the midline shift, an estimate of the ideal midline is first computed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, the ventricles are segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification methods such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. Evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
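The geometric core of the midline-shift step can be caricatured in a few lines: estimate an ideal midline from skull symmetry, an actual midline from the segmented ventricles, and take the difference. This toy sketch (binary masks as nested lists, with centroid columns standing in for the paper's shape matching) is illustrative only:

```python
def centroid_col(mask):
    """Mean occupied column index of a binary mask (list of rows)."""
    cols = [x for row in mask for x, v in enumerate(row) if v]
    return sum(cols) / len(cols)

def midline_shift_mm(skull_mask, ventricle_mask, px_mm):
    """Shift = |ventricle centroid column - skull centroid column|, in mm."""
    return abs(centroid_col(ventricle_mask) - centroid_col(skull_mask)) * px_mm
```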
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
3871
A Protocol for Detecting and Scavenging Gas-phase Free Radicals in Mainstream Cigarette Smoke
Authors: Long-Xi Yu, Boris G. Dzikovski, Jack H. Freed.
Institutions: CDCF-AOX Lab, Cornell University.
Cigarette smoking is associated with human cancers. It has been reported that most lung cancer deaths are caused by cigarette smoking 5,6,7,12. Although tobacco tars and related products in the particle phase of cigarette smoke are major causes of carcinogenic and mutagenic diseases, cigarette smoke also contains significant amounts of free radicals, which are considered an important group of carcinogens9,10. Free radicals attack cell constituents by damaging protein structure, lipids, and DNA sequences, and increase the risk of developing various types of cancer. Inhaled radicals produce adducts that contribute to many of the negative health effects of tobacco smoke in the lung3. Studies have been conducted to reduce free radicals in cigarette smoke in order to decrease the risks of smoking-induced damage. It has been reported that haemoglobin and heme-containing compounds can partially scavenge nitric oxide, reactive oxidants, and carcinogenic volatile nitroso compounds in cigarette smoke4. A 'bio-filter' consisting of haemoglobin and activated carbon was used to scavenge free radicals and removed up to 90% of the free radicals from cigarette smoke14. However, due to its cost-ineffectiveness, it has not been successfully commercialized. Another study showed good scavenging efficiency for shikonin, a component of a Chinese herbal medicine8. In the present study, we report a protocol for introducing common natural antioxidant extracts into the cigarette filter to scavenge gas-phase free radicals in cigarette smoke, and for measuring the scavenging effect on gas-phase free radicals in mainstream cigarette smoke (MCS) using spin-trapping Electron Spin Resonance (ESR) spectroscopy1,2,14. We show the high scavenging capacity of lycopene and grape seed extract, which points to their possible future application in cigarette filters.
An important advantage of these prospective scavengers is that they can be obtained in large quantities from byproducts of the tomato and wine industries, respectively11,13.
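Scavenging capacity in such experiments reduces to a percent decrease in the ESR spin-adduct signal relative to an unmodified-filter control. A minimal sketch (function name and numbers are illustrative):

```python
def scavenging_pct(control_signal, treated_signal):
    """Percent of gas-phase radicals removed, inferred from the drop in
    spin-adduct ESR signal vs. an unmodified-filter control."""
    return (control_signal - treated_signal) / control_signal * 100.0

# a filter that cuts the ESR signal from 100 to 25 a.u. scavenges 75%
```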
Bioengineering, Issue 59, Cigarette smoke, free radical, spin-trap, ESR
3406
Improving IV Insulin Administration in a Community Hospital
Authors: Michael C. Magee.
Institutions: Wyoming Medical Center.
Diabetes mellitus is a major independent risk factor for increased morbidity and mortality in the hospitalized patient, and elevated blood glucose concentrations, even in non-diabetic patients, predict poor outcomes.1-4 The 2008 consensus statement by the American Association of Clinical Endocrinologists (AACE) and the American Diabetes Association (ADA) states that "hyperglycemia in hospitalized patients, irrespective of its cause, is unequivocally associated with adverse outcomes."5 It is important to recognize that hyperglycemia occurs in patients with known or undiagnosed diabetes as well as during acute illness in those with previously normal glucose tolerance. The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study involved over six thousand adult intensive care unit (ICU) patients who were randomized to intensive or conventional glucose control.6 Surprisingly, this trial found that intensive glucose control increased the risk of mortality by 14% (odds ratio, 1.14; p=0.02). In addition, severe hypoglycemia was more prevalent in the intensive control group than in the conventional control group (6.8% vs. 0.5%, respectively; p<0.001). From this pivotal trial and two others,7,8 Wyoming Medical Center (WMC) recognized the importance of controlling hyperglycemia in the hospitalized patient while avoiding the negative impact of resultant hypoglycemia. Despite multiple revisions of a paper-based IV insulin protocol, analysis of data from its use at WMC showed that results were suboptimal in terms of achieving normoglycemia while minimizing hypoglycemia. Therefore, through a systematic implementation plan, monitoring of patient blood glucose levels was switched from the paper IV insulin protocol to a computerized glucose management system.
By comparing blood glucose levels under the paper protocol with those under the computerized system, it was determined that, overall, the computerized glucose management system produced more rapid and tighter glucose control than the traditional paper protocol. Specifically, in the first five months after implementation of the computerized system, there was a substantial increase in the time spent within the target blood glucose concentration range, as well as a decrease in the prevalence of severe hypoglycemia (BG < 40 mg/dL), clinical hypoglycemia (BG < 70 mg/dL), and hyperglycemia (BG > 180 mg/dL). The computerized system achieved target concentrations in greater than 75% of all readings while minimizing the risk of hypoglycemia; the prevalence of hypoglycemia (BG < 70 mg/dL) with the computerized system was well under 1%.
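The outcome measures reported above are simple proportions over all BG readings. A hedged sketch of the tallies (thresholds taken from the text; function name is illustrative):

```python
def glucose_metrics(readings_mg_dl):
    """Percent of readings in the 70-180 mg/dL target range, plus
    prevalence of clinical (<70) and severe (<40) hypoglycemia and
    of hyperglycemia (>180)."""
    n = len(readings_mg_dl)
    pct = lambda k: 100.0 * k / n
    return {
        "in_target": pct(sum(70 <= r <= 180 for r in readings_mg_dl)),
        "hypo": pct(sum(r < 70 for r in readings_mg_dl)),
        "severe_hypo": pct(sum(r < 40 for r in readings_mg_dl)),
        "hyper": pct(sum(r > 180 for r in readings_mg_dl)),
    }
```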
Medicine, Issue 64, Physiology, Computerized glucose management, Endotool, hypoglycemia, hyperglycemia, diabetes, IV insulin, paper protocol, glucose control
3705
Combining Behavioral Endocrinology and Experimental Economics: Testosterone and Social Decision Making
Authors: Christoph Eisenegger, Michael Naef.
Institutions: University of Zurich, Royal Holloway, University of London.
Behavioral endocrinological research in humans as well as in animals suggests that testosterone plays a key role in social interactions. Studies in rodents have shown a direct link between testosterone and aggressive behavior1, and folk wisdom extends these findings to humans, suggesting that testosterone induces antisocial, egoistic, or even aggressive behavior2. However, many researchers doubt a direct testosterone-aggression link in humans, arguing instead that testosterone is primarily involved in status-related behavior3,4. As high status can also be achieved by aggressive and antisocial means, it can be difficult to distinguish between antisocial and status-seeking behavior. We therefore set up an experimental environment in which status can be achieved only by prosocial means. In a double-blind, placebo-controlled experiment, we administered a single sublingual dose of 0.5 mg of testosterone (with a hydroxypropyl-β-cyclodextrin carrier) to 121 women and investigated their social interaction behavior in an economic bargaining paradigm. Real monetary incentives are at stake in this paradigm; every player A receives a certain amount of money and must make an offer to another player B on how to share it. If B accepts, she gets what was offered and player A keeps the rest. If B rejects the offer, nobody gets anything. A status-seeking player A is expected to avoid rejection by behaving prosocially, i.e. by making higher offers. The results show that, once expectations about the hormone are controlled for, testosterone administration leads to a significant increase in fair bargaining offers compared with placebo. The role of expectations is reflected in the fact that subjects who believe they received testosterone make lower offers than those who believe they were treated with a placebo.
These findings suggest that the experimental economics approach is sensitive enough to detect neurobiological effects as subtle as those achieved by hormone administration. Moreover, the findings point to the importance of both psychosocial and neuroendocrine factors in determining the influence of testosterone on human social behavior.
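The bargaining paradigm's payoff rule is easy to state in code. A minimal sketch (names and the fixed-threshold responder model are illustrative; real responders need not use a fixed threshold):

```python
def ultimatum_round(endowment, offer, b_min_acceptable):
    """One round: A offers part of the endowment; B accepts iff the offer
    meets her minimum. Returns (A's payoff, B's payoff)."""
    if offer >= b_min_acceptable:
        return endowment - offer, offer
    return 0, 0  # rejection: nobody gets anything
```

A status-seeking A raises the offer to reduce the chance of falling below B's acceptance threshold, which is exactly the prosocial channel the experiment isolates.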
Neuroscience, Issue 49, behavioral endocrinology, testosterone, social status, decision making
2065
Laparoscopic Left Liver Sectoriectomy of Caroli's Disease Limited to Segment II and III
Authors: Luigi Boni, Gianlorenzo Dionigi, Francesca Rovera, Matteo Di Giuseppe.
Institutions: University of Insubria.
Caroli's disease is defined as an abnormal dilatation of the intra-hepatic bile ducts. Its incidence is extremely low (1 in 1,000,000), and in most cases the whole liver is involved, making liver transplantation the treatment of choice. In cases of dilatation limited to the left or right lobe, liver resection can be performed. For many years the standard approach to liver resection has been formal laparotomy through a large abdominal incision, which carries significant post-operative morbidity. More recently, a minimally invasive laparoscopic approach has been proposed as a possible surgical technique for liver resection in both benign and malignant disease. The main benefit of the minimally invasive approach is a significant reduction in surgical trauma, allowing faster recovery and fewer post-operative complications. This video shows a case of Caroli's disease in a 58-year-old male admitted to the gastroenterology department for sudden-onset abdominal pain associated with fever (>38 °C), nausea, and shivering. Abdominal ultrasound demonstrated significant dilatation of the left-sided intra-hepatic bile ducts with no evidence of gallbladder or common bile duct stones. These findings were confirmed by high-resolution abdominal computed tomography, and laparoscopic left sectoriectomy was planned. Five trocars and a 30° optic were used; exploration of the abdominal cavity showed no adhesions or evidence of other disease. To control blood inflow to the liver, a vascular clamp was placed on the hepatic pedicle (Pringle's manoeuvre). Parenchymal division was carried out with the combined use of 5 mm bipolar forceps and a 5 mm ultrasonic dissector. The severely dilated left hepatic duct was isolated and divided using a 45 mm endoscopic vascular stapler. Liver dissection was continued up to isolation of the main left portal branch, which was then divided with a further 45 mm vascular stapler cartridge.
At this point the left liver remained attached only by the left hepatic vein: the triangular ligament was divided using a monopolar hook, and the hepatic vein was isolated and divided using a vascular stapler. Haemostasis was refined by application of argon beam coagulation, and no bleeding was observed even after removal of the vascular clamp (total Pringle's time: 27 minutes). The postoperative course was uneventful; minimal elevation of liver function tests was recorded on post-operative day 1 but had returned to normal at discharge on post-operative day 3.
Medicine, Issue 24, Laparoscopy, Liver resection, Caroli's disease, Left sectoriectomy
1118
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches with only a slight relation.
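One simple way to score abstract-to-video relatedness of the kind described above is bag-of-words cosine similarity (purely illustrative; JoVE's actual matching algorithm is not disclosed here):

```python
import math
from collections import Counter

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two bag-of-words token lists:
    1.0 for identical vocabularies, 0.0 for disjoint ones."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Ranking the video library by such a score against each abstract, and keeping the top 10 to 30 hits, is the shape of pipeline the page describes; weak matches arise exactly when every score in the library is low.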