JoVE Visualize
 
PubMed Article
Spatial and temporal pattern of Rift Valley fever outbreaks in Tanzania; 1930 to 2007.
PLoS ONE
PUBLISHED: 01-01-2014
Rift Valley fever (RVF)-like disease was first reported in Tanzania more than eight decades ago and the last large outbreak of the disease occurred in 2006-07. This study investigates the spatial and temporal pattern of RVF outbreaks in Tanzania over the past 80 years in order to guide prevention and control strategies.
Authors: Birte Kalveram, Olga Lihoradova, Sabarish V. Indran, Tetsuro Ikegami.
Published: 11-01-2011
ABSTRACT
Rift Valley fever virus (RVFV), which causes hemorrhagic fever, neurological disorders or blindness in humans, and a high rate of abortion and fetal malformation in ruminants1, has been classified as an HHS/USDA overlap select agent and a risk group 3 pathogen. It belongs to the genus Phlebovirus in the family Bunyaviridae and is one of the most virulent members of this family. Several reverse genetics systems for the RVFV MP-12 vaccine strain2,3 as well as wild-type RVFV strains4-6, including ZH548 and ZH501, have been developed since 2006. The MP-12 strain (which is a risk group 2 pathogen and a non-select agent) is highly attenuated by several mutations in its M- and L-segments, but still carries virulent S-segment RNA3, which encodes a functional virulence factor, NSs. The rMP12-C13type (C13type) mutant, which carries a 69% in-frame deletion of the NSs ORF, lacks all known NSs functions, yet replicates as efficiently as MP-12 in VeroE6 cells, which lack type-I IFN. NSs induces a shut-off of host transcription, including interferon (IFN)-beta mRNA7,8, and promotes degradation of double-stranded RNA-dependent protein kinase (PKR) at the post-translational level9,10. IFN-beta is transcriptionally upregulated by interferon regulatory factor 3 (IRF-3), NF-kB and activator protein-1 (AP-1), and the binding of IFN-beta to the IFN-alpha/beta receptor (IFNAR) stimulates the transcription of IFN-alpha genes and other interferon-stimulated genes (ISGs)11, which induce host antiviral activities. Although IRF-3, NF-kB and AP-1 can be activated by RVFV7, the suppression of host transcription by NSs, including that of the IFN-beta gene, prevents the upregulation of those ISGs in response to viral replication. Thus, NSs is an excellent target to further attenuate MP-12, and to enhance host innate immune responses by abolishing its IFN-beta suppression function. Here, we describe a protocol for generating a recombinant MP-12 encoding mutated NSs, and provide an example of a screening method to identify NSs mutants lacking the ability to suppress IFN-beta mRNA synthesis. In addition to its essential role in innate immunity, type-I IFN is important for the maturation of dendritic cells and the induction of an adaptive immune response12-14. Thus, NSs mutants inducing type-I IFN are further attenuated, but at the same time are more efficient at stimulating host immune responses than wild-type MP-12, which makes them ideal candidates for vaccination approaches.
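Because the screening compares IFN-beta mRNA induction between NSs mutants and parental MP-12, a qRT-PCR readout is a natural fit. As a minimal sketch (the qRT-PCR readout and all Ct values here are illustrative assumptions, not details taken from this protocol), the standard Livak 2^-ddCt fold-change calculation looks like this:

```python
# Minimal sketch: 2^-ddCt fold-change calculation for IFN-beta mRNA,
# assuming a qRT-PCR readout normalized to a housekeeping gene.
# All Ct values below are illustrative, not data from the article.

def fold_change(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Livak 2^-ddCt method: fold change of target gene vs. mock control."""
    d_ct_sample = ct_target - ct_reference            # normalize to housekeeping gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: IFN-beta vs. GAPDH in infected and mock cells.
samples = {
    "MP-12 (intact NSs)":  (30.5, 18.0),  # little IFN-beta induction expected
    "NSs deletion mutant": (24.0, 18.1),  # IFN-beta induction expected
}
mock = (31.0, 18.2)

for name, (ct_ifnb, ct_gapdh) in samples.items():
    fc = fold_change(ct_ifnb, ct_gapdh, *mock)
    print(f"{name}: IFN-beta fold change over mock = {fc:.1f}")
```

A mutant that has lost the IFN-beta suppression function would show a large fold change relative to mock, while parental MP-12 would stay near 1.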
17 Related JoVE Articles!
Using Click Chemistry to Measure the Effect of Viral Infection on Host-Cell RNA Synthesis
Authors: Birte Kalveram, Olga Lihoradova, Sabarish V. Indran, Jennifer A. Head, Tetsuro Ikegami.
Institutions: University of Texas Medical Branch.
Many RNA viruses have evolved the ability to inhibit host cell transcription as a means to circumvent cellular defenses. For the study of these viruses, it is therefore important to have a quick and reliable way of measuring transcriptional activity in infected cells. Traditionally, transcription has been measured either by incorporation of radioactive nucleosides such as 3H-uridine followed by detection via autoradiography or scintillation counting, or by incorporation of halogenated uridine analogs such as 5-bromouridine (BrU) followed by detection via immunostaining. The use of radioactive isotopes, however, requires specialized equipment and is not feasible in a number of laboratory settings, while the detection of BrU can be cumbersome and may suffer from low sensitivity. The recently developed click chemistry, which involves a copper-catalyzed triazole formation from an azide and an alkyne, now provides a rapid and highly sensitive alternative to these two methods. Click chemistry is a two-step process in which nascent RNA is first labeled by incorporation of the uridine analog 5-ethynyluridine (EU), followed by detection of the label with a fluorescent azide. These azides are available with several different fluorophores, allowing a wide range of options for visualization. This protocol describes a method to measure transcriptional suppression in cells infected with the Rift Valley fever virus (RVFV) strain MP-12 using click chemistry. Concurrently, expression of viral proteins in these cells is determined by classical intracellular immunostaining. Steps 1 through 4 detail a method to visualize transcriptional suppression via fluorescence microscopy, while steps 5 through 8 detail a method to quantify transcriptional suppression via flow cytometry. This protocol is easily adaptable for use with other viruses.
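For the flow cytometry branch (steps 5 through 8), the quantification itself reduces to comparing per-cell EU fluorescence between infected and mock-infected populations. A minimal sketch, assuming the per-cell intensities have already been exported and using simulated values in place of real data:

```python
# Minimal sketch: quantifying transcriptional suppression from flow
# cytometry data, assuming per-cell EU (nascent RNA) fluorescence has
# already been exported; all values here are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
mock_eu = rng.lognormal(mean=7.0, sigma=0.4, size=10_000)      # mock-infected cells
infected_eu = rng.lognormal(mean=5.5, sigma=0.5, size=10_000)  # MP-12-infected cells

# Express EU incorporation in infected cells relative to the mock median.
relative_synthesis = np.median(infected_eu) / np.median(mock_eu)
print(f"RNA synthesis in infected cells: {relative_synthesis:.1%} of mock")

# Alternatively, score the fraction of cells falling below a suppression
# threshold, e.g. the 5th percentile of the mock distribution.
threshold = np.percentile(mock_eu, 5)
print(f"Cells below mock 5th percentile: {np.mean(infected_eu < threshold):.1%}")
```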
Immunology, Issue 78, Virology, Chemistry, Infectious Diseases, Biochemistry, Genetics, Molecular Biology, Cellular Biology, Medicine, Biomedical Engineering, Arboviruses, Bunyaviridae, RNA, Nuclear, Transcription, Genetic, Rift Valley fever virus, NSs, transcription, click chemistry, MP-12, fluorescence microscopy, flow cytometry, virus, proteins, immunostaining, assay
Monitoring Activation of the Antiviral Pattern Recognition Receptors RIG-I and PKR by Limited Protease Digestion and Native PAGE
Authors: Michaela Weber, Friedemann Weber.
Institutions: Philipps-University Marburg.
Host defenses to virus infection are dependent on a rapid detection by pattern recognition receptors (PRRs) of the innate immune system. In the cytoplasm, the PRRs RIG-I and PKR bind to specific viral RNA ligands. This first mediates conformational switching and oligomerization, and then enables activation of an antiviral interferon response. While methods to measure antiviral host gene expression are well established, methods to directly monitor the activation states of RIG-I and PKR are less well established. Here, we describe two methods to monitor RIG-I and PKR stimulation upon infection with an established interferon inducer, the Rift Valley fever virus mutant clone 13 (Cl 13). Limited trypsin digestion allows analysis of alterations in protease sensitivity, indicating conformational changes of the PRRs. Trypsin digestion of lysates from mock-infected cells results in a rapid degradation of RIG-I and PKR, whereas Cl 13 infection leads to the emergence of a protease-resistant RIG-I fragment. PKR also shows a virus-induced partial resistance to trypsin digestion, which coincides with its hallmark phosphorylation at Thr 446. The formation of RIG-I and PKR oligomers was validated by native polyacrylamide gel electrophoresis (PAGE). Upon infection, there is a strong accumulation of RIG-I and PKR oligomeric complexes, whereas these proteins remain monomeric in mock-infected samples. Limited protease digestion and native PAGE, both coupled to western blot analysis, allow a sensitive and direct measurement of two distinct steps of RIG-I and PKR activation. These techniques are relatively easy and quick to perform and do not require expensive equipment.
Infectious Diseases, Issue 89, innate immune response, virus infection, pathogen recognition receptor, RIG-I, PKR, IRF-3, limited protease digestion, conformational switch, native PAGE, oligomerization
Viral Concentration Determination Through Plaque Assays: Using Traditional and Novel Overlay Systems
Authors: Alan Baer, Kylene Kehn-Hall.
Institutions: George Mason University.
Plaque assays remain one of the most accurate methods for the direct quantification of infectious virions and antiviral substances through the counting of discrete plaques (infectious units and cellular dead zones) in cell culture. Here we demonstrate how to perform a basic plaque assay, and how differing overlays and techniques can affect plaque formation and production. Typically, solid or semisolid overlay substrates, such as agarose or carboxymethyl cellulose, have been used to restrict viral spread, preventing indiscriminate infection through the liquid growth medium. Immobilized overlays restrict cellular infection to the immediately surrounding monolayer, allowing the formation of discrete countable foci and subsequent plaque formation. To overcome the difficulties inherent in using traditional overlays, a novel liquid overlay utilizing microcrystalline cellulose and carboxymethyl cellulose sodium has been increasingly used as a replacement in the standard plaque assay. Liquid overlay plaque assays can be readily performed in either standard 6 or 12 well plate formats as per traditional techniques and require no special equipment. Due to its liquid state and subsequent ease of application and removal, microculture plate formats may alternatively be utilized as a rapid, accurate and high throughput alternative to larger scale viral titrations. Use of a non-heated viscous liquid polymer streamlines work, conserves reagents and incubator space, and increases operational safety when used in traditional or high containment labs, as no reagent heating or glassware is required. Liquid overlays may also prove more sensitive than traditional overlays for certain heat-labile viruses.
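The titer calculation behind any plaque assay is simple arithmetic: divide the plaque count by the product of the dilution factor and the inoculum volume. A minimal sketch with illustrative numbers:

```python
# Minimal sketch: computing a viral titer (PFU/ml) from plaque counts.
# The dilution and volume below are illustrative, not from this article.

def titer_pfu_per_ml(plaque_count, dilution_factor, inoculum_volume_ml):
    """Titer = plaques / (dilution factor x inoculum volume)."""
    return plaque_count / (dilution_factor * inoculum_volume_ml)

# Example: 42 plaques in a well inoculated with 0.1 ml of a 10^-6 dilution.
print(f"{titer_pfu_per_ml(42, 1e-6, 0.1):.2e} PFU/ml")  # 4.20e+08 PFU/ml
```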
Virology, Issue 93, Plaque Assay, Virology, Viral Quantification, Cellular Overlays, Agarose, Avicel, Crystal Violet Staining, Serial Dilutions, Rift Valley fever virus, Venezuelan Equine Encephalitis, Influenza
Convergent Polishing: A Simple, Rapid, Full Aperture Polishing Process of High Quality Optical Flats & Spheres
Authors: Tayyab Suratwala, Rusty Steele, Michael Feit, Rebecca Dylla-Spears, Richard Desjardin, Dan Mason, Lana Wong, Paul Geraghty, Phil Miller, Nan Shen.
Institutions: Lawrence Livermore National Laboratory.
Convergent Polishing is a novel polishing system and method for finishing flat and spherical glass optics in which a workpiece, independent of its initial shape (i.e., surface figure), converges to the final surface figure with excellent surface quality under a fixed, unchanging set of polishing parameters in a single polishing iteration. In contrast, conventional full aperture polishing methods require multiple, often long, iterative cycles involving polishing, metrology and process changes to achieve the desired surface figure. The Convergent Polishing process is based on the concept that workpiece-lap height mismatch results in a pressure differential that decreases with material removal, causing the workpiece to converge to the shape of the lap. The successful implementation of the Convergent Polishing process is the result of combining a number of technologies to remove all sources of non-uniform spatial material removal (except for workpiece-lap mismatch) for surface figure convergence, and to reduce the number of rogue particles in the system for low scratch densities and low roughness. The Convergent Polishing process has been demonstrated for the fabrication of both flats and spheres of various shapes, sizes, and aspect ratios on various glass materials. The practical impact is that high quality optical components can be fabricated more rapidly, more repeatably, with less metrology, and with less labor, resulting in lower unit costs. In this study, the Convergent Polishing protocol is specifically described for fabricating 26.5 cm square fused silica flats from a fine ground surface to a polished ~λ/2 surface figure after polishing 4 hr per surface on an 81 cm diameter polisher.
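The convergence concept can be made concrete with Preston's equation (removal rate = kp × pressure × velocity), which this abstract does not spell out but which is standard in optical fabrication: if the pressure excess on high spots is proportional to the workpiece-lap height mismatch, the mismatch decays exponentially. A toy simulation, with all constants illustrative:

```python
# Toy sketch of the convergence argument, assuming Preston's equation
# (removal rate = kp * pressure * velocity) and a local pressure excess
# proportional to the workpiece-lap height mismatch. All constants are
# illustrative, not values from this study.
kp = 1e-13          # Preston coefficient, m^2/N (illustrative)
velocity = 0.5      # relative lap velocity, m/s (illustrative)
stiffness = 1e10    # effective contact stiffness, N/m^3 (illustrative)
dt = 60.0           # time step, s

mismatch = 500e-9   # initial workpiece-lap height mismatch: 500 nm
for step in range(240):                      # 4 hr of polishing
    pressure_excess = stiffness * mismatch   # high spots see extra pressure
    mismatch -= kp * pressure_excess * velocity * dt

print(f"Mismatch after 4 hr: {mismatch * 1e9:.1f} nm")  # decays toward zero
```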
Physics, Issue 94, optical fabrication, pad polishing, fused silica glass, optical flats, optical spheres, ceria slurry, pitch button blocking, HF etching, scratches
Quantification of Global Diastolic Function by Kinematic Modeling-based Analysis of Transmitral Flow via the Parametrized Diastolic Filling Formalism
Authors: Sina Mossahebi, Simeng Zhu, Howard Chen, Leonid Shmuylovich, Erina Ghosh, Sándor J. Kovács.
Institutions: Washington University in St. Louis.
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provides a new window for quantitative diastolic function assessment. Echocardiography is the agreed-upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns is extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy ½kxo², maximum A-V pressure gradient kxo, the load independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified.
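Concretely, the PDF model treats the E-wave as the velocity of a damped harmonic oscillator released from rest: per unit mass, x'' + c x' + k x = 0 with x(0) = xo and x'(0) = 0. The sketch below simulates one E-wave contour and evaluates the derived indexes named above; the parameter values and units are illustrative, not fitted clinical data:

```python
# Minimal sketch of the PDF model: the Doppler E-wave contour as the
# velocity of a damped harmonic oscillator, x'' + c x' + k x = 0 (per
# unit mass), released from rest at displacement xo. Parameter values
# are illustrative, not fitted clinical data.
import numpy as np
from scipy.integrate import solve_ivp

k, c, xo = 200.0, 18.0, 10.0   # stiffness (1/s^2), damping (1/s), load (cm)

def oscillator(t, y):
    x, v = y
    return [v, -c * v - k * x]

sol = solve_ivp(oscillator, (0.0, 0.5), [xo, 0.0], dense_output=True)
t = np.linspace(0.0, 0.5, 500)
velocity = np.abs(sol.sol(t)[1])   # simulated E-wave velocity contour, cm/s

print(f"Peak E-wave velocity: {velocity.max():.1f} cm/s")
print(f"Stored elastic energy (1/2)k*xo^2: {0.5 * k * xo**2:.0f}")
print(f"Peak A-V pressure gradient index k*xo: {k * xo:.0f}")
```

In practice the three parameters are obtained by fitting this solution to the clinically recorded E-wave contour rather than chosen a priori.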
Bioengineering, Issue 91, cardiovascular physiology, ventricular mechanics, diastolic function, mathematical modeling, Doppler echocardiography, hemodynamics, biomechanics
Quantitative Visualization and Detection of Skin Cancer Using Dynamic Thermal Imaging
Authors: Cila Herman, Muge Pirtini Cetingul.
Institutions: The Johns Hopkins University.
In 2010, approximately 68,720 melanomas were expected to be diagnosed in the US alone, with around 8,650 resulting in death1. To date, the only effective treatment for melanoma remains surgical excision; therefore, the key to extended survival is early detection2,3. Considering the large numbers of patients diagnosed every year and the limitations in accessing specialized care quickly, the development of objective in vivo diagnostic instruments to aid the diagnosis is essential. New techniques to detect skin cancer, especially non-invasive diagnostic tools, are being explored in numerous laboratories. Along with the surgical methods, techniques such as digital photography, dermoscopy, multispectral imaging systems (MelaFind), laser-based systems (confocal scanning laser microscopy, laser Doppler perfusion imaging, optical coherence tomography), ultrasound, and magnetic resonance imaging are being tested. Each technique offers unique advantages and disadvantages, many of which pose a compromise between effectiveness and accuracy versus ease of use and cost considerations. Details about these techniques and comparisons are available in the literature4. Infrared (IR) imaging has been shown to be a useful method to diagnose the signs of certain diseases by measuring the local skin temperature. There is a large body of evidence showing that disease or deviation from normal functioning is accompanied by changes in the temperature of the body, which in turn affect the temperature of the skin5,6. Accurate data about the temperature of the human body and skin can provide a wealth of information on the processes responsible for heat generation and thermoregulation, in particular the deviation from normal conditions, often caused by disease. However, IR imaging has not been widely recognized in medicine due to the premature use of the technology7,8 several decades ago, when temperature measurement accuracy and spatial resolution were inadequate and sophisticated image processing tools were unavailable. This situation changed dramatically in the late 1990s and 2000s. Advances in IR instrumentation, the implementation of digital image processing algorithms, and dynamic IR imaging, which enables scientists to analyze not only the spatial but also the temporal thermal behavior of the skin9, allowed breakthroughs in the field. In our research, we explore the feasibility of IR imaging, combined with theoretical and experimental studies, as a cost-effective, non-invasive, in vivo optical measurement technique for tumor detection, with emphasis on the screening and early detection of melanoma10-13. In this study, we show data obtained in a patient study in which patients who possess a pigmented lesion with a clinical indication for biopsy are selected for imaging. We compared the difference in thermal responses between healthy and malignant tissue and compared our data with biopsy results. We concluded that the increased metabolic activity of the melanoma lesion can be detected by dynamic infrared imaging.
Medicine, Issue 51, Infrared imaging, quantitative thermal analysis, image processing, skin cancer, melanoma, transient thermal response, skin thermal models, skin phantom experiment, patient study
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Authors: F. Mark Dunning, Timothy M. Piazza, Füsûn N. Zeytin, Ward C. Tucker.
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing, while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: the first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation, with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes, and critical parameters for assay success are discussed.
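The final quantification step of such an assay typically reads an unknown off a standard curve of reporter signal versus known toxin amounts. A minimal sketch, assuming a log-log-linear standard curve and using entirely illustrative numbers:

```python
# Minimal sketch: reading an unknown BoNT amount off a standard curve,
# assuming the fluorogenic readout has already been collected and
# background-subtracted. All numbers are illustrative, not assay data.
import numpy as np

# Standard curve: known toxin amounts (pg) vs. reporter signal.
standards_pg = np.array([1, 3, 10, 30, 100, 300])
signal = np.array([12, 35, 110, 320, 980, 2700])

# Fit a line in log-log space (assumed near-linear on this scale).
slope, intercept = np.polyfit(np.log10(standards_pg), np.log10(signal), 1)

def toxin_pg(sample_signal):
    """Interpolate toxin amount from the fitted standard curve."""
    return 10 ** ((np.log10(sample_signal) - intercept) / slope)

print(f"Sample with signal 500 -> {toxin_pg(500):.0f} pg BoNT")
```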
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Barnes Maze Testing Strategies with Small and Large Rodent Models
Authors: Cheryl S. Rosenfeld, Sherry A. Ferguson.
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents are often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). The type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or drug/toxicant exposure.
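Several of the tracking-derived endpoints listed above are straightforward to compute once x,y coordinates are available. A minimal sketch, with a hypothetical frame rate and a tiny set of coordinates standing in for real tracker output:

```python
# Minimal sketch: deriving common Barnes maze endpoints (path length,
# latency, mean velocity) from tracked x,y coordinates. The frame rate
# and coordinates are illustrative stand-ins for real tracker output.
import numpy as np

fps = 25.0                                   # tracking frame rate, Hz
xy = np.array([[0.0, 0.0], [2.1, 0.5], [4.0, 1.8],
               [5.2, 3.9], [5.9, 5.8]])      # cm, one row per frame

steps = np.diff(xy, axis=0)                          # per-frame displacements
path_length = np.linalg.norm(steps, axis=1).sum()    # total distance, cm
latency = (len(xy) - 1) / fps                        # time to escape, s
mean_velocity = path_length / latency                # cm/s

print(f"Path length: {path_length:.1f} cm, latency: {latency:.2f} s, "
      f"velocity: {mean_velocity:.1f} cm/s")
```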
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
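The keyword list mentions minimum-norm estimation, the standard linear inverse used for this kind of source reconstruction: sources j are recovered from channel data y and a lead field L as j = L^T (L L^T + λ² I)^-1 y. A minimal numpy sketch, with a random matrix standing in for a lead field computed from an (age-appropriate) head model:

```python
# Minimal sketch of the regularized minimum-norm inverse used in EEG
# source analysis: j = L^T (L L^T + lambda^2 I)^-1 y. The lead field
# here is random, standing in for one computed from a head model.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 128, 5000                  # high-density EEG, cortical grid
L = rng.standard_normal((n_channels, n_sources))   # placeholder lead field
y = rng.standard_normal(n_channels)                # one time sample of EEG data
lam = 1.0                                          # regularization parameter

gram = L @ L.T + lam**2 * np.eye(n_channels)
j_hat = L.T @ np.linalg.solve(gram, y)             # minimum-norm source estimate

print(f"Estimated source amplitudes: {j_hat.shape}")
```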
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
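The release-rate arithmetic behind a core incubation is the change in water-column P mass per unit sediment area per unit time. A minimal sketch with illustrative values (core dimensions, concentrations, and incubation length are assumptions, not data from this study):

```python
# Minimal sketch: computing an areal P release rate from a core
# incubation, assuming total phosphorus is measured in the overlying
# water over time. All values are illustrative, not study data.

water_volume_l = 1.0      # overlying water volume in the core tube, L
core_area_m2 = 0.0045     # cross-sectional area of the core tube, m^2
tp_start_ug_l = 20.0      # total P at start of incubation, ug/L
tp_end_ug_l = 65.0        # total P at end of incubation, ug/L
days = 5.0                # incubation length

# Flux = change in P mass in the water column per sediment area per day.
flux_mg_m2_day = ((tp_end_ug_l - tp_start_ug_l) * water_volume_l / 1000.0
                  / core_area_m2 / days)
print(f"Sediment P release rate: {flux_mg_m2_day:.1f} mg P m^-2 day^-1")
```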
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Determination of Tolerable Fatty Acids and Cholera Toxin Concentrations Using Human Intestinal Epithelial Cells and BALB/c Mouse Macrophages
Authors: Farshad Tamari, Joanna Tychowski, Laura Lorentzen.
Institutions: Kingsborough Community College, University of Texas at Austin, Kean University.
The positive role of fatty acids in the prevention and alleviation of non-human and human diseases has been, and continues to be, extensively documented. These roles include influences on infectious and non-infectious diseases, including the prevention of inflammation, as well as mucosal immunity to infectious diseases. Cholera is an acute intestinal illness caused by the bacterium Vibrio cholerae. It occurs in developing nations and, if left untreated, can result in death. While vaccines for cholera exist, they are not always effective and other preventative methods are needed. We set out to determine tolerable concentrations of three fatty acids (oleic, linoleic and linolenic acids) and of cholera toxin using human intestinal epithelial cells and BALB/c mouse macrophages, respectively. We solubilized the above fatty acids and used cell proliferation assays to determine the concentration ranges and specific concentrations of the fatty acids that are not detrimental to human intestinal epithelial cell viability. We solubilized cholera toxin and used it in an assay to determine the concentration ranges and specific concentrations of cholera toxin that do not statistically decrease cell viability in BALB/c macrophages. We found the optimum fatty acid concentrations to be between 1-5 ng/μl, and that for cholera toxin to be < 30 ng per treatment. These data may aid future studies that aim to find a protective mucosal role for fatty acids in the prevention or alleviation of cholera infections.
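The cell proliferation (MTT) readout mentioned in the keywords reduces to normalizing background-subtracted absorbance of treated wells to untreated controls. A minimal sketch with illustrative absorbance values:

```python
# Minimal sketch: per-condition viability from an MTT proliferation
# assay, assuming absorbance readings with blank and untreated-control
# wells. All values are illustrative, not data from this study.
import numpy as np

blank = 0.05                                  # medium-only absorbance
control = np.array([1.10, 1.05, 1.12])        # untreated cells, replicates
treated = np.array([0.98, 1.02, 0.95])        # e.g. a fatty acid treatment

viability = (treated.mean() - blank) / (control.mean() - blank) * 100
print(f"Viability: {viability:.0f}% of untreated control")
```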
Infection, Issue 75, Medicine, Immunology, Infectious Diseases, Microbiology, Molecular Biology, Cellular Biology, Biochemistry, Bioengineering, Bacterial Infections and Mycoses, Mucosal immunity, oleic acid, linoleic acid, linolenic acid, cholera toxin, cholera, fatty acids, tissue culture, MTT assay, mouse, animal model
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and cooperation from the participants greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
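As one concrete example of activity space characterization, an activity radius can be computed from a cleaned track; the sketch below takes it as the mean distance of track points from their centroid (one common definition; the article's exact definition may differ), using illustrative projected coordinates:

```python
# Minimal sketch: one activity-space characteristic (activity radius)
# from a cleaned GPS trajectory, here taken as the mean distance of all
# points from their centroid. Coordinates are illustrative and assumed
# already projected to meters.
import numpy as np

track = np.array([[0.0, 0.0], [120.0, 40.0], [300.0, 90.0],
                  [310.0, 85.0], [150.0, 60.0], [10.0, 5.0]])  # x, y in m

centroid = track.mean(axis=0)
distances = np.linalg.norm(track - centroid, axis=1)
activity_radius = distances.mean()

print(f"Centroid: {centroid}, activity radius: {activity_radius:.0f} m")
```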
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Preventing the Spread of Malaria and Dengue Fever Using Genetically Modified Mosquitoes
Authors: Anthony A. James.
Institutions: University of California, Irvine (UCI).
In this candid interview, Anthony A. James explains how mosquito genetics can be exploited to control malaria and dengue transmission. Population replacement strategy, the idea that transgenic mosquitoes can be released into the wild to control disease transmission, is introduced, as well as the concept of genetic drive and the design criterion for an effective genetic drive system. The ethical considerations of releasing genetically-modified organisms into the wild are also discussed.
Cellular Biology, Issue 5, mosquito, malaria, dengue fever, genetics, infectious disease, Translational Research
Biocontained Carcass Composting for Control of Infectious Disease Outbreak in Livestock
Authors: Tim Reuter, Weiping Xu, Trevor W. Alexander, Brandon H. Gilroyed, G. Douglas Inglis, Francis J. Larney, Kim Stanford, Tim A. McAllister.
Institutions: Lethbridge Research Centre, Dalian University of Technology, Alberta Agriculture and Rural Development.
Intensive livestock production systems are particularly vulnerable to natural or intentional (bioterrorist) infectious disease outbreaks. Large numbers of animals housed within a confined area enable rapid dissemination of most infectious agents throughout a herd. Rapid containment is key to controlling any infectious disease outbreak, thus depopulation is often undertaken to prevent spread of a pathogen to the larger livestock population. In that circumstance, a large number of livestock carcasses and contaminated manure are generated that require rapid disposal. Composting lends itself as a rapid-response disposal method for infected carcasses as well as manure and soil that may harbor infectious agents. We designed a bio-contained mortality composting procedure and tested its efficacy for bovine tissue degradation and microbial deactivation. We used materials available on-farm or purchasable from local farm supply stores so that the system can be implemented at the site of a disease outbreak. In this study, temperatures exceeded 55°C for more than one month and infectious agents implanted in beef cattle carcasses and manure were inactivated within 14 days of composting. After 147 days, carcasses were almost completely degraded. The few long bones remaining were further degraded with an additional composting cycle in open windrows, and the final mature compost was suitable for land application. Duplicate compost structures (final dimensions 25 m x 5 m x 2.4 m; L x W x H) were constructed using barley straw bales and lined with heavy black silage plastic sheeting. Each was loaded with loose straw, carcasses and manure totaling ~95,000 kg. A 40-cm base layer of loose barley straw was placed in each bunker, onto which were placed 16 feedlot cattle mortalities (average weight 343 kg) aligned transversely at a spacing of approximately 0.5 m. For passive aeration, lengths of flexible, perforated plastic drainage tubing (15 cm diameter) were placed between adjacent carcasses, extending vertically along both inside walls, and with the ends passed through the plastic to the exterior. The carcasses were overlaid with moist aerated feedlot manure (~1.6 m deep) to the top of the bunker. Plastic was folded over the top and sealed with tape to establish a containment barrier, and eight aeration vents (50 x 50 x 15 cm) were placed on the top of each structure to promote passive aeration. After 147 days, losses of volume and mass of composted materials averaged 39.8% and 23.7%, respectively, in each structure.
JoVE Infectious Diseases, Issue 39, compost, livestock, infectious disease, biocontainment
Ole Isacson: Development of New Therapies for Parkinson's Disease
Authors: Ole Isacson.
Institutions: Harvard Medical School.
Medicine, Issue 3, Parkinson's disease, Neuroscience, dopamine, neuron, L-DOPA, stem cell, transplantation
Population Replacement Strategies for Controlling Vector Populations and the Use of Wolbachia pipientis for Genetic Drive
Authors: Jason Rasgon.
Institutions: Johns Hopkins University.
In this video, Jason Rasgon discusses population replacement strategies to control vector-borne diseases such as malaria and dengue. "Population replacement" is the replacement of wild vector populations (that are competent to transmit pathogens) with those that are not competent to transmit pathogens. There are several theoretical strategies to accomplish this. One is to exploit the maternally-inherited symbiotic bacterium Wolbachia pipientis. Wolbachia is a widespread reproductive parasite that spreads in a selfish manner at the expense of its host's fitness. Jason Rasgon discusses, in detail, the basic biology of this bacterial symbiont and various ways to use it for the control of vector-borne diseases.
Cellular Biology, Issue 5, mosquito, malaria, genetics, infectious disease, Wolbachia

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library happens to contain no content relevant to the topic of a given abstract. In these cases, our algorithms try their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
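As a toy illustration of this kind of abstract-to-video matching (JoVE's actual algorithm is not described on this page), one can score text similarity with TF-IDF vectors and cosine similarity:

```python
# Toy illustration of matching an abstract to video descriptions by
# text similarity; this is not JoVE's actual algorithm. Uses TF-IDF
# vectors and cosine similarity from scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_descriptions = [
    "Plaque assay for viral quantification using agarose overlays",
    "Barnes maze testing of spatial learning and memory in rodents",
    "Reverse genetics rescue of Rift Valley fever virus vaccine strains",
]
abstract = "Spatial and temporal pattern of Rift Valley fever outbreaks"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(video_descriptions + [abstract])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank videos by similarity to the abstract, highest first.
for description, score in sorted(zip(video_descriptions, scores),
                                 key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {description}")
```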