JoVE Visualize

PubMed Article
Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.
PLoS ONE
PUBLISHED: 01-01-2013
Recovery rate is essential to the estimation of a portfolio's loss and economic capital, and neglecting the randomness of its distribution may underestimate risk. This study introduces two models of the distribution, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and we compare the simulation results from the histogram, Beta distribution estimation, and kernel density estimation, concluding that the Gaussian kernel density estimate better imitates the distribution of bimodal or multimodal samples of corporate loan and bond recovery rates. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it can fit the curve of recovery rates of loans and bonds. Using the kernel density estimate to precisely delineate the bimodal recovery rates of bonds is therefore optimal in credit risk management.
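As a hedged illustration of the comparison the abstract describes (a minimal sketch with synthetic data, not the authors' code), a unimodal Beta fit can be contrasted with a Gaussian kernel density estimate on bimodal recovery rates:

```python
# Minimal sketch: a single Beta distribution cannot follow a bimodal
# recovery-rate sample, while a Gaussian KDE can. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical bimodal sample: many low recoveries plus many near-full ones.
recoveries = np.concatenate([rng.beta(2, 8, 600), rng.beta(9, 2, 400)])

# Unimodal Beta fit with the support fixed to [0, 1].
a, b, _, _ = stats.beta.fit(recoveries, floc=0, fscale=1)

# Gaussian kernel density estimate of the same sample.
kde = stats.gaussian_kde(recoveries)

x = np.linspace(0.001, 0.999, 200)
beta_pdf = stats.beta.pdf(x, a, b)
kde_pdf = kde(x)
print(f"Beta fit: a={a:.2f}, b={b:.2f}")
# The KDE follows both modes; the single-hump Beta density cannot.
print("max |KDE - Beta| over (0,1):", np.abs(kde_pdf - beta_pdf).max().round(2))
```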
Authors: Fijoy Vadakkumpadan, Hermenegild Arevalo, Natalia A. Trayanova.
Published: 01-08-2013
ABSTRACT
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
21 Related JoVE Articles!
Scale-Up of Mammalian Cell Culture using a New Multilayered Flask
Authors: Elizabeth J. Abraham, Katie A. Slater, Suparna Sanyal, Ken Linehan, Paula M. Flaherty, Susan Qian.
Institutions: BD Biosciences.
A growing number of cell-based applications require large numbers of cells. Single-layer T-flasks, which are adequate for small-scale expansion, may become cumbersome, laborious, and time-consuming when large numbers of cells are required. To address this need, the performance of a new multi-layered cell culture vessel that facilitates easy scale-up of cells from single-layer T-flasks will be discussed. The flasks tested are available in 3- and 5-layer formats and enable culture and complete recovery of three and five times the number of cells, respectively, compared to T-175 flasks. A key feature of the BD Multi-Flask is a mix/equilibration port that allows rapid in-vessel mixing as well as uniform distribution of cells and reagents within and between layers of each vessel, and consistently produces cells that can be cultured in an environment congruent with T-175 flasks. The design of these Multi-Flasks also allows convenient pipette access for adding reagents and cells directly into the flasks, enables efficient recovery of valuable cells and reagents, and reduces the risk of contamination due to pouring. For applications where pouring is preferred over pipetting, the design allows for minimal residual liquid retention so as to reduce wastage of valuable cells and reagents.
Basic Protocols, Issue 58, Multi-Flask, multi-layered, stackable, scale-up, cell culture, flasks
In situ Quantification of Pancreatic Beta-cell Mass in Mice
Authors: Abraham Kim, German Kilimnik, Manami Hara.
Institutions: University of Chicago.
Tracing changes of specific cell populations in health and disease is an important goal of biomedical research. The process of monitoring pancreatic beta-cell proliferation and islet growth is particularly challenging. We have developed a method to capture the distribution of beta-cells in the intact pancreas of transgenic mice with fluorescently tagged beta-cells, using a macro written for ImageJ (rsb.info.nih.gov/ij/). Following pancreatic dissection and tissue clearing, the entire pancreas is captured as a virtual slice, after which the GFP-tagged beta-cells are examined. The analysis includes the quantification of total beta-cell area, islet number, and size distribution, with reference to specific parameters and locations for each islet and for small clusters of beta-cells. The entire distribution of islets can be plotted in three dimensions, and the information from the distribution on the size and shape of each islet allows a quantitative and qualitative comparison of changes in overall beta-cell area at a glance.
Cellular Biology, Issue 40, beta-cells, islets, mouse, pancreas
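The quantification steps the abstract names (thresholding the GFP channel, counting islets, measuring areas and positions) can also be sketched outside ImageJ. Below is a minimal, hypothetical Python version using scipy.ndimage; it illustrates the idea and is not the authors' macro:

```python
# Minimal sketch (not the published ImageJ macro): label GFP-positive
# beta-cell regions in a 2D image and summarize their sizes and positions.
import numpy as np
from scipy import ndimage

def islet_size_distribution(gfp_image, threshold):
    """Return per-islet pixel areas and centroids from a GFP channel."""
    mask = gfp_image > threshold              # crude intensity threshold
    labels, n_islets = ndimage.label(mask)    # connected components
    idx = range(1, n_islets + 1)
    areas = ndimage.sum_labels(mask, labels, index=idx)
    centroids = ndimage.center_of_mass(mask, labels, idx)
    return np.asarray(areas), centroids

# Hypothetical usage with a synthetic image:
img = np.zeros((100, 100))
img[10:20, 10:20] = 1.0   # a "large islet"
img[50:53, 50:53] = 1.0   # a "small beta-cell cluster"
areas, centroids = islet_size_distribution(img, threshold=0.5)
print("Total beta-cell area:", areas.sum(), "px; islet count:", len(areas))
```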
Experimental Manipulation of Body Size to Estimate Morphological Scaling Relationships in Drosophila
Authors: R. Craig Stillwell, Ian Dworkin, Alexander W. Shingleton, W. Anthony Frankino.
Institutions: University of Houston, Michigan State University.
The scaling of body parts is a central feature of animal morphology [1-7]. Within species, morphological traits need to be correctly proportioned to the body for the organism to function; larger individuals typically have larger body parts and smaller individuals generally have smaller body parts, such that overall body shape is maintained across a range of adult body sizes. The requirement for correct proportions means that individuals within species usually exhibit low variation in relative trait size. In contrast, relative trait size can vary dramatically among species and is a primary mechanism by which morphological diversity is produced. Over a century of comparative work has established these intra- and interspecific patterns [3,4]. Perhaps the most widely used approach to describe this variation is to calculate the scaling relationship between the size of two morphological traits using the allometric equation y = bx^α, where x and y are the sizes of the two traits, such as organ and body size [8,9]. This equation describes the within-group (e.g., species, population) scaling relationship between two traits as both vary in size. Log-transformation of this equation produces a simple linear equation, log(y) = log(b) + α·log(x), and log-log plots of the sizes of different traits among individuals of the same species typically reveal linear scaling with an intercept of log(b) and a slope of α, called the 'allometric coefficient' [9,10]. Morphological variation among groups is described by differences in scaling relationship intercepts or slopes for a given trait pair. Consequently, variation in the parameters of the allometric equation (b and α) elegantly describes the shape variation captured in the relationship between organ and body size within and among biological groups (see [11,12]). Not all traits scale linearly with each other or with body size (e.g., [13,14]); hence, morphological scaling relationships are most informative when the data are taken from the full range of trait sizes. Here we describe how simple experimental manipulation of diet can be used to produce the full range of body size in insects. This permits an estimation of the full scaling relationship for any given pair of traits, allowing a complete description of how shape covaries with size and a robust comparison of scaling relationship parameters among biological groups. Although we focus on Drosophila, our methodology should be applicable to nearly any fully metamorphic insect.
Developmental Biology, Issue 56, Drosophila, allometry, morphology, body size, scaling, insect
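Since the abstract leans on the allometric equation, a short worked example may help: on log-log axes the model y = bx^α becomes linear, so α and b fall out of an ordinary least-squares fit. The numbers below are synthetic, not data from the study:

```python
# Minimal sketch: estimate the allometric coefficient alpha and the
# intercept log(b) by linear regression on log-transformed trait sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
body_size = rng.uniform(0.5, 2.0, 100)                  # hypothetical trait x
alpha_true, b_true = 1.2, 0.8
organ_size = b_true * body_size**alpha_true * rng.lognormal(0, 0.05, 100)

# log(y) = log(b) + alpha * log(x)
fit = stats.linregress(np.log(body_size), np.log(organ_size))
print(f"alpha ~ {fit.slope:.2f}, b ~ {np.exp(fit.intercept):.2f}")
```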
Magnetic Resonance Imaging Quantification of Pulmonary Perfusion using Calibrated Arterial Spin Labeling
Authors: Tatsuya J. Arai, G. Kim Prisk, Sebastiaan Holverda, Rui Carlos Sá, Rebecca J. Theilmann, A. Cortney Henderson, Matthew V. Cronin, Richard B. Buxton, Susan R. Hopkins.
Institutions: University of California San Diego - UCSD.
This article demonstrates an MR imaging method to measure the spatial distribution of pulmonary blood flow in healthy subjects during normoxia (inspired O2 fraction, FIO2 = 0.21), hypoxia (FIO2 = 0.125), and hyperoxia (FIO2 = 1.00). In addition, the physiological responses of the subject are monitored in the MR scan environment. MR images were obtained on a 1.5 T GE MRI scanner during a breath hold from a sagittal slice in the right lung at functional residual capacity. An arterial spin labeling sequence (ASL-FAIRER) was used to measure the spatial distribution of pulmonary blood flow [1,2], and a multi-echo fast gradient echo (mGRE) sequence [3] was used to quantify the regional proton (i.e. H2O) density, allowing the quantification of density-normalized perfusion for each voxel (milliliters of blood per minute per gram of lung tissue). With a pneumatic switching valve and a facemask equipped with a 2-way non-rebreathing valve, different oxygen concentrations were introduced to the subject in the MR scanner through the inspired gas tubing. A metabolic cart collected expiratory gas via expiratory tubing. Mixed expiratory O2 and CO2 concentrations, oxygen consumption, carbon dioxide production, respiratory exchange ratio, respiratory frequency, and tidal volume were measured. Heart rate and oxygen saturation were monitored by pulse oximetry. Data obtained from a normal subject showed that, as expected, heart rate was higher during hypoxia (60 bpm) than during normoxia (51 bpm) or hyperoxia (50 bpm), and arterial oxygen saturation (SpO2) was reduced during hypoxia to 86%. Mean ventilation was 8.31 L/min BTPS during hypoxia, 7.04 L/min during normoxia, and 6.64 L/min during hyperoxia. Tidal volume was 0.76 L during hypoxia, 0.69 L during normoxia, and 0.67 L during hyperoxia. Representative quantified ASL data showed a mean density-normalized perfusion of 8.86 ml/min/g during hypoxia, 8.26 ml/min/g during normoxia, and 8.46 ml/min/g during hyperoxia. In this subject, the relative dispersion [4], an index of global heterogeneity, was increased in hypoxia (1.07 during hypoxia vs. 0.85 during normoxia and 0.87 during hyperoxia), while the fractal dimension (Ds), another index of heterogeneity reflecting vascular branching structure, was unchanged (1.24 during hypoxia, 1.26 during normoxia, and 1.26 during hyperoxia). Overview: This protocol demonstrates the acquisition of data to measure the distribution of pulmonary perfusion noninvasively under conditions of normoxia, hypoxia, and hyperoxia using a magnetic resonance imaging technique known as arterial spin labeling (ASL). Rationale: Measurement of pulmonary blood flow and lung proton density by MR offers high-spatial-resolution images that can be quantified, and the ability to perform repeated measurements under several different physiological conditions. In human studies, PET, SPECT, and CT are commonly used as alternative techniques. However, these techniques involve exposure to ionizing radiation and thus are not suitable for repeated measurements in human subjects.
Medicine, Issue 51, arterial spin labeling, lung proton density, functional lung imaging, hypoxic pulmonary vasoconstriction, oxygen consumption, ventilation, magnetic resonance imaging
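As a small illustration of the two quantities reported above, the sketch below computes density-normalized perfusion and its relative dispersion (standard deviation over mean) from hypothetical voxel values; it is not the published analysis pipeline:

```python
# Minimal sketch: density-normalized perfusion (DNP) per voxel and the
# relative dispersion across voxels. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
perfusion = rng.gamma(5.0, 2.0, 1000)     # hypothetical ml blood/min per voxel
density = rng.normal(0.25, 0.03, 1000)    # hypothetical g lung tissue per voxel

dnp = perfusion / density                 # ml/min/g for each voxel
rd = dnp.std() / dnp.mean()               # relative dispersion = SD / mean
print(f"mean DNP = {dnp.mean():.2f} ml/min/g, relative dispersion = {rd:.2f}")
```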
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein [1]. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment [2]. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics [3-5] to more rigorous statistical models, e.g. Hidden Markov Models (HMMs) [6-8]. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorizing the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a recent advance in HMMs [9], which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which in turn define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm had reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor [10,11] and epigenetic data [12] to illustrate its usefulness.
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
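To make the change-point idea concrete, the sketch below finds a single least-squares change point in a synthetic read-density track. It is a generic illustration, not the authors' Bayesian BCP model, which handles many change points via an infinite-hidden-state formulation:

```python
# Minimal sketch: locate the index that best splits a read-density track
# into two constant segments (one change point, least-squares criterion).
import numpy as np

def best_split(y):
    """Return the split index minimizing the two-segment squared error."""
    n = len(y)
    best_i, best_cost = None, np.inf
    for i in range(2, n - 2):
        cost = (((y[:i] - y[:i].mean())**2).sum()
                + ((y[i:] - y[i:].mean())**2).sum())
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

rng = np.random.default_rng(3)
# Hypothetical density: low background, then a broad enriched island.
track = np.concatenate([rng.poisson(2, 300), rng.poisson(10, 200)]).astype(float)
print("estimated change point near index:", best_split(track))  # ~300
```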
Computer-assisted Large-scale Visualization and Quantification of Pancreatic Islet Mass, Size Distribution and Architecture
Authors: Abraham Kim, German Kilimnik, Charles Guo, Joshua Sung, Junghyo Jo, Vipul Periwal, Piotr Witkowski, Philip Dilorio, Manami Hara.
Institutions: University of Chicago, National Institutes of Health, University of Chicago, University of Massachusetts.
The pancreatic islet is a unique micro-organ composed of several hormone-secreting endocrine cell types, such as beta-cells (insulin), alpha-cells (glucagon), and delta-cells (somatostatin), that is embedded in the exocrine tissues and comprises 1-2% of the entire pancreas. There is a close correlation between body and pancreas weight. Total beta-cell mass also increases proportionately to compensate for the demand for insulin in the body. What escapes this proportionate expansion is the size distribution of islets. Large animals such as humans share similar islet size distributions with mice, suggesting that this micro-organ has a certain size limit beyond which it is no longer functional. The inability of large animal pancreata to generate proportionately larger islets is compensated for by an increase in the number of islets and by an increase in the proportion of larger islets in their overall islet size distribution. Furthermore, islets exhibit a striking plasticity in cellular composition and architecture among different species and also within the same species under various pathophysiological conditions. In the present study, we describe novel approaches for the analysis of biological image data in order to facilitate the automation of analytic processes, which allows for the analysis of large and heterogeneous data collections in the study of such dynamic biological processes and complex structures. Such studies have been hampered by the technical difficulties of unbiased sampling and of generating large-scale data sets that precisely capture the complexity of islet biology. Here we show methods to collect unbiased "representative" data given limited sample availability (or to minimize sample collection) under standard experimental settings, and to precisely analyze the complex three-dimensional structure of the islet. Computer-assisted automation allows for the collection and analysis of large-scale data sets and also assures unbiased interpretation of the data. Furthermore, the precise quantification of islet size distribution and spatial coordinates (i.e. X, Y, Z positions) not only leads to an accurate visualization of pancreatic islet structure and composition, but also allows us to identify patterns during development and adaptation to altering conditions through mathematical modeling. The methods developed in this study are applicable to studies of many other systems and organisms as well.
Cellular Biology, Issue 49, beta-cells, islets, large-scale analysis, pancreas
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
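The oscillation technique rests on the compound-pendulum relation T = 2π√(I/(mgd)): swinging the prosthesis about a known pivot and timing the period yields its moment of inertia, which the parallel-axis theorem then refers to the center of mass. A minimal worked example with hypothetical numbers (not values from the study):

```python
# Minimal worked example of the oscillation technique. All inputs are
# hypothetical; d would come from a reaction-board center-of-mass measurement.
import math

m = 1.8        # prosthesis mass, kg
d = 0.21       # pivot-to-center-of-mass distance, m
T = 0.95       # measured small-amplitude oscillation period, s
g = 9.81       # gravitational acceleration, m/s^2

# Compound-pendulum relation: T = 2*pi*sqrt(I_pivot / (m*g*d))
I_pivot = m * g * d * T**2 / (4 * math.pi**2)
# Parallel-axis theorem transfers the result to the center of mass.
I_cm = I_pivot - m * d**2
print(f"I about pivot = {I_pivot:.4f} kg m^2, I about CM = {I_cm:.4f} kg m^2")
```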
RNA-seq Analysis of Transcriptomes in Thrombin-treated and Control Human Pulmonary Microvascular Endothelial Cells
Authors: Dilyara Cheranova, Margaret Gibson, Suman Chaudhary, Li Qin Zhang, Daniel P. Heruth, Dmitry N. Grigoryev, Shui Qing Ye.
Institutions: Children's Mercy Hospital and Clinics, School of Medicine, University of Missouri-Kansas City.
The characterization of gene expression in cells via measurement of mRNA levels is a useful tool in determining how the transcriptional machinery of the cell is affected by external signals (e.g. drug treatment), or how cells differ between a healthy state and a diseased state. With the advent and continuous refinement of next-generation DNA sequencing technology, RNA-sequencing (RNA-seq) has become an increasingly popular method of transcriptome analysis to catalog all species of transcripts, to determine the transcriptional structure of all expressed genes, and to quantify the changing expression levels of the total set of transcripts in a given cell, tissue, or organism [1,2]. RNA-seq is gradually replacing DNA microarrays as a preferred method for transcriptome analysis because it has the advantages of profiling a complete transcriptome, providing digital data (the copy number of any transcript), and not relying on any known genomic sequence [3]. Here, we present a complete and detailed protocol to apply RNA-seq to profile transcriptomes in human pulmonary microvascular endothelial cells with or without thrombin treatment. This protocol is based on our recently published study entitled "RNA-seq Reveals Novel Transcriptome of Genes and Their Isoforms in Human Pulmonary Microvascular Endothelial Cells Treated with Thrombin" [4], in which we successfully performed the first complete transcriptome analysis of human pulmonary microvascular endothelial cells treated with thrombin using RNA-seq. The study yielded unprecedented resources for further experimentation to gain insights into the molecular mechanisms underlying thrombin-mediated endothelial dysfunction in the pathogenesis of inflammatory conditions, cancer, diabetes, and coronary heart disease, and provides potential new leads for therapeutic targets for those diseases. The descriptive text of this protocol is divided into four parts. The first part describes the treatment of human pulmonary microvascular endothelial cells with thrombin and RNA isolation, quality analysis, and quantification. The second part describes library construction and sequencing. The third part describes the data analysis. The fourth part describes an RT-PCR validation assay. Representative results of several key steps are displayed. Useful tips and precautions to boost success in key steps are provided in the Discussion section. Although this protocol uses human pulmonary microvascular endothelial cells treated with thrombin, it can be generalized to profile transcriptomes in both mammalian and non-mammalian cells and in tissues treated with different stimuli or inhibitors, or to compare transcriptomes in cells or tissues between a healthy state and a disease state.
Genetics, Issue 72, Molecular Biology, Immunology, Medicine, Genomics, Proteins, RNA-seq, Next Generation DNA Sequencing, Transcriptome, Transcription, Thrombin, Endothelial cells, high-throughput, DNA, genomic DNA, RT-PCR, PCR
ScanLag: High-throughput Quantification of Colony Growth and Lag Time
Authors: Irit Levin-Reisman, Ofer Fridman, Nathalie Q. Balaban.
Institutions: The Hebrew University of Jerusalem.
Growth dynamics are fundamental characteristics of microorganisms, and quantifying growth precisely is an important goal in microbiology. Growth dynamics are affected both by the doubling time of the microorganism and by any delay in growth upon transfer from one condition to another, the lag. The ScanLag method enables the characterization of these two independent properties at the level of colonies, each originating from a single cell, generating a two-dimensional distribution of the lag time and of the growth time. In ScanLag, measurement of the time it takes for colonies on conventional nutrient agar plates to be detected is automated on an array of commercial scanners controlled by an in-house application. Petri dishes are placed on the scanners, and the application acquires images periodically. Automated analysis of colony growth is then performed by an application that returns the appearance time and growth rate of each colony. Other parameters, such as the shape, texture, and color of the colony, can be extracted for multidimensional mapping of sub-populations of cells. Finally, the method enables the retrieval of rare variants with specific growth phenotypes for further characterization. The technique could be applied in bacteriology to identify the long lag times that can cause persistence to antibiotics, and more generally as a low-cost technique for phenotypic screens.
Immunology, Issue 89, lag, growth rate, growth delay, single cell, scanners, image analysis, persistence, resistance, rare mutants, phenotypic screens, phenomics
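A minimal sketch of the per-colony readout (hypothetical thresholds and data, not the published ScanLag code): appearance time is the first scan at which a colony's area exceeds a detection threshold, and growth rate is the slope of log(area) over the subsequent scans:

```python
# Minimal sketch: extract appearance time and growth rate for one colony
# from its area measured in periodic scanner images. Synthetic data only.
import numpy as np

def appearance_and_growth(times_h, areas_px, detect_px=30):
    """First detection time and exponential growth rate of log(area)."""
    detected = areas_px >= detect_px
    if not detected.any():
        return None, None
    i0 = detected.argmax()                  # index of first detection
    t, a = times_h[i0:], areas_px[i0:]
    rate = np.polyfit(t, np.log(a), 1)[0]   # slope of log-area, 1/h
    return times_h[i0], rate

times = np.arange(0, 24, 0.5)               # a scan every 30 min
areas = np.where(times < 8, 0, 5 * np.exp(0.45 * (times - 8)))
t_app, mu = appearance_and_growth(times, areas)
print(f"appearance ~ {t_app} h, growth rate ~ {mu:.2f} 1/h")
```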
Metabolic Labeling of Newly Transcribed RNA for High Resolution Gene Expression Profiling of RNA Synthesis, Processing and Decay in Cell Culture
Authors: Bernd Rädle, Andrzej J. Rutkowski, Zsolt Ruzsics, Caroline C. Friedel, Ulrich H. Koszinowski, Lars Dölken.
Institutions: Max von Pettenkofer Institute, University of Cambridge, Ludwig-Maximilians-University Munich.
The development of whole-transcriptome microarrays and next-generation sequencing has revolutionized our understanding of the complexity of cellular gene expression. Along with a better understanding of the involved molecular mechanisms, precise measurements of the underlying kinetics have become increasingly important. Here, these powerful methodologies face major limitations due to intrinsic properties of the template samples they study, i.e. total cellular RNA. In many cases changes in total cellular RNA occur either too slowly or too quickly to represent the underlying molecular events and their kinetics with sufficient resolution. In addition, the contribution of alterations in RNA synthesis, processing, and decay are not readily differentiated. We recently developed high-resolution gene expression profiling to overcome these limitations. Our approach is based on metabolic labeling of newly transcribed RNA with 4-thiouridine (thus also referred to as 4sU-tagging) followed by rigorous purification of newly transcribed RNA using thiol-specific biotinylation and streptavidin-coated magnetic beads. It is applicable to a broad range of organisms including vertebrates, Drosophila, and yeast. We successfully applied 4sU-tagging to study real-time kinetics of transcription factor activities, provide precise measurements of RNA half-lives, and obtain novel insights into the kinetics of RNA processing. Finally, computational modeling can be employed to generate an integrated, comprehensive analysis of the underlying molecular mechanisms.
Genetics, Issue 78, Cellular Biology, Molecular Biology, Microbiology, Biochemistry, Eukaryota, Investigative Techniques, Biological Phenomena, Gene expression profiling, RNA synthesis, RNA processing, RNA decay, 4-thiouridine, 4sU-tagging, microarray analysis, RNA-seq, RNA, DNA, PCR, sequencing
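One common readout of 4sU-tagging is an RNA half-life. Assuming constant synthesis and first-order decay at steady state (a simplification; the models in the paper may differ), the newly transcribed fraction measured after labeling time t determines the decay rate:

```python
# Minimal worked example: under first-order decay at steady state, the
# labeled fraction after time t is new/total = 1 - exp(-k*t). Hypothetical
# numbers, not measurements from the study.
import math

t = 1.0                  # 4sU labeling time, h
new_over_total = 0.30    # measured newly transcribed / total RNA ratio

k = -math.log(1.0 - new_over_total) / t   # decay rate, 1/h
half_life = math.log(2.0) / k
print(f"decay rate k ~ {k:.3f} 1/h, half-life ~ {half_life:.2f} h")
```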
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli.
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance of individual subjects from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Yale University, Yale University, Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis, yielding a dopamine movie, is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters [1-7]. This aspect of the analysis, temporal variation, is key to our work. Because our model is also linear in parameters, it is computationally practical to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking, and visualization. Pre-processing is applied to the PET data with a unique 'HYPR' spatial filter [8] that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our lp-ntPET model [7] to a conventional model [9]. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
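The statistical comparison step is a nested-model test at each voxel. As a hedged sketch (standard F-test mechanics with hypothetical parameter counts and residuals, not the published lp-ntPET implementation):

```python
# Minimal sketch: F-test comparing a simple (conventional) model to a
# fuller time-varying model fit to the same voxel time course.
import numpy as np
from scipy import stats

def f_test(rss_simple, p_simple, rss_full, p_full, n):
    """F statistic and p-value for nested least-squares models, n time points."""
    num = (rss_simple - rss_full) / (p_full - p_simple)
    den = rss_full / (n - p_full)
    F = num / den
    p = stats.f.sf(F, p_full - p_simple, n - p_full)
    return F, p

# Hypothetical voxel: conventional model (4 params) vs time-varying (7).
F, p = f_test(rss_simple=12.0, p_simple=4, rss_full=8.5, p_full=7, n=60)
print(f"F = {F:.2f}, p = {p:.4f}")  # keep the voxel in the mask if p is small
```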
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Authors: Carmine Di Rienzo, Enrico Gratton, Fabio Beltram, Francesco Cardarelli.
Institutions: Scuola Normale Superiore, Instituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not need to track each molecule; it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region on the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating acquired images at increasing time delays (for example every 2, 3, ... n repetitions). It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as particles move by diffusion. Therefore, fitting the series of autocorrelation functions makes it possible to extract the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the Transferrin Receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion on µm-sized membrane regions in the micro-to-millisecond time range.
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
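To illustrate the final step, assume the peak of the spatial autocorrelation function has already been fit at each delay τ with a Gaussian of variance σ²(τ). A minimal sketch (synthetic numbers, a simplified version of the analysis) recovers the iMSD and an apparent diffusivity:

```python
# Minimal sketch: iMSD(tau) = sigma^2(tau) - sigma^2(0); for free 2D
# diffusion iMSD = 4*D*tau. Fitted variances here are hypothetical.
import numpy as np

tau = np.array([0.01, 0.02, 0.03, 0.04])         # time delays, s
sigma2 = np.array([0.012, 0.020, 0.028, 0.036])  # fitted peak variances, um^2
sigma2_0 = 0.004                                 # extrapolated offset (PSF), um^2

imsd = sigma2 - sigma2_0          # mean square displacement at each delay
D_app = imsd / (4 * tau)          # apparent diffusivity, um^2/s
print("apparent diffusivity (um^2/s):", np.round(D_app, 3))
```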
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission [1-3]. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data [4]. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation [5] involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping [6] and density volume rendering [7]. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
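As a hedged, ArcGIS-free illustration of two of these steps (density estimation and activity-space characterization), the sketch below applies a Gaussian kernel density estimate to hypothetical track points and computes a simple activity radius:

```python
# Minimal sketch (not the authors' toolset): density surface over pedestrian
# GPS points plus one activity-space summary. Synthetic points only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical pre-processed track points (projected x, y in meters).
xy = np.vstack([rng.normal(0, 20, 500), rng.normal(0, 12, 500)])

kde = stats.gaussian_kde(xy)
gx, gy = np.mgrid[-60:60:50j, -40:40:50j]
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# A simple activity-space characteristic: root-mean-square distance from
# the centroid ("activity radius").
centroid = xy.mean(axis=1, keepdims=True)
radius = np.sqrt(((xy - centroid)**2).sum(axis=0).mean())
print(f"activity radius ~ {radius:.1f} m; density grid shape: {density.shape}")
```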
Analyzing Protein Dynamics Using Hydrogen Exchange Mass Spectrometry
Authors: Nikolai Hentze, Matthias P. Mayer.
Institutions: University of Heidelberg.
All cellular processes depend on the functionality of proteins. Although the functionality of a given protein is the direct consequence of its unique amino acid sequence, it is only realized by the folding of the polypeptide chain into a single defined three-dimensional arrangement or, more commonly, into an ensemble of interconverting conformations. Investigating the connection between protein conformation and function is therefore essential for a complete understanding of how proteins are able to fulfill their great variety of tasks. One possibility to study the conformational changes a protein undergoes while progressing through its functional cycle is hydrogen (1H/2H) exchange in combination with high-resolution mass spectrometry (HX-MS). HX-MS is a versatile and robust method that adds a new dimension to structural information obtained by, e.g., crystallography. It is used to study protein folding and unfolding, binding of small-molecule ligands, protein-protein interactions, conformational changes linked to enzyme catalysis, and allostery. In addition, HX-MS is often used when the amount of protein is very limited or crystallization of the protein is not feasible. Here we provide a general protocol for studying protein dynamics with HX-MS and describe, as an example, how to reveal the interaction interface of two proteins in a complex.
Chemistry, Issue 81, Molecular Chaperones, mass spectrometers, Amino Acids, Peptides, Proteins, Enzymes, Coenzymes, Protein dynamics, conformational changes, allostery, protein folding, secondary structure, mass spectrometry
Reconstitution of a Kv Channel into Lipid Membranes for Structural and Functional Studies
Authors: Sungsoo Lee, Hui Zheng, Liang Shi, Qiu-Xing Jiang.
Institutions: University of Texas Southwestern Medical Center at Dallas.
To study the lipid-protein interaction in a reductionistic fashion, it is necessary to incorporate the membrane proteins into membranes of well-defined lipid composition. We are studying the lipid-dependent gating effects in a prototype voltage-gated potassium (Kv) channel, and have worked out detailed procedures to reconstitute the channels into different membrane systems. Our reconstitution procedures take into account both the detergent-induced fusion of vesicles and the fusion of protein/detergent micelles with lipid/detergent mixed micelles, as well as the importance of reaching an equilibrium distribution of lipids among the protein/detergent/lipid and detergent/lipid mixed micelles. Our data suggested that the insertion of the channels in the lipid vesicles is relatively random in orientation, and the reconstitution efficiency is so high that no detectable protein aggregates were seen in fractionation experiments. We have utilized the reconstituted channels to determine the conformational states of the channels in different lipids, record electrical activities of a small number of channels incorporated in planar lipid bilayers, screen for conformation-specific ligands from a phage-displayed peptide library, and support the growth of 2D crystals of the channels in membranes. The reconstitution procedures described here may be adapted for studying other membrane proteins in lipid bilayers, especially for the investigation of lipid effects on eukaryotic voltage-gated ion channels.
Molecular Biology, Issue 77, Biochemistry, Genetics, Cellular Biology, Structural Biology, Biophysics, Membrane Lipids, Phospholipids, Carrier Proteins, Membrane Proteins, Micelles, Molecular Motor Proteins, life sciences, biochemistry, Amino Acids, Peptides, and Proteins, lipid-protein interaction, channel reconstitution, lipid-dependent gating, voltage-gated ion channel, conformation-specific ligands, lipids
Simultaneous Quantification of T-Cell Receptor Excision Circles (TRECs) and K-Deleting Recombination Excision Circles (KRECs) by Real-time PCR
Authors: Alessandra Sottini, Federico Serana, Diego Bertoli, Marco Chiarini, Monica Valotti, Marion Vaglio Tessitore, Luisa Imberti.
Institutions: Spedali Civili di Brescia.
T-cell receptor excision circles (TRECs) and K-deleting recombination excision circles (KRECs) are circularized DNA elements formed during the recombination processes that create T- and B-cell receptors. Because TRECs and KRECs are unable to replicate, they are diluted with each cell division, but persist in the cell. Their quantity in peripheral blood can therefore be considered an estimate of thymic and bone marrow output. By combining the well-established and commonly used TREC assay with a modified version of the KREC assay, we have developed a duplex quantitative real-time PCR that allows quantification of both newly produced T and B lymphocytes in a single assay. The numbers of TRECs and KRECs are obtained using a standard curve prepared by serially diluting TREC and KREC signal joints cloned in a bacterial plasmid, together with a fragment of the T-cell receptor alpha constant gene that serves as a reference gene. Results are reported as the number of TRECs and KRECs per 10^6 cells or per ml of blood. The quantification of these DNA fragments has proven useful for monitoring immune reconstitution following bone marrow transplantation in both children and adults, for improved characterization of immune deficiencies, and for better understanding of certain immunomodulating drug activities.
Immunology, Issue 94, B lymphocytes, primary immunodeficiency, real-time PCR, immune recovery, T-cell homeostasis, T lymphocytes, thymic output, bone marrow output
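The standard-curve arithmetic can be sketched briefly (hypothetical Ct values, not the published assay parameters): Ct is linear in log10(copy number), so a fit to the plasmid dilution series converts a sample Ct into copies:

```python
# Minimal sketch of absolute quantification from a plasmid standard curve.
import numpy as np

# Serial dilutions of the TREC/KREC plasmid: known copies and measured Ct.
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
ct_std = np.array([17.1, 20.5, 23.9, 27.2, 30.6])

# Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct_std, 1)
efficiency = 10**(-1 / slope) - 1          # ~1.0 means 100% PCR efficiency
print(f"slope = {slope:.2f}, efficiency ~ {efficiency:.0%}")

def copies_from_ct(ct):
    """Invert the standard curve to get copies per reaction."""
    return 10**((ct - intercept) / slope)

# A hypothetical sample Ct; normalization per 1e6 cells would then use the
# reference-gene (TCR alpha constant) copy number.
print(f"sample ~ {copies_from_ct(25.0):.0f} copies per reaction")
```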
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first computed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, the ventricles are segmented from the CT scan and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians make decisions, such as recommending for or against invasive ICP monitoring.
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
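As a hedged sketch of the prediction component (the paper built its model in RapidMiner; this illustration uses scikit-learn with hypothetical features and labels):

```python
# Minimal sketch: an SVM classifier with feature scaling and
# cross-validation, standing in for the ICP pre-screening model.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
# Hypothetical feature matrix per patient: [midline shift (mm), texture
# score, blood amount, age, injury severity score].
X = rng.normal(size=(120, 5))
# Hypothetical binary label: elevated ICP or not.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 120) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.round(2))
```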
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
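The matching algorithm itself is not described here; purely as an illustration of how abstract-to-video matching of this kind can work, the sketch below ranks hypothetical video descriptions against an abstract by TF-IDF cosine similarity:

```python
# Illustrative only: not JoVE's actual algorithm. Rank candidate video
# texts against one abstract by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstract = "kernel density estimation of bimodal recovery rate distributions"
video_texts = [
    "curve fitting and distribution estimation methods",   # hypothetical
    "mouse behavioral screening protocols",                # hypothetical
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform([abstract] + video_texts)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
for s, t in sorted(zip(scores, video_texts), reverse=True):
    print(f"{s:.2f}  {t}")
```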

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there is simply no content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.