JoVE Visualize
 
PubMed Article
Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.
PLoS ONE
PUBLISHED: 01-01-2013
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the large number of iterations required by the EM algorithm. To decrease the computing time we explored the use of faster-converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was sought using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solutions of the mixed model equations, one per parameter to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing an MC algorithm. Overall, the use of an MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with different kinds of large-scale problem settings.
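The Newton-type updates discussed in the abstract above can be illustrated with a one-parameter toy problem. The sketch below is a simplified assumption-laden example, not the authors' multi-trait mixed-model implementation: it applies a Newton-Raphson update to the REML score of a single variance component, whereas the paper's MC NR/AI variants replace the analytic score and information with Monte Carlo estimates obtained from sampled data.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=3.0, scale=np.sqrt(2.0), size=200)   # data with true variance 2
n, S = y.size, np.sum((y - y.mean()) ** 2)

# REML-style score and observed information for a single variance component v.
def score(v):
    return -(n - 1) / (2 * v) + S / (2 * v ** 2)

def information(v):
    # negative second derivative of the (restricted) log-likelihood
    return -((n - 1) / (2 * v ** 2) - S / v ** 3)

v = 1.0                                   # starting value
for it in range(20):
    step = score(v) / information(v)      # Newton-Raphson update: v <- v + I^{-1} * score
    v += step
    if abs(step) < 1e-8:
        break
print(f"REML estimate of the variance after {it + 1} iterations: {v:.4f}")
```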
Authors: Jonathan S. Schilling, K. Brook Jacobson.
Published: 02-03-2011
ABSTRACT
The two principal methods for studying fungal biodegradation of lignocellulosic plant tissues were developed for wood preservative testing (soil-block; agar-block). It is well accepted that soil-block microcosms yield higher decay rates, fewer moisture issues, lower variability among studies, and higher thresholds of preservative toxicity. Soil-block testing is thus the more utilized technique and has been standardized by the American Society for Testing and Materials (ASTM) (method D 1413-07). The soil-block design has drawbacks, however, including its reliance on locally variable soil sources and its limited control over nutrients external (exogenous) to the decaying tissues. These drawbacks have emerged as a problem in applying this method to other, increasingly popular research aims. These modern aims include degrading lignocellulosics for bioenergy research, testing bioremediation of co-metabolized toxics, evaluating oxidative mechanisms, and tracking translocated elements along hyphal networks. Soil-block microcosms do not offer enough control for these applications; a refined agar-block approach is necessary. Here, we use the brown rot wood-degrading fungus Serpula lacrymans to degrade wood in agar-block microcosms, using deep Petri dishes with low-calcium agar. We test the role of exogenous gypsum on decay in a time series to demonstrate the utility and expected variability of the approach. Blocks from a single board rip (longitudinal cut) are conditioned, weighed, autoclaved, and introduced aseptically atop plastic mesh. Fungal inoculations are made at each block face, with exogenous gypsum added at the interfaces. Harvests are aseptic until the final destructive harvest. These microcosms are designed to avoid block contact with agar or the Petri dish walls. Condensation is minimized during plate pours and during incubation. Finally, inoculum/gypsum/wood spacing is minimized, but without allowing contact. These less technical aspects of agar-block design are also the most common causes of failure and the key source of variability among studies. Video publication is therefore useful in this case, and we demonstrate low-variability, high-quality results.
24 Related JoVE Articles!
Selective Capture of 5-hydroxymethylcytosine from Genomic DNA
Authors: Yujing Li, Chun-Xiao Song, Chuan He, Peng Jin.
Institutions: Emory University School of Medicine, The University of Chicago.
5-methylcytosine (5-mC) constitutes ~2-8% of the total cytosines in human genomic DNA and impacts a broad range of biological functions, including gene expression, maintenance of genome integrity, parental imprinting, X-chromosome inactivation, regulation of development, aging, and cancer1. Recently, the presence of an oxidized 5-mC, 5-hydroxymethylcytosine (5-hmC), was discovered in mammalian cells, in particular in embryonic stem (ES) cells and neuronal cells2-4. 5-hmC is generated by oxidation of 5-mC catalyzed by TET family iron (II)/α-ketoglutarate-dependent dioxygenases2, 3. 5-hmC is proposed to be involved in the maintenance of mouse embryonic stem (mES) cells, normal hematopoiesis and malignancies, and zygote development2, 5-10. To better understand the function of 5-hmC, a reliable and straightforward sequencing system is essential. Traditional bisulfite sequencing cannot distinguish 5-hmC from 5-mC11. To unravel the biology of 5-hmC, we have developed a highly efficient and selective chemical approach to label and capture 5-hmC, taking advantage of a bacteriophage enzyme that adds a glucose moiety to 5-hmC specifically12. Here we describe a straightforward two-step procedure for selective chemical labeling of 5-hmC. In the first labeling step, 5-hmC in genomic DNA is labeled with 6-azide-glucose in a reaction catalyzed by β-GT, a glucosyltransferase from T4 bacteriophage, which transfers the 6-azide-glucose to 5-hmC from the modified cofactor UDP-6-N3-Glc (6-N3UDPG). In the second step, biotinylation, a disulfide biotin linker is attached to the azide group by click chemistry. Both steps are highly specific and efficient, leading to complete labeling regardless of the abundance of 5-hmC in genomic regions and giving extremely low background. Following biotinylation of 5-hmC, the 5-hmC-containing DNA fragments are selectively captured using streptavidin beads in a density-independent manner. The resulting 5-hmC-enriched DNA fragments can be used for downstream analyses, including next-generation sequencing. Our selective labeling and capture protocol confers high sensitivity and is applicable to any source of genomic DNA with variable/diverse 5-hmC abundances. Although the main purpose of this protocol is its downstream application (i.e., next-generation sequencing to map the 5-hmC distribution in the genome), it is also compatible with single-molecule, real-time (SMRT) DNA sequencing, which is capable of delivering single-base resolution sequencing of 5-hmC.
Genetics, Issue 68, Chemistry, Biophysics, 5-Hydroxymethylcytosine, chemical labeling, genomic DNA, high-throughput sequencing
Simultaneous Electroencephalography, Real-time Measurement of Lactate Concentration and Optogenetic Manipulation of Neuronal Activity in the Rodent Cerebral Cortex
Authors: William C. Clegern, Michele E. Moore, Michelle A. Schmidt, Jonathan Wisor.
Institutions: Washington State University.
Although the brain represents less than 5% of the body by mass, it utilizes approximately one quarter of the glucose used by the body at rest1. The function of non-rapid eye movement sleep (NREMS), the largest portion of sleep by time, is uncertain. However, one salient feature of NREMS is a significant reduction in the rate of cerebral glucose utilization relative to wakefulness2-4. This and other findings have led to the widely held belief that sleep serves a function related to cerebral metabolism. Yet, the mechanisms underlying the reduction in cerebral glucose metabolism during NREMS remain to be elucidated. One phenomenon associated with NREMS that might impact cerebral metabolic rate is the occurrence of slow waves, oscillations at frequencies less than 4 Hz, in the electroencephalogram5,6. These slow waves detected at the level of the skull or cerebral cortical surface reflect the oscillations of underlying neurons between a depolarized/up state and a hyperpolarized/down state7. During the down state, cells do not fire action potentials for intervals of up to several hundred milliseconds. Restoration of ionic concentration gradients subsequent to action potentials represents a significant metabolic load on the cell8; the absence of action potentials during down states associated with NREMS may contribute to reduced metabolism relative to wake. Two technical challenges had to be addressed in order for this hypothetical relationship to be tested. First, it was necessary to measure cerebral glycolytic metabolism with a temporal resolution reflective of the dynamics of the cerebral EEG (that is, over seconds rather than minutes). To do so, we measured the concentration of lactate, the product of aerobic glycolysis and therefore a readout of the rate of glucose metabolism, in the brains of mice. Lactate was measured using a lactate oxidase based real-time sensor embedded in the frontal cortex. The sensing mechanism consists of a platinum-iridium electrode surrounded by a layer of lactate oxidase molecules. Metabolism of lactate by lactate oxidase produces hydrogen peroxide, which generates a current in the platinum-iridium electrode. Thus, an increase in cerebral glycolysis provides an increase in the concentration of substrate for lactate oxidase, which is then reflected as an increased current at the sensing electrode. Second, it was necessary to measure these variables while manipulating the excitability of the cerebral cortex, in order to isolate this variable from other facets of NREMS. We devised an experimental system for simultaneous measurement of neuronal activity via the electroencephalogram, measurement of glycolytic flux via a lactate biosensor, and manipulation of cerebral cortical neuronal activity via optogenetic activation of pyramidal neurons. We have utilized this system to document the relationship between sleep-related electroencephalographic waveforms and the moment-to-moment dynamics of lactate concentration in the cerebral cortex. The protocol may be useful for anyone interested in studying, in freely behaving rodents, the relationship between neuronal activity measured at the electroencephalographic level and cellular energetics within the brain.
Neuroscience, Issue 70, Physiology, Anatomy, Medicine, Pharmacology, Surgery, Sleep, rapid eye movement, glucose, glycolysis, pyramidal neurons, channelrhodopsin, optogenetics, optogenetic stimulation, electroencephalogram, EEG, EMG, brain, animal model
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail the parameter estimation procedures and the simple, finite-state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse, followed by application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model that can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting-edge advance in HMMs9, one that utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed that our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters, illustrating both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
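To give a feel for what change-point segmentation of a read-density profile does, here is a minimal toy sketch. It uses plain binary segmentation on squared error, not the authors' Bayesian change-point (BCP) model, and the thresholds and simulated counts are illustrative assumptions.

```python
import numpy as np

def best_split(x):
    """Best single change point by reduction in within-segment squared error."""
    n, total = len(x), np.sum((x - x.mean()) ** 2)
    best_gain, best_i = 0.0, None
    for i in range(2, n - 1):
        left, right = x[:i], x[i:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if total - sse > best_gain:
            best_gain, best_i = total - sse, i
    return best_i, best_gain

def segment(x, start=0, min_gain=200.0, out=None):
    """Recursive binary segmentation; returns sorted change-point positions."""
    if out is None:
        out = []
    i, gain = best_split(x)
    if i is None or gain < min_gain:
        return sorted(out)
    out.append(start + i)
    segment(x[:i], start, min_gain, out)
    segment(x[i:], start + i, min_gain, out)
    return sorted(out)

rng = np.random.default_rng(0)
# Toy read-density track: background, an enriched island, background again
density = np.concatenate([rng.poisson(2, 300), rng.poisson(12, 80), rng.poisson(2, 300)]).astype(float)
print("change points near:", segment(density))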
A Practical Guide to Phylogenetics for Nonexperts
Authors: Damien O'Halloran.
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research questions. However, many of these researchers are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and also as an educational resource that could be incorporated into a classroom or teaching lab.
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
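As a hedged illustration of the tree-building step, the sketch below clusters a toy distance matrix with plain UPGMA in pure Python. It is not a substitute for the maximum likelihood or Bayesian reconstructions described in the article, and the taxon names and distances are invented for demonstration.

```python
import itertools

def upgma(names, dist):
    """Toy UPGMA: repeatedly merge the two closest clusters (average linkage)."""
    clusters = {i: (names[i], 1) for i in range(len(names))}       # id -> (label, size)
    d = {frozenset(p): dist[p[0]][p[1]]
         for p in itertools.combinations(range(len(names)), 2)}
    nxt = len(names)
    while len(clusters) > 1:
        pair = min(d, key=d.get)                                    # closest pair of clusters
        a, b = tuple(pair)
        (la, na), (lb, nb) = clusters.pop(a), clusters.pop(b)
        d.pop(pair)
        for k in list(clusters):                                    # average-linkage distance update
            d[frozenset((nxt, k))] = (na * d.pop(frozenset((a, k))) +
                                      nb * d.pop(frozenset((b, k)))) / (na + nb)
        clusters[nxt] = (f"({la},{lb})", na + nb)
        nxt += 1
    return next(iter(clusters.values()))[0]

# Invented ultrametric distances between four taxa
names = ["human", "chimp", "mouse", "chicken"]
dist = [[0, 2, 10, 16],
        [2, 0, 10, 16],
        [10, 10, 0, 16],
        [16, 16, 16, 0]]
print(upgma(names, dist))   # expected grouping: (((human,chimp),mouse),chicken)
```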
Mapping Molecular Diffusion in the Plasma Membrane by Multiple-Target Tracing (MTT)
Authors: Vincent Rouger, Nicolas Bertaux, Tomasz Trombik, Sébastien Mailfert, Cyrille Billaudeau, Didier Marguet, Arnauld Sergé.
Institutions: Parc scientifique de Luminy, Aix-Marseille University, Technopôle de Château-Gombert.
Our goal is to obtain a comprehensive description of molecular processes occurring at cellular membranes in different biological functions. We aim at characterizing the complex organization and dynamics of the plasma membrane at the single-molecule level by developing analytic tools dedicated to Single-Particle Tracking (SPT) at high density: Multiple-Target Tracing (MTT)1. Single-molecule videomicroscopy, offering millisecond and nanometric resolution1-11, allows a detailed representation of membrane organization12-14 by accurately mapping descriptors such as cell receptor localization, mobility, confinement or interactions. We revisited SPT, both experimentally and algorithmically. Experimental aspects included optimizing the setup and cell labeling, with a particular emphasis on reaching the highest possible labeling density, in order to provide a dynamic snapshot of molecular dynamics as it occurs within the membrane. Algorithmic issues concerned each step used for rebuilding trajectories: peak detection, estimation and reconnection, addressed by specific tools from image analysis15,16. Implementing deflation after detection allows rescuing peaks initially hidden by neighboring, stronger peaks. Of note, improving detection directly impacts reconnection, by reducing gaps within trajectories. Performance has been evaluated using Monte-Carlo simulations for various labeling densities and noise values, which typically represent the two major limitations for parallel measurements at high spatiotemporal resolution. The nanometric accuracy17 obtained for single molecules, using either successive on/off photoswitching or non-linear optics, can deliver exhaustive observations. This is the basis of nanoscopy methods17 such as STORM18, PALM19,20, RESOLFT21 or STED22,23, which may often require imaging fixed samples. The central task is the detection and estimation of diffraction-limited peaks emanating from single molecules. Hence, given adequate assumptions, such as using a constant positional accuracy instead of Brownian motion, MTT is directly suited to nanoscopic analyses. Furthermore, MTT can fundamentally be used at any scale: not only for molecules, but also for cells or animals, for instance. Hence, MTT is a powerful tracking algorithm that finds applications at molecular and cellular scales.
Physics, Issue 63, Single-particle tracking, single-molecule fluorescence microscopy, image analysis, tracking algorithm, high-resolution diffusion map, plasma membrane lateral organization
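A stripped-down sketch of the two algorithmic steps named above, peak detection and reconnection, is shown below on two synthetic frames. It is a toy with assumed thresholds and spot parameters, not the MTT implementation.

```python
import numpy as np

def gaussian_spot(shape, y, x, sigma=1.5, amp=100.0):
    yy, xx = np.indices(shape)
    return amp * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))

def detect_peaks(img, thr):
    """Pixels above thr that are >= all 8 neighbours (toy spot detector)."""
    core = img[1:-1, 1:-1]
    neigh = np.stack([img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)])
    mask = (core > thr) & np.all(core >= neigh, axis=0)
    ys, xs = np.nonzero(mask)
    return np.column_stack([ys + 1, xs + 1]).astype(float)

def link(prev, curr, r_max=4.0):
    """Greedy nearest-neighbour reconnection between consecutive frames."""
    pairs, used = [], set()
    for i, p in enumerate(curr):
        dists = np.linalg.norm(prev - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= r_max and j not in used:
            pairs.append((j, i))
            used.add(j)
    return pairs

rng = np.random.default_rng(0)
shape = (64, 64)
frame1 = gaussian_spot(shape, 20, 20) + gaussian_spot(shape, 40, 45) + rng.normal(0, 2, shape)
frame2 = gaussian_spot(shape, 22, 21) + gaussian_spot(shape, 41, 43) + rng.normal(0, 2, shape)
p1, p2 = detect_peaks(frame1, thr=30), detect_peaks(frame2, thr=30)
print("frame 1 peaks:\n", p1, "\nframe 2 peaks:\n", p2, "\nlinks:", link(p1, p2))
```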
Optical Recording of Suprathreshold Neural Activity with Single-cell and Single-spike Resolution
Authors: Gayathri Nattar Ranganathan, Helmut J. Koester.
Institutions: The University of Texas at Austin.
Signaling of information in the vertebrate central nervous system is often carried by populations of neurons rather than individual neurons. Also propagation of suprathreshold spiking activity involves populations of neurons. Empirical studies addressing cortical function directly thus require recordings from populations of neurons with high resolution. Here we describe an optical method and a deconvolution algorithm to record neural activity from up to 100 neurons with single-cell and single-spike resolution. This method relies on detection of the transient increases in intracellular somatic calcium concentration associated with suprathreshold electrical spikes (action potentials) in cortical neurons. High temporal resolution of the optical recordings is achieved by a fast random-access scanning technique using acousto-optical deflectors (AODs)1. Two-photon excitation of the calcium-sensitive dye results in high spatial resolution in opaque brain tissue2. Reconstruction of spikes from the fluorescence calcium recordings is achieved by a maximum-likelihood method. Simultaneous electrophysiological and optical recordings indicate that our method reliably detects spikes (>97% spike detection efficiency), has a low rate of false positive spike detection (< 0.003 spikes/sec), and a high temporal precision (about 3 msec) 3. This optical method of spike detection can be used to record neural activity in vitro and in anesthetized animals in vivo3,4.
Neuroscience, Issue 67, functional calcium imaging, spatiotemporal patterns of activity, dithered random-access scanning
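The sketch below illustrates the general idea of recovering spike times from a noisy fluorescence trace by inverting a known indicator decay. It is a simplified AR(1) deconvolution with invented parameters, not the maximum-likelihood reconstruction used in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.002, 4.0                           # 500 Hz sampling, 4 s trace
n = int(T / dt)
spikes_true = np.zeros(n)
spikes_true[rng.choice(np.arange(1, n), 12, replace=False)] = 1.0

# Fluorescence model: each spike adds a calcium transient that decays with time
# constant tau, plus Gaussian noise (an AR(1) approximation of indicator dynamics).
tau, amp, noise = 0.5, 1.0, 0.1
gamma = np.exp(-dt / tau)
f = np.zeros(n)
for t in range(1, n):
    f[t] = gamma * f[t - 1] + amp * spikes_true[t]
f += rng.normal(0, noise, n)

# Inference: invert the AR(1) kernel (f_t - gamma * f_{t-1}) and threshold.
drive = f[1:] - gamma * f[:-1]
detected = np.flatnonzero(drive > 0.5) + 1
print("true spike bins:     ", np.flatnonzero(spikes_true))
print("detected spike bins: ", detected)
```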
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve our understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by the users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was ranked as the best program for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Measurement and Analysis of Atomic Hydrogen and Diatomic Molecular AlO, C2, CN, and TiO Spectra Following Laser-induced Optical Breakdown
Authors: Christian G. Parigger, Alexander C. Woods, Michael J. Witte, Lauren D. Swafford, David M. Surmick.
Institutions: University of Tennessee Space Institute.
In this work, we present time-resolved measurements of atomic and diatomic spectra following laser-induced optical breakdown. A typical LIBS arrangement is used. Here we operate a Nd:YAG laser at a repetition rate of 10 Hz at the fundamental wavelength of 1,064 nm. The 14 nsec pulses with an energy of 190 mJ/pulse are focused to a 50 µm spot size to generate a plasma from optical breakdown or laser ablation in air. The microplasma is imaged onto the entrance slit of a 0.6 m spectrometer, and spectra are recorded using an 1,800 grooves/mm grating and either an intensified linear diode array with an optical multichannel analyzer (OMA) or an ICCD. Of interest are Stark-broadened atomic lines of the hydrogen Balmer series, which are used to infer the electron density. We also elaborate on temperature measurements from diatomic emission spectra of aluminum monoxide (AlO), carbon (C2), cyanogen (CN), and titanium monoxide (TiO). The experimental procedures include wavelength and sensitivity calibrations. Analysis of the recorded molecular spectra is accomplished by fitting the data with tabulated line strengths. Furthermore, Monte-Carlo type simulations are performed to estimate the error margins. Time-resolved measurements are essential for the transient plasma commonly encountered in LIBS.
Physics, Issue 84, Laser Induced Breakdown Spectroscopy, Laser Ablation, Molecular Spectroscopy, Atomic Spectroscopy, Plasma Diagnostics
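As an illustration of the Monte-Carlo error-margin idea mentioned above, the sketch below fits a Boltzmann-plot-style straight line to synthetic line intensities to extract a temperature and estimates its uncertainty by refitting noisy replicates. All values are invented, and this is far simpler than the tabulated line-strength fitting described in the article.

```python
import numpy as np

k_B = 8.617e-5                           # Boltzmann constant in eV/K
T_true = 8000.0
E = np.linspace(0.5, 4.0, 12)            # upper-level energies (eV), illustrative
y_true = 30.0 - E / (k_B * T_true)       # ln(I / (g A)) is linear in E with slope -1/kT

rng = np.random.default_rng(1)
y_obs = y_true + rng.normal(0, 0.05, E.size)
slope, _ = np.polyfit(E, y_obs, 1)
T_fit = -1.0 / (k_B * slope)

# Monte-Carlo error margin: refit many noisy replicates and take the spread
T_mc = []
for _ in range(2000):
    slope_i, _ = np.polyfit(E, y_obs + rng.normal(0, 0.05, E.size), 1)
    T_mc.append(-1.0 / (k_B * slope_i))
print(f"T = {T_fit:.0f} K +/- {np.std(T_mc):.0f} K")
```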
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
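A minimal sketch of the recasting arithmetic described above: expected signal yields are formed from cross section × luminosity × acceptance × efficiency and compared with published upper limits on signal events, choosing the signal region with the best expected sensitivity. All numbers below are hypothetical placeholders, not values from ATLAS or CMS.

```python
# Hypothetical per-region numbers; real analyses publish A*eps and S95 tables.
signal_regions = {
    "SR-A": {"acc_eff": 0.021, "s95_exp": 12.3, "s95_obs": 10.8},
    "SR-B": {"acc_eff": 0.004, "s95_exp": 4.1,  "s95_obs": 5.6},
}
lumi_ifb = 20.3      # integrated luminosity [fb^-1], illustrative
xsec_fb = 85.0       # cross section for this simplified-model point [fb], illustrative

def expected_signal(xsec_fb, lumi_ifb, acc_eff):
    return xsec_fb * lumi_ifb * acc_eff   # expected signal events in the region

# Conservative recasting: pick the region with the best *expected* sensitivity,
# then compare the predicted yield with the *observed* upper limit there.
best = max(signal_regions,
           key=lambda sr: expected_signal(xsec_fb, lumi_ifb, signal_regions[sr]["acc_eff"])
           / signal_regions[sr]["s95_exp"])
n_sig = expected_signal(xsec_fb, lumi_ifb, signal_regions[best]["acc_eff"])
print(best, f"predicted signal = {n_sig:.1f} events,",
      "EXCLUDED" if n_sig > signal_regions[best]["s95_obs"] else "allowed")
```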
Computed Tomography-guided Time-domain Diffuse Fluorescence Tomography in Small Animals for Localization of Cancer Biomarkers
Authors: Kenneth M. Tichauer, Robert W. Holt, Kimberley S. Samkoe, Fadi El-Ghussein, Jason R. Gunn, Michael Jermyn, Hamid Dehghani, Frederic Leblond, Brian W. Pogue.
Institutions: Dartmouth College, University of Birmingham.
Small animal fluorescence molecular imaging (FMI) can be a powerful tool for preclinical drug discovery and development studies1. However, light absorption by tissue chromophores (e.g., hemoglobin, water, lipids, melanin) typically limits optical signal propagation through thicknesses larger than a few millimeters2. Compared to other visible wavelengths, tissue absorption of red and near-infrared (near-IR) light decreases dramatically and non-elastic scattering becomes the dominant light-tissue interaction mechanism. The relatively recent development of fluorescent agents that absorb and emit light in the near-IR range (600-1000 nm) has driven the development of imaging systems and light propagation models that can achieve whole body three-dimensional imaging in small animals3. Despite great strides in this area, the ill-posed nature of diffuse fluorescence tomography remains a significant problem for the stability, contrast recovery and spatial resolution of image reconstruction techniques, and the optimal approach to FMI in small animals has yet to be agreed on. The majority of research groups have invested in charge-coupled device (CCD)-based systems that provide abundant tissue-sampling but suboptimal sensitivity4-9, while our group and a few others10-13 have pursued systems based on very high sensitivity detectors, which at this time allow dense tissue sampling to be achieved only at the cost of low imaging throughput. Here we demonstrate the methodology for applying single-photon detection technology in a fluorescence tomography system to localize a cancerous brain lesion in a mouse model. The fluorescence tomography (FT) system employed single photon counting using photomultiplier tubes (PMT) and information-rich time-domain light detection in a non-contact conformation11. This provides simultaneous collection of transmitted excitation and emission light, and includes automatic fluorescence excitation exposure control14, laser referencing, and co-registration with a small animal computed tomography (microCT) system15. A nude mouse model was used for imaging. The animal was inoculated orthotopically with a human glioma cell line (U251) in the left cerebral hemisphere and imaged 2 weeks later. The tumor was made to fluoresce by injecting a fluorescent tracer, IRDye 800CW-EGF (LI-COR Biosciences, Lincoln, NE), targeted to the epidermal growth factor receptor, a cell membrane protein known to be overexpressed in the U251 tumor line and many other cancers18. A second, untargeted fluorescent tracer, Alexa Fluor 647 (Life Technologies, Grand Island, NY), was also injected to account for non-receptor mediated effects on the uptake of the targeted tracer and to provide a means of quantifying tracer binding and receptor availability/density27. A CT-guided, time-domain algorithm was used to reconstruct the location of both fluorescent tracers (i.e., the location of the tumor) in the mouse brain, and their ability to localize the tumor was verified by contrast-enhanced magnetic resonance imaging. Though demonstrated for fluorescence imaging in a glioma mouse model, the methodology presented in this video can be extended to different tumor models in various small animal models, potentially up to the size of a rat17.
Cancer Biology, Issue 65, Medicine, Physics, Molecular Biology, fluorescence, glioma, light transport, tomography, CT, molecular imaging, epidermal growth factor receptor, biomarker
Real-time Cytotoxicity Assays in Human Whole Blood
Authors: Ching-Wen Hsiao, Yen-Ting Lo, Hong Liu, Sonny C. Hsiao.
Institutions: Adheren, Inc, Eureka Therapeutics.
A live cell-based whole blood cytotoxicity assay (WCA) that allows access to temporal information on overall cell cytotoxicity is developed with high-throughput cell positioning technology. The targeted tumor cell populations are first immobilized into an array format and labeled with green fluorescent cytosolic dyes. Following cell array formation, antibody drugs are added in combination with human whole blood. Propidium iodide (PI) is then added to assess cell death. The cell array is analyzed with an automatic imaging system. While the cytosolic dye labels the targeted tumor cell populations, PI labels the dead tumor cell populations. Thus, the percentage of target cancer cells killed can be quantified by comparing the number of dead targeted cells with the number of surviving targeted cells. With this method, researchers are able to access time-dependent and dose-dependent cell cytotoxicity information. Remarkably, no hazardous radiochemicals are used. The WCA presented here has been tested with lymphoma, leukemia, and solid tumor cell lines. Therefore, WCA allows researchers to assess drug efficacy under highly relevant ex vivo conditions.
Medicine, Issue 93, whole blood assay, cytotoxicity assay, cell array, single cell array, drug screening, cancer drug screening, whole blood cytotoxicity assay, real-time cytotoxicity assay, high content imaging, high throughput imaging, cell-based assay.
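The per-time-point killing calculation described above reduces to simple counting. A minimal sketch with hypothetical cell counts (not data from the assay):

```python
def percent_killing(n_dead, n_alive):
    """Percentage of target cells killed = dead / (dead + surviving) * 100."""
    total = n_dead + n_alive
    return 100.0 * n_dead / total if total else 0.0

# Hypothetical counts from the imaged cell array at successive time points
time_h  = [0, 2, 4, 8, 24]
n_alive = [480, 430, 360, 250, 120]   # cytosolic-dye-positive, PI-negative targets
n_dead  = [5, 48, 115, 228, 352]      # PI-positive targets
for t, a, d in zip(time_h, n_alive, n_dead):
    print(f"{t:>2} h: {percent_killing(d, a):5.1f}% target cell killing")
```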
Semi-automated Optical Heartbeat Analysis of Small Hearts
Authors: Karen Ocorr, Martin Fink, Anthony Cammarato, Sanford I. Bernstein, Rolf Bodmer.
Institutions: The Sanford Burnham Institute for Medical Research, San Diego State University.
We have developed a method for analyzing high-speed optical recordings from Drosophila, zebrafish and embryonic mouse hearts (Fink et al., 2009). Our Semi-automatic Optical Heartbeat Analysis (SOHA) uses a novel movement detection algorithm that is able to detect cardiac movements associated with individual contractile and relaxation events. The program provides a host of physiologically relevant readouts, including systolic and diastolic intervals and heart rate, as well as qualitative and quantitative measures of heartbeat arrhythmicity. The program also calculates heart diameter measurements during both diastole and systole, from which fractional shortening and fractional area changes are calculated. Output is provided as a digital file compatible with most spreadsheet programs. Measurements are made for every heartbeat in a record, increasing the statistical power of the output. We demonstrate each of the steps where user input is required and show the application of our methodology to the analysis of heart function in all three genetically tractable heart models.
Physiology, Issue 31, Drosophila, zebrafish, mouse, heart, myosin, dilated, restricted, cardiomyopathy, KCNQ, movement detection
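The diameter-based readouts mentioned above are straightforward to compute once per-beat measurements are available. A small sketch with hypothetical values (not SOHA output files):

```python
def fractional_shortening(dd, sd):
    """Fractional shortening from diastolic (dd) and systolic (sd) diameters."""
    return (dd - sd) / dd

def heart_rate(beat_periods_s):
    """Mean heart rate (Hz) from per-beat periods (diastolic + systolic interval)."""
    return 1.0 / (sum(beat_periods_s) / len(beat_periods_s))

# Hypothetical per-beat measurements from a Drosophila heart movie
diastolic_diam_um = [78.0, 80.5, 79.2]
systolic_diam_um  = [41.0, 43.5, 40.8]
beat_periods_s    = [0.52, 0.55, 0.50]

fs = [fractional_shortening(d, s) for d, s in zip(diastolic_diam_um, systolic_diam_um)]
print("fractional shortening per beat:", [round(x, 2) for x in fs])
print(f"mean heart rate: {heart_rate(beat_periods_s):.2f} Hz")
```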
Guide Wire Assisted Catheterization and Colored Dye Injection for Vascular Mapping of Monochorionic Twin Placentas
Authors: Eric B. Jelin, Samuel C. Schecter, Kelly D. Gonzales, Shinjiro Hirose, Hanmin Lee, Geoffrey A. Machin, Larry Rand, Vickie A. Feldstein.
Institutions: University of California, San Francisco, University of Alberta.
Monochorionic (MC) twin pregnancies are associated with significantly higher morbidity and mortality rates than dichorionic twins. Approximately 50% of MC twin pregnancies develop complications arising from the shared placenta and associated vascular connections1. Severe twin-to-twin transfusion syndrome (TTTS) is reported to account for approximately 20% of these complications2,3. Inter-twin vascular connections occur in almost all MC placentas and are related to the prognosis and outcome of these high-risk twin pregnancies. The number, size and type of connections have been implicated in the development of TTTS and other MC twin conditions. Three types of inter-twin vascular connections occur: 1) artery-to-vein connections (AVs), in which a branch artery carrying deoxygenated blood from one twin courses along the fetal surface of the placenta and dives into a placental cotyledon. Blood flows via a deep intraparenchymal capillary network into a draining vein that emerges at the fetal surface of the placenta and brings oxygenated blood toward the other twin. There is unidirectional flow from the twin supplying the afferent artery toward the twin receiving the efferent vein; 2) artery-to-artery connections (AAs), in which a branch artery from each twin meets directly on the superficial placental surface, resulting in a vessel with pulsatile bidirectional flow; and 3) vein-to-vein connections (VVs), in which a branch vein from each twin meets directly on the superficial placental surface, allowing low-pressure bidirectional flow. In utero obstetric sonography with targeted Doppler interrogation has been used to identify the presence of AV and AA connections4. Prenatally detected AAs that have been confirmed by postnatal placental injection studies have been shown to be associated with an improved prognosis for both twins5. Furthermore, fetoscopic laser ablation of inter-twin vascular connections on the fetal surface of the shared placenta is now the preferred treatment for early, severe TTTS. Postnatal placental injection studies provide a valuable method to confirm the accuracy of prenatal Doppler ultrasound findings and the efficacy of fetal laser therapy6. Using colored dyes separately hand-injected into the arterial and venous circulations of each twin, the technique highlights and delineates AVs, AAs, and VVs. This definitive demonstration of MC placental vascular anatomy may then be correlated with Doppler ultrasound findings and neonatal outcome to enhance our understanding of the pathophysiology of MC twinning and its sequelae. Here we demonstrate our placental injection technique.
Medicine, Issue 55, placenta, monochorionic twins, vascular mapping, twin-to-twin transfusion syndrome (TTTS), obstetrics, fetal surgery
The Tomato/GFP-FLP/FRT Method for Live Imaging of Mosaic Adult Drosophila Photoreceptor Cells
Authors: Pierre Dourlen, Clemence Levet, Alexandre Mejat, Alexis Gambis, Bertrand Mollereau.
Institutions: Ecole Normale Supérieure de Lyon, Université Lille-Nord de France, The Rockefeller University.
The Drosophila eye is widely used as a model for studies of development and neuronal degeneration. With the powerful mitotic recombination technique, elegant genetic screens based on clonal analysis have led to the identification of signaling pathways involved in eye development and photoreceptor (PR) differentiation at larval stages. We describe here the Tomato/GFP-FLP/FRT method, which can be used for rapid clonal analysis in the eye of living adult Drosophila. Fluorescent photoreceptor cells are imaged with the cornea neutralization technique, on retinas with mosaic clones generated by flippase-mediated recombination. This method has several major advantages over classical histological sectioning of the retina: it can be used for high-throughput screening and has proved to be an effective method for identifying the factors regulating PR survival and function. It can be used for kinetic analyses of PR degeneration in the same living animal over several weeks, to demonstrate the requirement for specific genes for PR survival or function in the adult fly. This method is also useful for addressing cell autonomy issues in developmental mutants, such as those in which the establishment of planar cell polarity is affected.
Developmental Biology, Issue 79, Eye, Photoreceptor Cells, Genes, Developmental, neuron, visualization, degeneration, development, live imaging, Drosophila, photoreceptor, cornea neutralization, mitotic recombination
Colorimetric Paper-based Detection of Escherichia coli, Salmonella spp., and Listeria monocytogenes from Large Volumes of Agricultural Water
Authors: Bledar Bisha, Jaclyn A. Adkins, Jana C. Jokerst, Jeffrey C. Chandler, Alma Pérez-Méndez, Shannon M. Coleman, Adrian O. Sbodio, Trevor V. Suslow, Michelle D. Danyluk, Charles S. Henry, Lawrence D. Goodridge.
Institutions: University of Wyoming, Colorado State University, University of California, Davis, University of Florida, McGill University.
This protocol describes rapid colorimetric detection of Escherichia coli, Salmonella spp., and Listeria monocytogenes from large volumes (10 L) of agricultural waters. Here, water is filtered through sterile Modified Moore Swabs (MMS), which consist of a simple gauze filter enclosed in a plastic cartridge, to concentrate bacteria. Following filtration, non-selective or selective enrichments for the target bacteria are performed in the MMS. For colorimetric detection of the target bacteria, the enrichments are then assayed using paper-based analytical devices (µPADs) embedded with bacteria-indicative substrates. Each substrate reacts with target-indicative bacterial enzymes, generating colored products that can be detected visually (qualitative detection) on the µPAD. Alternatively, digital images of the reacted µPADs can be generated with common scanning or photographic devices and analyzed using ImageJ software, allowing for more objective and standardized interpretation of results. Although the biochemical screening procedures are designed to identify the aforementioned bacterial pathogens, in some cases enzymes produced by background microbiota or the degradation of the colorimetric substrates may produce a false positive. Therefore, confirmation using a more discriminatory diagnostic is needed. Nonetheless, this bacterial concentration and detection platform is inexpensive, sensitive (0.1 CFU/ml detection limit), easy to perform, and rapid (concentration, enrichment, and detection are performed within approximately 24 hr), justifying its use as an initial screening method for the microbiological quality of agricultural water.
Environmental Sciences, Issue 88, Paper-based analytical device (µPAD), Colorimetric enzymatic detection, Salmonella spp., Listeria monocytogenes, Escherichia coli, Modified Moore Swab (MMS), agricultural water, food safety, environmental microbiology
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
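As a hedged illustration of the source-reconstruction step, the sketch below computes a Tikhonov-regularized minimum-norm estimate from a random leadfield. The matrix sizes, regularization value, and data are invented placeholders standing in for the individual or age-specific head models used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 200
L = rng.normal(size=(n_sensors, n_sources))       # leadfield (would come from the head model)

j_true = np.zeros(n_sources)
j_true[40] = 10.0                                  # one active source
y = L @ j_true + rng.normal(0, 0.2, n_sensors)     # sensor data with noise

# Tikhonov-regularized minimum-norm estimate: j_hat = L^T (L L^T + lam^2 I)^{-1} y
lam2 = 10.0
j_hat = L.T @ np.linalg.solve(L @ L.T + lam2 * np.eye(n_sensors), y)
print("strongest estimated source (should recover index 40 here):",
      int(np.argmax(np.abs(j_hat))))
```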
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
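A minimal sketch of the design-of-experiments machinery referred to above: a two-level full factorial design and a main-effects fit. The factors and responses below are invented placeholders, not the 2G12/DsRed data from the article.

```python
import itertools
import numpy as np

# Two-level full factorial design in three coded factors (-1/+1), e.g.
# incubation temperature, inoculum density, and incubation time (illustrative).
levels = [-1, 1]
design = np.array(list(itertools.product(levels, repeat=3)), dtype=float)

# Hypothetical responses (e.g. fluorescent reporter signal) for the 8 runs
y = np.array([12.0, 15.1, 13.2, 16.8, 18.5, 22.9, 19.1, 24.2])

# Main-effects model: y = b0 + b1*x1 + b2*x2 + b3*x3, fit by least squares
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and main effects:", np.round(coef, 2))
```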
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data are analyzed in several ways, i.e. by voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
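The voxelwise metric named above, fractional anisotropy, has a closed form in the diffusion-tensor eigenvalues. A small sketch (the example eigenvalues are illustrative, not measured data):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA = sqrt(3/2) * ||lambda - mean||_2 / ||lambda||_2 from tensor eigenvalues."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()                                  # mean diffusivity
    num = np.sqrt(((l - md) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Example eigenvalues (in 10^-3 mm^2/s): anisotropic WM-like vs nearly isotropic tensor
print(round(fractional_anisotropy([1.7, 0.3, 0.2]), 3))    # high FA
print(round(fractional_anisotropy([0.8, 0.75, 0.78]), 3))  # low FA
```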
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way than normal, by building up an image molecule-by-molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of between 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software it is possible to obtain a great deal of information about image quality and resolution. Using these metrics it is then possible to optimize the imaging procedure from the optics, to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition and density related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
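Two of the image-quality metrics discussed above can be approximated from simple formulas: a Thompson-style localization-precision estimate and a label-density (Nyquist) resolution limit. The sketch below uses assumed photon counts, pixel sizes, and densities, and is only a rough guide, not rainSTORM's implementation.

```python
import numpy as np

def thompson_precision(n_photons, psf_sigma_nm=150.0, pixel_nm=100.0, bg_photons=5.0):
    """Approximate 2D localization precision (Thompson/Larson/Webb-style estimate)."""
    s2, a2, n, b2 = psf_sigma_nm ** 2, pixel_nm ** 2, n_photons, bg_photons ** 2
    var = s2 / n + (a2 / 12.0) / n + 8 * np.pi * s2 ** 2 * b2 / (a2 * n ** 2)
    return np.sqrt(var)

def nyquist_resolution(localizations_per_um2):
    """Label-density limited resolution (~2 / sqrt(density)) in nm, for 2D images."""
    return 2.0 / np.sqrt(localizations_per_um2) * 1000.0

print(f"precision @ 500 photons: {thompson_precision(500):.1f} nm")
print(f"Nyquist-limited resolution @ 2000 loc/um^2: {nyquist_resolution(2000):.1f} nm")
```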
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
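As a hedged illustration of the oriented-pattern analysis described above, the sketch below builds a small Gabor filter bank and recovers the orientation of a synthetic line by picking the best-responding kernel. The parameters are arbitrary, and this is not the authors' phase-portrait pipeline.

```python
import numpy as np

def gabor_kernel(size, theta, lam=8.0, sigma=3.0, gamma=0.5, psi=0.0):
    """Real Gabor kernel; the carrier varies along the rotated x-axis (xr)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

# Synthetic patch containing a line at ~45 degrees (the image diagonal)
patch = np.zeros((21, 21))
for i in range(21):
    patch[i, i] = 1.0

# Filter-bank response: correlate each kernel with the patch and pick the strongest.
thetas = np.deg2rad(np.arange(0, 180, 15))
responses = [np.sum(patch * gabor_kernel(21, t)) for t in thetas]
best = thetas[int(np.argmax(np.abs(responses)))]
# A line is matched best when it runs along the kernel's elongated yr axis,
# i.e. at theta + 90 degrees, so report that as the texture orientation.
print("estimated line orientation (deg):", (np.degrees(best) + 90) % 180)
```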
High Sensitivity 5-hydroxymethylcytosine Detection in Balb/C Brain Tissue
Authors: Theodore Davis, Romualdas Vaisvila.
Institutions: New England Biolabs.
DNA hydroxymethylation is a long-known modification of DNA, but has recently become a focus in epigenetic research. Mammalian DNA is enzymatically modified at the 5th carbon position of cytosine (C) residues to 5-mC, predominately in the context of CpG dinucleotides. 5-mC is amenable to enzymatic oxidation to 5-hmC by the Tet family of enzymes, which are believed to be involved in development and disease. Currently, the biological role of 5-hmC is not fully understood, but it is generating a lot of interest due to its potential as a biomarker. This is due to several groundbreaking studies identifying 5-hydroxymethylcytosine in mouse embryonic stem (ES) and neuronal cells. Research techniques, including bisulfite sequencing methods, are unable to easily distinguish between 5-mC and 5-hmC. A few protocols exist that can measure global amounts of 5-hydroxymethylcytosine in the genome, including liquid chromatography coupled with mass spectrometry analysis or thin layer chromatography of single nucleosides digested from genomic DNA. Antibodies that target 5-hydroxymethylcytosine also exist, which can be used for dot blot analysis, immunofluorescence, or precipitation of hydroxymethylated DNA, but these antibodies do not have single-base resolution. In addition, resolution depends on the size of the immunoprecipitated DNA and, for microarray experiments, on probe design. Since it is unknown exactly where 5-hydroxymethylcytosine exists in the genome or what its role in epigenetic regulation is, new techniques are required that can identify locus-specific hydroxymethylation. The EpiMark 5-hmC and 5-mC Analysis Kit provides a solution for distinguishing between these two modifications at specific loci. The EpiMark 5-hmC and 5-mC Analysis Kit is a simple and robust method for the identification and quantitation of 5-methylcytosine and 5-hydroxymethylcytosine within a specific DNA locus. This enzymatic approach utilizes the differential methylation sensitivity of the isoschizomers MspI and HpaII in a simple 3-step protocol. Genomic DNA of interest is treated with T4-BGT, adding a glucose moiety to 5-hydroxymethylcytosine. This reaction is sequence-independent; therefore all 5-hmC will be glucosylated, while unmodified or 5-mC-containing DNA will not be affected. This glucosylation is then followed by restriction endonuclease digestion. MspI and HpaII recognize the same sequence (CCGG) but are sensitive to different methylation states. HpaII cleaves only a completely unmodified site: any modification (5-mC, 5-hmC or 5-ghmC) at either cytosine blocks cleavage. MspI recognizes and cleaves 5-mC and 5-hmC, but not 5-ghmC. The third part of the protocol is interrogation of the locus by PCR. As little as 20 ng of input DNA can be used. Amplification of the experimental (glucosylated and digested) and control (mock glucosylated and digested) target DNA with primers flanking a CCGG site of interest (100-200 bp) is performed. If the CpG site contains 5-hydroxymethylcytosine, a band is detected after glucosylation and digestion, but not in the non-glucosylated control reaction. Real-time PCR gives an approximation of how much hydroxymethylcytosine is present at this particular site. In this experiment, we analyze the amount of 5-hydroxymethylcytosine in a mouse Balb/C brain sample by end-point PCR.
Neuroscience, Issue 48, EpiMark, Epigenetics, 5-hydroxymethylcytosine, 5-methylcytosine, methylation, hydroxymethylation
A Simple Stimulatory Device for Evoking Point-like Tactile Stimuli: A Searchlight for LFP to Spike Transitions
Authors: Antonio G. Zippo, Sara Nencini, Gian Carlo Caramenti, Maurizio Valente, Riccardo Storchi, Gabriele E.M. Biella.
Institutions: National Research Council, University of Manchester.
Current neurophysiological research aims to develop methodologies to investigate the signal route from neuron to neuron, namely the transitions from spikes to Local Field Potentials (LFPs) and from LFPs to spikes. LFPs have a complex dependence on spike activity, and their relation is still poorly understood1. The elucidation of these signal relations would be helpful both for clinical diagnostics (e.g. stimulation paradigms for Deep Brain Stimulation) and for a deeper comprehension of neural coding strategies in normal and pathological conditions (e.g. epilepsy, Parkinson's disease, chronic pain). To this aim, one has to solve technical issues related to stimulation devices, stimulation paradigms and computational analyses. Therefore, a custom-made stimulation device was developed in order to deliver stimuli well regulated in space and time that do not incur mechanical resonance. Subsequently, as an exemplification, a set of reliable LFP-spike relationships was extracted. The performance of the device was investigated by extracellular recordings of spike and LFP responses to the applied stimuli, made jointly from the rat primary somatosensory cortex. Then, by means of a multi-objective optimization strategy, a predictive model for spike occurrence based on LFPs was estimated. The application of this paradigm shows that the device is adequately suited to deliver high-frequency tactile stimulation, outperforming common piezoelectric actuators. As proof of the efficacy of the device, the following results are presented: 1) the timing and reliability of LFP responses closely match the spike responses; 2) LFPs are sensitive to the stimulation history and capture not only the average response but also the trial-to-trial fluctuations in the spike activity; and, finally, 3) by using the LFP signal it is possible to estimate a range of predictive models that capture different aspects of the spike activity.
Neuroscience, Issue 85, LFP, spike, tactile stimulus, Multiobjective function, Neuron, somatosensory cortex
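The final point above, estimating a predictive model for spikes from the LFP, can be sketched with a simple regularized linear readout on synthetic data. The simulation, window length, and regularization are invented assumptions; this is not the multi-objective optimization used in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "LFP": low-pass filtered noise; spikes are made more likely when the
# LFP is in its negative phase, a toy stand-in for a real LFP-spike relationship.
n = 5000
lfp = np.convolve(rng.normal(size=n), np.ones(20) / 20, mode="same")
p_spike = 1.0 / (1.0 + np.exp(6.0 * lfp))              # logistic link on the LFP value
spikes = (rng.random(n) < 0.2 * p_spike).astype(float)

# Predict spiking from a window of past LFP samples with ridge-regularized least
# squares, then evaluate the readout on held-out data.
lags = 10
X = np.column_stack([lfp[lags - k - 1:n - k - 1] for k in range(lags)])
y = spikes[lags:]
n_train = 4000
Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
w = np.linalg.solve(Xtr.T @ Xtr + 1e-2 * np.eye(lags), Xtr.T @ ytr)
pred = Xte @ w
print("held-out correlation (prediction vs spikes):",
      round(float(np.corrcoef(pred, yte)[0, 1]), 3))
```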

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.