Pubmed Article
Formal comparison of dual-parameter temporal discounting models in controls and pathological gamblers.
PLoS ONE
Temporal or delay discounting refers to the phenomenon that the value of a reward is discounted as a function of time to delivery. A range of models have been proposed that approximate the shape of the discount curve describing the relationship between subjective value and time. Recent evidence suggests that more than one free parameter may be required to accurately model human temporal discounting data. Nonetheless, many temporal discounting studies in psychiatry, psychology and neuroeconomics still apply single-parameter models, despite their oftentimes poor fit to single-subject data. Previous comparisons of temporal discounting models have either not taken model complexity into account, or have overlooked particular models. Here we apply model comparison techniques in a large sample of temporal discounting datasets using several discounting models employed in the past. Among the models examined, an exponential-power model from behavioural economics (CS model, Ebert & Prelec 2007) provided the best fit to human laboratory discounting data. Inter-parameter correlations for the winning model were moderate, whereas they were substantial for other dual-parameter models examined. Analyses of previous group and context effects on temporal discounting with the winning model provided additional theoretical insights. The CS model may be a useful tool in future psychiatry, psychology and neuroscience work on inter-temporal choice.
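As an illustration of the winning model's functional form, the sketch below fits the two-parameter constant-sensitivity (CS) discount function, d(t) = exp(-(a·t)^b), to synthetic per-subject indifference points with nonlinear least squares; the data and starting values are made up for illustration, and a full model comparison would additionally penalize complexity (e.g., via AIC or BIC).

```python
# Sketch: fitting the two-parameter constant-sensitivity (CS) discount model
# d(t) = exp(-(a*t)**b) to per-subject indifference-point data.
# The data below are synthetic; 'a' and 'b' are the two free parameters.
import numpy as np
from scipy.optimize import curve_fit

def cs_discount(t, a, b):
    """Constant-sensitivity discount fraction at delay t (exponential-power form)."""
    return np.exp(-(a * t) ** b)

# Example data: delays (days) and observed subjective-value / amount ratios.
delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
indiff = np.array([0.95, 0.85, 0.65, 0.45, 0.35, 0.25])

params, cov = curve_fit(cs_discount, delays, indiff, p0=[0.01, 0.8],
                        bounds=([1e-6, 1e-3], [10.0, 10.0]))
a_hat, b_hat = params
resid = indiff - cs_discount(delays, *params)
print(f"a = {a_hat:.4f}, b = {b_hat:.4f}, SSE = {np.sum(resid**2):.4f}")
```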
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Published: 08-06-2013
ABSTRACT
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique comprises five main steps: preprocessing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model7 to a conventional model9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
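The statistical comparison step contrasts a model with a time-varying term against a conventional time-invariant model at each voxel. The sketch below illustrates the general idea with an extra-sum-of-squares F-test between nested linear models; the design matrices, basis function, and threshold are placeholders and not the actual lp-ntPET implementation.

```python
# Sketch: extra-sum-of-squares F-test comparing a conventional (reduced) linear
# model against an augmented model with an extra time-varying term, per voxel.
# Design matrices and regressors here are placeholders, not the lp-ntPET basis.
import numpy as np
from scipy import stats

def fit_ssr(X, y):
    """Least-squares fit; return sum of squared residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

def f_test_nested(X_reduced, X_full, y):
    n = len(y)
    p_r, p_f = X_reduced.shape[1], X_full.shape[1]
    ssr_r, ssr_f = fit_ssr(X_reduced, y), fit_ssr(X_full, y)
    F = ((ssr_r - ssr_f) / (p_f - p_r)) / (ssr_f / (n - p_f))
    p_value = 1.0 - stats.f.cdf(F, p_f - p_r, n - p_f)
    return F, p_value

# Example: 60 PET frames; the full model adds one gamma-variate-shaped regressor
# that switches on mid-scan, standing in for a transient dopamine response.
rng = np.random.default_rng(0)
t = np.arange(60, dtype=float)
X_red = np.column_stack([np.ones_like(t), t, np.exp(-t / 20.0)])
activation = np.where(t > 30, (t - 30) * np.exp(-(t - 30) / 10.0), 0.0)
X_full = np.column_stack([X_red, activation])
tac = X_red @ [1.0, 0.05, 2.0] + 0.5 * activation + rng.normal(0, 0.2, t.size)

F, p = f_test_nested(X_red, X_full, tac)
print(f"F = {F:.2f}, p = {p:.4f}  (voxel kept in the mask if p is below the chosen threshold)")
```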
24 Related JoVE Articles
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise, participants would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine if participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns in the sequence that could be anywhere from 3 items prior to the target noun position to 3 items following the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
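A minimal sketch of how one such dual-task RSVP trial could be assembled programmatically is shown below; the word lists, category mapping, and positions are illustrative placeholders, and actual stimulus timing would be handled by presentation software.

```python
# Sketch: building one dual-task RSVP trial -- an 8-noun stream with an optional
# category target, a chosen SOA, and 2AFC memory probes for the nouns
# immediately before/after the target plus a random control position.
# Word lists and categories are illustrative placeholders.
import random

NOUNS = ["table", "salmon", "violin", "hammer", "tulip", "sparrow",
         "kettle", "maple", "trumpet", "beetle", "anchor", "carrot"]
TARGETS = {"fish": "salmon", "bird": "sparrow", "insect": "beetle"}

def make_trial(category, soa_ms=120, target_present=True, stream_len=8):
    target_word = TARGETS[category]
    distractors = [w for w in NOUNS if w not in TARGETS.values()]
    stream = random.sample(distractors, stream_len)
    target_pos = None
    probes = {}
    if target_present:
        target_pos = random.randint(2, stream_len - 3)  # keep neighbors inside the stream
        stream[target_pos] = target_word
        probes["before_target"] = stream[target_pos - 1]
        probes["after_target"] = stream[target_pos + 1]
        other = [i for i in range(stream_len)
                 if abs(i - target_pos) > 1 and i != target_pos]
        probes["control"] = stream[random.choice(other)]
    return {"category": category, "soa_ms": soa_ms, "stream": stream,
            "target_pos": target_pos, "probes": probes}

print(make_trial("bird", soa_ms=240))
```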
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
A Comprehensive Protocol for Manual Segmentation of the Medial Temporal Lobe Structures
Authors: Matthew Moore, Yifan Hu, Sarah Woo, Dylan O'Hearn, Alexandru D. Iordan, Sanda Dolcos, Florin Dolcos.
Institutions: University of Illinois Urbana-Champaign.
The present paper describes a comprehensive protocol for manual tracing of the set of brain regions comprising the medial temporal lobe (MTL): amygdala, hippocampus, and the associated parahippocampal regions (perirhinal, entorhinal, and parahippocampal proper). Unlike most other tracing protocols available, typically focusing on certain MTL areas (e.g., amygdala and/or hippocampus), the integrative perspective adopted by the present tracing guidelines allows for clear localization of all MTL subregions. By integrating information from a variety of sources, including extant tracing protocols separately targeting various MTL structures, histological reports, and brain atlases, and with the complement of illustrative visual materials, the present protocol provides an accurate, intuitive, and convenient guide for understanding the MTL anatomy. The need for such tracing guidelines is also emphasized by illustrating possible differences between automatic and manual segmentation protocols. This knowledge can be applied toward research involving not only structural MRI investigations but also structural-functional colocalization and fMRI signal extraction from anatomically defined ROIs, in healthy and clinical groups alike.
Neuroscience, Issue 89, Anatomy, Segmentation, Medial Temporal Lobe, MRI, Manual Tracing, Amygdala, Hippocampus, Perirhinal Cortex, Entorhinal Cortex, Parahippocampal Cortex
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
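As a rough illustration of the harvesting step, the sketch below (in Python rather than the MATLAB-based language described) aggregates a time-stamped event record into daily per-subject event counts; the record layout and event codes are hypothetical.

```python
# Sketch: summarizing a time-stamped behavioral event record into daily counts
# per subject, the kind of aggregation a periodic "harvest" step might perform.
# Event codes and record layout are hypothetical, not the system's actual format.
from collections import defaultdict
from datetime import datetime

# (timestamp, subject_id, event_code) tuples, e.g. drawn from the raw data trail.
events = [
    ("2023-05-01T02:14:05", "mouse01", "head_entry_hopper2"),
    ("2023-05-01T02:14:07", "mouse01", "pellet_delivered"),
    ("2023-05-01T13:40:12", "mouse02", "head_entry_hopper1"),
    ("2023-05-02T01:02:44", "mouse01", "head_entry_hopper1"),
]

daily = defaultdict(lambda: defaultdict(int))
for ts, subject, code in events:
    day = datetime.fromisoformat(ts).date()
    daily[(subject, day)][code] += 1

for (subject, day), counts in sorted(daily.items()):
    print(subject, day, dict(counts))
```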
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no to more continuous variables such as latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
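A short sketch of how the binary and continuous response measures mentioned above might be summarized across trials is given below; the data layout is a hypothetical example.

```python
# Sketch: summarizing PER conditioning data -- proportion of bees extending the
# proboscis on each trial (binary measure) and mean extension latency among
# responders (continuous measure). The records are hypothetical example data.
import statistics

# Each record: (subject, trial, responded (0/1), latency_s or None)
records = [
    ("bee01", 1, 0, None), ("bee01", 2, 1, 2.4), ("bee01", 3, 1, 1.1),
    ("bee02", 1, 0, None), ("bee02", 2, 0, None), ("bee02", 3, 1, 1.8),
]

trials = sorted({r[1] for r in records})
for trial in trials:
    on_trial = [r for r in records if r[1] == trial]
    prop = sum(r[2] for r in on_trial) / len(on_trial)
    latencies = [r[3] for r in on_trial if r[2] == 1]
    mean_lat = statistics.mean(latencies) if latencies else float("nan")
    print(f"trial {trial}: P(PER) = {prop:.2f}, mean latency = {mean_lat:.2f} s")
```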
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Systemic Injection of Neural Stem/Progenitor Cells in Mice with Chronic EAE
Authors: Matteo Donegà, Elena Giusto, Chiara Cossetti, Julia Schaeffer, Stefano Pluchino.
Institutions: University of Cambridge, UK.
Neural stem/precursor cells (NPCs) are a promising stem cell source for transplantation approaches aiming at brain repair or restoration in regenerative neurology. This directive has arisen from the extensive evidence that brain repair is achieved after focal or systemic NPC transplantation in several preclinical models of neurological diseases. These experimental data have identified the cell delivery route as one of the main hurdles of restorative stem cell therapies for brain diseases that requires urgent assessment. Intraparenchymal stem cell grafting represents a logical approach to those pathologies characterized by isolated and accessible brain lesions such as spinal cord injuries and Parkinson's disease. Unfortunately, this principle is poorly applicable to conditions characterized by a multifocal, inflammatory and disseminated (both in time and space) nature, including multiple sclerosis (MS). As such, brain targeting by systemic NPC delivery has become a minimally invasive and therapeutically efficacious protocol to deliver cells to the brain and spinal cord of rodents and nonhuman primates affected by experimental chronic inflammatory damage of the central nervous system (CNS). This alternative method of cell delivery relies on NPC pathotropism, specifically their innate capacity to (i) sense the environment via functional cell adhesion molecules and inflammatory cytokine and chemokine receptors; (ii) cross the leaking anatomical barriers after intravenous (i.v.) or intracerebroventricular (i.c.v.) injection; (iii) accumulate at the level of multiple perivascular site(s) of inflammatory brain and spinal cord damage; and (iv) exert remarkable tissue trophic and immune regulatory effects on different host target cells in vivo. Here we describe the methods that we have developed for the i.v. and i.c.v. delivery of syngeneic NPCs in mice with experimental autoimmune encephalomyelitis (EAE), as a model of chronic CNS inflammatory demyelination, and envisage the systemic stem cell delivery as a valuable technique for the selective targeting of the inflamed brain in regenerative neurology.
Immunology, Issue 86, Somatic neural stem/precursor cells, neurodegenerative disorders, regenerative medicine, multiple sclerosis, experimental autoimmune encephalomyelitis, systemic delivery, intravenous, intracerebroventricular
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
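The sketch below illustrates the underlying DoE logic on a toy example: a full-factorial design for three coded factors and a least-squares fit of main effects plus two-way interactions. Factor names and the simulated response are hypothetical, and real studies would use dedicated DoE software for optimal designs and design augmentation as described above.

```python
# Sketch: a full-factorial design for three coded factors (-1/+1) and an
# ordinary least-squares fit of main effects plus two-way interactions.
# Factor names and the simulated response are hypothetical.
import itertools
import numpy as np

factors = ["incubation_temp", "plant_age", "promoter_variant"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

# Simulated response (e.g., fluorescent reporter yield) with noise and one interaction.
rng = np.random.default_rng(1)
true_effects = np.array([3.0, 1.5, -2.0])
response = (10 + design @ true_effects
            + 0.8 * design[:, 0] * design[:, 1]
            + rng.normal(0, 0.5, len(design)))

# Model matrix: intercept, main effects, two-way interactions.
cols = [np.ones(len(design))] + [design[:, i] for i in range(3)] + \
       [design[:, i] * design[:, j] for i, j in itertools.combinations(range(3), 2)]
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, response, rcond=None)

names = ["intercept"] + factors + [" x ".join(p) for p in itertools.combinations(factors, 2)]
for name, b in zip(names, beta):
    print(f"{name:40s} {b:+.2f}")
```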
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Analysis of Oxidative Stress in Zebrafish Embryos
Authors: Vera Mugoni, Annalisa Camporeale, Massimo M. Santoro.
Institutions: University of Torino, Vesalius Research Center, VIB.
High levels of reactive oxygen species (ROS) may cause a change of cellular redox state towards oxidative stress condition. This situation causes oxidation of molecules (lipid, DNA, protein) and leads to cell death. Oxidative stress also impacts the progression of several pathological conditions such as diabetes, retinopathies, neurodegeneration, and cancer. Thus, it is important to define tools to investigate oxidative stress conditions not only at the level of single cells but also in the context of whole organisms. Here, we consider the zebrafish embryo as a useful in vivo system to perform such studies and present a protocol to measure in vivo oxidative stress. Taking advantage of fluorescent ROS probes and zebrafish transgenic fluorescent lines, we develop two different methods to measure oxidative stress in vivo: i) a “whole embryo ROS-detection method” for qualitative measurement of oxidative stress and ii) a “single-cell ROS detection method” for quantitative measurements of oxidative stress. Herein, we demonstrate the efficacy of these procedures by increasing oxidative stress in tissues by oxidant agents and physiological or genetic methods. This protocol is amenable for forward genetic screens and it will help address cause-effect relationships of ROS in animal models of oxidative stress-related pathologies such as neurological disorders and cancer.
Developmental Biology, Issue 89, Danio rerio, zebrafish embryos, endothelial cells, redox state analysis, oxidative stress detection, in vivo ROS measurements, FACS (fluorescence activated cell sorter), molecular probes
Quantification of Global Diastolic Function by Kinematic Modeling-based Analysis of Transmitral Flow via the Parametrized Diastolic Filling Formalism
Authors: Sina Mossahebi, Simeng Zhu, Howard Chen, Leonid Shmuylovich, Erina Ghosh, Sándor J. Kovács.
Institutions: Washington University in St. Louis.
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed-upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy ½kxo², maximum A-V pressure gradient kxo, load independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified.
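A minimal sketch of the PDF fitting idea is given below, assuming the inertial term is normalized to 1 so that the underdamped E-wave contour is v(t) = (xo·k/ω)·exp(-c·t/2)·sin(ω·t) with ω = sqrt(k - c²/4); the E-wave here is simulated, and clinical analysis would of course use measured Doppler data.

```python
# Sketch: fitting a (simulated) Doppler E-wave velocity contour with the
# underdamped damped-oscillator solution used by the PDF formalism,
#   v(t) = (xo*k/w) * exp(-c*t/2) * sin(w*t),  w = sqrt(k - c**2/4),
# with the inertial term normalized to 1. Returns stiffness k, damping c, load xo.
import numpy as np
from scipy.optimize import curve_fit

def e_wave(t, xo, c, k):
    w = np.sqrt(np.maximum(k - c**2 / 4.0, 1e-9))  # underdamped angular frequency
    return (xo * k / w) * np.exp(-c * t / 2.0) * np.sin(w * t)

# Simulated E-wave: ~200 ms duration sampled at 1 kHz, with measurement noise.
t = np.linspace(0, 0.2, 200)
rng = np.random.default_rng(2)
v_true = e_wave(t, xo=0.10, c=20.0, k=350.0)
v_meas = v_true + rng.normal(0, 0.02, t.size)

(p_xo, p_c, p_k), _ = curve_fit(e_wave, t, v_meas, p0=[0.08, 15.0, 300.0],
                                bounds=([0.01, 5.0, 100.0], [1.0, 35.0, 800.0]))
stored_energy = 0.5 * p_k * p_xo**2        # 1/2 k xo^2
peak_gradient_index = p_k * p_xo           # k xo
print(f"xo = {p_xo:.3f}, c = {p_c:.1f}, k = {p_k:.1f}, "
      f"1/2 k xo^2 = {stored_energy:.3f}, k xo = {peak_gradient_index:.2f}")
```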
Bioengineering, Issue 91, cardiovascular physiology, ventricular mechanics, diastolic function, mathematical modeling, Doppler echocardiography, hemodynamics, biomechanics
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
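As a toy illustration of the source-reconstruction step, the sketch below computes a regularized minimum-norm estimate from a random lead-field matrix; dimensions, noise level, and regularization are arbitrary, and real analyses would use a dedicated package together with the individual or age-matched head model.

```python
# Sketch: a regularized minimum-norm estimate, mapping sensor data y to
# source amplitudes via  x_hat = L.T @ inv(L @ L.T + lambda^2 * I) @ y,
# where L is the lead-field (gain) matrix derived from the head model.
# Dimensions, noise level, and regularization are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources, n_times = 64, 500, 100

L = rng.normal(0, 1.0, (n_sensors, n_sources))              # lead-field from head model
x_true = np.zeros((n_sources, n_times))
x_true[42] = np.sin(np.linspace(0, 4 * np.pi, n_times))     # one active source
y = L @ x_true + rng.normal(0, 0.5, (n_sensors, n_times))   # sensor data + noise

lam = 3.0                                                   # regularization parameter
G = L @ L.T + lam**2 * np.eye(n_sensors)
x_hat = L.T @ np.linalg.solve(G, y)                         # minimum-norm inverse

peak_source = np.argmax(np.linalg.norm(x_hat, axis=1))
print("estimated most active source index:", peak_source)
```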
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Bladder Smooth Muscle Strip Contractility as a Method to Evaluate Lower Urinary Tract Pharmacology
Authors: F. Aura Kullmann, Stephanie L. Daugherty, William C. de Groat, Lori A. Birder.
Institutions: University of Pittsburgh School of Medicine.
We describe an in vitro method to measure bladder smooth muscle contractility, and its use for investigating physiological and pharmacological properties of the smooth muscle as well as changes induced by pathology. This method provides critical information for understanding bladder function while overcoming major methodological difficulties encountered in in vivo experiments, such as surgical and pharmacological manipulations that affect stability and survival of the preparations, the use of human tissue, and/or the use of expensive chemicals. It also provides a way to investigate the properties of each bladder component (i.e. smooth muscle, mucosa, nerves) in healthy and pathological conditions. The urinary bladder is removed from an anesthetized animal, placed in Krebs solution and cut into strips. Strips are placed into a chamber filled with warm Krebs solution. One end is attached to an isometric tension transducer to measure contraction force, the other end is attached to a fixed rod. Tissue is stimulated by directly adding compounds to the bath or by electric field stimulation electrodes that activate nerves, similar to triggering bladder contractions in vivo. We demonstrate the use of this method to evaluate spontaneous smooth muscle contractility during development and after an experimental spinal cord injury, the nature of neurotransmission (transmitters and receptors involved), factors involved in modulation of smooth muscle activity, the role of individual bladder components, and species and organ differences in response to pharmacological agents. Additionally, it could be used for investigating intracellular pathways involved in contraction and/or relaxation of the smooth muscle, drug structure-activity relationships and evaluation of transmitter release. The in vitro smooth muscle contractility method has been used extensively for over 50 years, and has provided data that significantly contributed to our understanding of bladder function as well as to pharmaceutical development of compounds currently used clinically for bladder management.
Medicine, Issue 90, Krebs, species differences, in vitro, smooth muscle contractility, neural stimulation
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Authors: Mirella Vivoli, Halina R. Novak, Jennifer A. Littlechild, Nicholas J. Harmer.
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
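A simplified sketch of one possible workflow is shown below: fit each (simulated) melt curve with a Boltzmann sigmoid to obtain Tm, then fit the Tm shift versus ligand concentration with a single-site saturation model to obtain an apparent Kd. This is an estimate for illustration, not the more rigorous thermodynamic treatments discussed in the paper.

```python
# Sketch: estimate melting temperatures (Tm) from simulated DSF melt curves with
# a Boltzmann sigmoid, then fit delta-Tm vs ligand concentration with a
# single-site saturation model to get an *apparent* Kd. Simplified illustration.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, F_low, F_high, Tm, slope):
    return F_low + (F_high - F_low) / (1.0 + np.exp((Tm - T) / slope))

def saturation(conc, dTm_max, Kd_app):
    return dTm_max * conc / (Kd_app + conc)

T = np.linspace(30, 80, 101)
ligand_uM = np.array([0, 5, 10, 25, 50, 100, 250, 500], float)
rng = np.random.default_rng(4)

tms = []
for c in ligand_uM:
    true_tm = 52.0 + 6.0 * c / (40.0 + c)          # simulated ligand stabilization
    fluo = boltzmann(T, 0.1, 1.0, true_tm, 1.5) + rng.normal(0, 0.01, T.size)
    (f_lo, f_hi, tm, sl), _ = curve_fit(boltzmann, T, fluo, p0=[0.1, 1.0, 55.0, 2.0])
    tms.append(tm)

dTm = np.array(tms) - tms[0]
(dtm_max, kd_app), _ = curve_fit(saturation, ligand_uM, dTm, p0=[5.0, 50.0])
print(f"apparent Kd ~ {kd_app:.1f} uM, max Tm shift ~ {dtm_max:.1f} K")
```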
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Flexible Colonoscopy in Mice to Evaluate the Severity of Colitis and Colorectal Tumors Using a Validated Endoscopic Scoring System
Authors: Tomohiro Kodani, Alex Rodriguez-Palacios, Daniele Corridoni, Loris Lopetuso, Luca Di Martino, Brian Marks, James Pizarro, Theresa Pizarro, Amitabh Chak, Fabio Cominelli.
Institutions: Case Western Reserve University School of Medicine, Cleveland.
The use of modern endoscopy for research purposes has greatly facilitated our understanding of gastrointestinal pathologies. In particular, experimental endoscopy has been highly useful for studies that require repeated assessments in a single laboratory animal, such as those evaluating mechanisms of chronic inflammatory bowel disease and the progression of colorectal cancer. However, the methods used across studies are highly variable. At least three endoscopic scoring systems have been published for murine colitis and published protocols for the assessment of colorectal tumors fail to address the presence of concomitant colonic inflammation. This study develops and validates a reproducible endoscopic scoring system that integrates evaluation of both inflammation and tumors simultaneously. This novel scoring system has three major components: 1) assessment of the extent and severity of colorectal inflammation (based on perianal findings, transparency of the wall, mucosal bleeding, and focal lesions), 2) quantitative recording of tumor lesions (grid map and bar graph), and 3) numerical sorting of clinical cases by their pathological and research relevance based on decimal units with assigned categories of observed lesions and endoscopic complications (decimal identifiers). The video and manuscript presented herein were prepared, following IACUC-approved protocols, to allow investigators to score their own experimental mice using a well-validated and highly reproducible endoscopic methodology, with the system option to differentiate distal from proximal endoscopic colitis (D-PECS).
Medicine, Issue 80, Crohn's disease, ulcerative colitis, colon cancer, Clostridium difficile, SAMP mice, DSS/AOM-colitis, decimal scoring identifier
Waste Water Derived Electroactive Microbial Biofilms: Growth, Maintenance, and Basic Characterization
Authors: Carla Gimkiewicz, Falk Harnisch.
Institutions: UFZ - Helmholtz-Centre for Environmental Research.
The growth of anodic electroactive microbial biofilms from waste water inocula in a fed-batch reactor is demonstrated using a three-electrode setup controlled by a potentiostat. The potentiostat allows exact adjustment of the electrode potential and ensures reproducible microbial culturing conditions. During growth, the current production is monitored using chronoamperometry (CA). Based on these data, the maximum current density (jmax) and the coulombic efficiency (CE) are discussed as measures for characterizing the bioelectrocatalytic performance. Cyclic voltammetry (CV), a nondestructive, i.e. noninvasive, method, is used to study the extracellular electron transfer (EET) of electroactive bacteria. CV measurements are performed on anodic biofilm electrodes in the presence of the microbial substrate, i.e. turnover conditions, and in the absence of the substrate, i.e. nonturnover conditions, using different scan rates. Subsequently, data analysis is exemplified and fundamental thermodynamic parameters of the microbial EET are derived and explained: peak potential (Ep), peak current density (jp), formal potential (Ef) and peak separation (ΔEp). Additionally, the limits of the method and state-of-the-art data analysis are addressed. This video article thus provides a guide to the basic experimental steps and the fundamental data analysis.
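The sketch below illustrates how jmax and CE can be derived from a chronoamperometric trace, assuming acetate as substrate (8 electrons per mole on complete oxidation); the electrode area, substrate amount, and current trace are placeholder values.

```python
# Sketch: maximum current density (jmax) and coulombic efficiency (CE) from a
# chronoamperometry trace for an acetate-fed anodic biofilm. Electrode area,
# substrate amount, and the simulated current trace are illustrative values.
import numpy as np

F = 96485.0            # C per mol electrons (Faraday constant)
Z_ACETATE = 8          # mol electrons per mol acetate on complete oxidation
area_cm2 = 1.0         # projected anode area
acetate_mol = 2.0e-4   # acetate added per fed-batch cycle (mol)

# Simulated CA data: time (s) and current (A) over one 48 h feeding cycle.
t = np.linspace(0, 48 * 3600, 2000)
current = 1.2e-3 * (t / (t + 6 * 3600)) * np.exp(-t / (40 * 3600))

j_max_mA_cm2 = 1e3 * current.max() / area_cm2
# Harvested charge via trapezoidal integration of the current trace.
charge_C = float(np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t)))
ce = charge_C / (acetate_mol * Z_ACETATE * F)               # coulombic efficiency
print(f"jmax = {j_max_mA_cm2:.2f} mA/cm^2, CE = {100 * ce:.1f} %")
```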
Environmental Sciences, Issue 82, Electrochemistry, Microbial fuel cell, microbial bioelectrochemical system, cyclic voltammetry, electroactive bacteria, microbial bioelectrochemistry, bioelectrocatalysis
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Authors: Amir Karniel, Guy Avraham, Bat-Chen Peles, Shelly Levy-Tzedek, Ilana Nisky.
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test, for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
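A small sketch of the psychometric-curve step: fit a logistic function to the proportion of "more human-like" answers as a function of the human weight in the stimulus blend and read off the point of subjective equality (PSE) at 50%. The responses below are simulated, and the MHLG would then be derived from the PSE as described above.

```python
# Sketch: fit a logistic psychometric function to simulated forced-choice data
# (proportion of "more human-like" answers vs. human weight in the blend) and
# extract the point of subjective equality (PSE), where the proportion is 0.5.
import numpy as np
from scipy.optimize import curve_fit

def logistic(w, pse, slope):
    return 1.0 / (1.0 + np.exp(-(w - pse) / slope))

human_weight = np.linspace(0.0, 1.0, 9)          # weight of the human trajectory
p_humanlike = np.array([0.05, 0.08, 0.15, 0.30, 0.55, 0.72, 0.88, 0.93, 0.97])

(pse, slope), _ = curve_fit(logistic, human_weight, p_humanlike, p0=[0.5, 0.1])
print(f"PSE = {pse:.3f}, slope = {slope:.3f}")
```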
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
Targeted Training of Ultrasonic Vocalizations in Aged and Parkinsonian Rats
Authors: Aaron M. Johnson, Emerald J. Doll, Laura M. Grant, Lauren Ringel, Jaime N. Shier, Michelle R. Ciucci.
Institutions: University of Wisconsin.
Voice deficits are a common complication of both Parkinson disease (PD) and aging; they can significantly diminish quality of life by impacting communication abilities. 1, 2 Targeted training (speech/voice therapy) can improve specific voice deficits,3, 4 although the underlying mechanisms of behavioral interventions are not well understood. Systematic investigation of voice deficits and therapy should consider many factors that are difficult to control in humans, such as age, home environment, age post-onset of disease, severity of disease, and medications. The method presented here uses an animal model of vocalization that allows for systematic study of how underlying sensorimotor mechanisms change with targeted voice training. The ultrasonic recording and analysis procedures outlined in this protocol are applicable to any investigation of rodent ultrasonic vocalizations. The ultrasonic vocalizations of rodents are emerging as a valuable model to investigate the neural substrates of behavior.5-8 Both rodent and human vocalizations carry semiotic value and are produced by modifying an egressive airflow with a laryngeal constriction.9, 10 Thus, rodent vocalizations may be a useful model to study voice deficits in a sensorimotor context. Further, rat models allow us to study the neurobiological underpinnings of recovery from deficits with targeted training. To model PD we use Long-Evans rats (Charles River Laboratories International, Inc.) and induce parkinsonism by a unilateral infusion of 7 μg of 6-hydroxydopamine (6-OHDA) into the medial forebrain bundle which causes moderate to severe degeneration of presynaptic striatal neurons (for details see Ciucci, 2010).11, 12 For our aging model we use the Fischer 344/Brown Norway F1 (National Institute on Aging). Our primary method for eliciting vocalizations is to expose sexually-experienced male rats to sexually receptive female rats. When the male becomes interested in the female, the female is removed and the male continues to vocalize. By rewarding complex vocalizations with food or water, both the number of complex vocalizations and the rate of vocalizations can be increased (Figure 1). An ultrasonic microphone mounted above the male's home cage records the vocalizations. Recording begins after the female rat is removed to isolate the male calls. Vocalizations can be viewed in real time for training or recorded and analyzed offline. By recording and acoustically analyzing vocalizations before and after vocal training, the effects of disease and restoration of normal function with training can be assessed. This model also allows us to relate the observed behavioral (vocal) improvements to changes in the brain and neuromuscular system.
Neuroscience, Issue 54, ultrasonic vocalization, rat, aging, Parkinson disease, exercise, 6-hydroxydopamine, voice disorders, voice therapy
Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods
Authors: Ifat Levy, Lior Rosenberg Belmaker, Kirk Manson, Agnieszka Tymula, Paul W. Glimcher.
Institutions: Yale School of Medicine, New York University.
Most of the choices we make have uncertain consequences. In some cases the probabilities for different possible outcomes are precisely known, a condition termed "risky". In other cases, when probabilities cannot be estimated, the condition is described as "ambiguous". While most people are averse to both risk and ambiguity1,2, the degree of those aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method3 to assess the neural representation of the subjective values of risky and ambiguous options4. This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations. In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective values that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place, and thus the ambiguous options remain ambiguous and risk attitudes are stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it and it alone were the one trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
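A minimal sketch of the model-fitting step, assuming one parameterization commonly used in this literature, SV = (p - β·A/2)·V^α, with a softmax choice rule fit by maximum likelihood to simulated choices; it is an illustration of the approach rather than the paper's exact pipeline.

```python
# Sketch: estimating risk (alpha) and ambiguity (beta) attitudes from choices.
# Subjective value:  SV = (p - beta * A / 2) * V**alpha   (one common form),
# with a softmax choice between the lottery and a certain reference amount.
# The data are simulated; this is an illustration, not the paper's exact pipeline.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n_trials = 200
V = rng.choice([5.0, 8.0, 12.0, 25.0, 50.0], n_trials)        # lottery amounts
A = rng.choice([0.0, 0.24, 0.5, 0.74], n_trials)               # ambiguity levels
p = np.where(A > 0, 0.5, rng.choice([0.13, 0.25, 0.38, 0.5, 0.75], n_trials))
ref_amount = 5.0                                               # certain alternative

def neg_log_lik(params, choices):
    alpha, beta, temp = params
    sv_lottery = (p - beta * A / 2.0) * V ** alpha
    sv_ref = ref_amount ** alpha
    p_lottery = 1.0 / (1.0 + np.exp(-(sv_lottery - sv_ref) / temp))
    p_lottery = np.clip(p_lottery, 1e-9, 1.0 - 1e-9)
    return -np.sum(choices * np.log(p_lottery) + (1 - choices) * np.log(1 - p_lottery))

# Simulate a subject who is risk averse (alpha = 0.7) and ambiguity averse (beta = 0.6).
sv_true = (p - 0.6 * A / 2.0) * V ** 0.7
p_true = 1.0 / (1.0 + np.exp(-(sv_true - ref_amount ** 0.7) / 1.0))
choices = (rng.random(n_trials) < p_true).astype(float)

res = minimize(neg_log_lik, x0=[1.0, 0.0, 1.0], args=(choices,),
               bounds=[(0.1, 2.0), (-1.5, 1.5), (0.05, 10.0)], method="L-BFGS-B")
print("estimated alpha, beta, choice temperature:", np.round(res.x, 2))
```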
Neuroscience, Issue 67, Medicine, Molecular Biology, fMRI, magnetic resonance imaging, decision-making, value, uncertainty, risk, ambiguity
Extinction Training During the Reconsolidation Window Prevents Recovery of Fear
Authors: Daniela Schiller, Candace M. Raio, Elizabeth A. Phelps.
Institutions: Mt. Sinai School of Medicine, New York University.
Fear is maladaptive when it persists long after circumstances have become safe. It is therefore crucial to develop an approach that persistently prevents the return of fear. Pavlovian fear-conditioning paradigms are commonly employed to create a controlled, novel fear association in the laboratory. After pairing an innocuous stimulus (conditioned stimulus, CS) with an aversive outcome (unconditioned stimulus, US), we can elicit a fear response (conditioned response, or CR) by presenting just the stimulus alone1,2. Once fear is acquired, it can be diminished using extinction training, whereby the conditioned stimulus is repeatedly presented without the aversive outcome until fear is no longer expressed3. This inhibitory learning creates a new, safe representation for the CS, which competes for expression with the original fear memory4. Although extinction is effective at inhibiting fear, it is not permanent. Fear can spontaneously recover with the passage of time. Exposure to stress or returning to the context of initial learning can also cause fear to resurface3,4. Our protocol addresses the transient nature of extinction by targeting the reconsolidation window to modify emotional memory in a more permanent manner. Ample evidence suggests that reactivating a consolidated memory returns it to a labile state, during which the memory is again susceptible to interference5-9. This window of opportunity appears to open shortly after reactivation and close approximately 6 hr later5,11,16, although this may vary depending on the strength and age of the memory15. By allowing new information to incorporate into the original memory trace, this memory may be updated as it reconsolidates10,11. Studies involving non-human animals have successfully blocked the expression of fear memory by introducing pharmacological manipulations within the reconsolidation window; however, most agents used are either toxic to humans or show equivocal effects when used in human studies12-14. Our protocol addresses these challenges by offering an effective, yet non-invasive, behavioral manipulation that is safe for humans. By prompting fear memory retrieval prior to extinction, we essentially trigger the reconsolidation process, allowing new safety information (i.e., extinction) to be incorporated while the fear memory is still susceptible to interference. A recent study employing this behavioral manipulation in rats has successfully blocked fear memory using these temporal parameters11. Additional studies in humans have demonstrated that introducing new information after the retrieval of previously consolidated motor16, episodic17, or declarative18 memories leads to interference with the original memory trace14. We outline below a novel protocol used to block fear recovery in humans.
Neuroscience, Issue 66, Medicine, Psychology, Physiology, Fear conditioning, extinction, reconsolidation, emotional memory, spontaneous recovery, skin conductance response
Recording Human Electrocorticographic (ECoG) Signals for Neuroscientific Research and Real-time Functional Cortical Mapping
Authors: N. Jeremy Hill, Disha Gupta, Peter Brunner, Aysegul Gunduz, Matthew A. Adamo, Anthony Ritaccio, Gerwin Schalk.
Institutions: New York State Department of Health, Albany Medical College, Washington University, Rensselaer Polytechnic Institute, State University of New York at Albany, University of Texas at El Paso.
Neuroimaging studies of human cognitive, sensory, and motor processes are usually based on noninvasive techniques such as electroencephalography (EEG), magnetoencephalography or functional magnetic-resonance imaging. These techniques have either inherently low temporal or low spatial resolution, and suffer from low signal-to-noise ratio and/or poor high-frequency sensitivity. Thus, they are suboptimal for exploring the short-lived spatio-temporal dynamics of many of the underlying brain processes. In contrast, the invasive technique of electrocorticography (ECoG) provides brain signals that have an exceptionally high signal-to-noise ratio, less susceptibility to artifacts than EEG, and a high spatial and temporal resolution (i.e., <1 cm/<1 millisecond, respectively). ECoG involves measurement of electrical brain signals using electrodes that are implanted subdurally on the surface of the brain. Recent studies have shown that ECoG amplitudes in certain frequency bands carry substantial information about task-related activity, such as motor execution and planning1, auditory processing2 and visual-spatial attention3. Most of this information is captured in the high gamma range (around 70-110 Hz). Thus, gamma activity has been proposed as a robust and general indicator of local cortical function1-5. ECoG can also reveal functional connectivity and resolve finer task-related spatial-temporal dynamics, thereby advancing our understanding of large-scale cortical processes. It has especially proven useful for advancing brain-computer interfacing (BCI) technology for decoding a user's intentions to enhance or improve communication6 and control7. Nevertheless, human ECoG data are often hard to obtain because of the risks and limitations of the invasive procedures involved, and the need to record within the constraints of clinical settings. Still, clinical monitoring to localize epileptic foci offers a unique and valuable opportunity to collect human ECoG data. We describe our methods for collecting and recording ECoG, and demonstrate how to use these signals for important real-time applications such as clinical mapping and brain-computer interfacing. Our example uses the BCI2000 software platform8,9 and the SIGFRIED10 method, an application for real-time mapping of brain functions. This procedure yields information that clinicians can subsequently use to guide the complex and laborious process of functional mapping by electrical stimulation. Prerequisites and Planning: Patients with drug-resistant partial epilepsy may be candidates for resective surgery of an epileptic focus to minimize the frequency of seizures. Prior to resection, the patients undergo monitoring using subdural electrodes for two purposes: first, to localize the epileptic focus, and second, to identify nearby critical brain areas (i.e., eloquent cortex) where resection could result in long-term functional deficits. To implant electrodes, a craniotomy is performed to open the skull. Then, electrode grids and/or strips are placed on the cortex, usually beneath the dura. A typical grid has a set of 8 x 8 platinum-iridium electrodes of 4 mm diameter (2.3 mm exposed surface) embedded in silicon with an inter-electrode distance of 1 cm. A strip typically contains 4 or 6 such electrodes in a single line. The locations for these grids/strips are planned by a team of neurologists and neurosurgeons, and are based on previous EEG monitoring, on a structural MRI of the patient's brain, and on relevant factors of the patient's history.
Continuous recording over a period of 5-12 days serves to localize epileptic foci, and electrical stimulation via the implanted electrodes allows clinicians to map eloquent cortex. At the end of the monitoring period, explantation of the electrodes and therapeutic resection are performed together in one procedure. In addition to its primary clinical purpose, invasive monitoring also provides a unique opportunity to acquire human ECoG data for neuroscientific research. The decision to include a prospective patient in the research is based on the planned location of their electrodes, on the patient's performance scores on neuropsychological assessments, and on their informed consent, which is predicated on their understanding that participation in research is optional and is not related to their treatment. As with all research involving human subjects, the research protocol must be approved by the hospital's institutional review board. The decision to perform individual experimental tasks is made day-by-day, and is contingent on the patient's endurance and willingness to participate. Some or all of the experiments may be prevented by problems with the clinical state of the patient, such as post-operative facial swelling, temporary aphasia, frequent seizures, post-ictal fatigue and confusion, and more general pain or discomfort. At the Epilepsy Monitoring Unit at Albany Medical Center in Albany, New York, clinical monitoring is implemented around the clock using a 192-channel Nihon-Kohden Neurofax monitoring system. Research recordings are made in collaboration with the Wadsworth Center of the New York State Department of Health in Albany. Signals from the ECoG electrodes are fed simultaneously to the research and the clinical systems via splitter connectors. To ensure that the clinical and research systems do not interfere with each other, the two systems typically use separate grounds. In fact, an epidural strip of electrodes is sometimes implanted to provide a ground for the clinical system. For both the research and the clinical recording systems, the grounding electrode is chosen to be distant from the predicted epileptic focus and from cortical areas of interest for the research. Our research system consists of eight synchronized 16-channel g.USBamp amplifier/digitizer units (g.tec, Graz, Austria). These were chosen because they are safety-rated and FDA-approved for invasive recordings, they have a very low noise-floor in the high-frequency range in which the signals of interest are found, and they come with an SDK that allows them to be integrated with custom-written research software. In order to capture the high-gamma signal accurately, we acquire signals at a 1200 Hz sampling rate, considerably higher than that of the typical EEG experiment or that of many clinical monitoring systems. A built-in low-pass filter automatically prevents aliasing of signals higher than the digitizer can capture. The patient's eye gaze is tracked using a monitor with a built-in Tobii T-60 eye-tracking system (Tobii Tech., Stockholm, Sweden). Additional accessories such as joystick, bluetooth Wiimote (Nintendo Co.), data-glove (5th Dimension Technologies), keyboard, microphone, headphones, or video camera are connected depending on the requirements of the particular experiment. Data collection, stimulus presentation, synchronization with the different input/output accessories, and real-time analysis and visualization are accomplished using our BCI2000 software8,9.
BCI2000 is a freely available general-purpose software system for real-time biosignal data acquisition, processing and feedback. It includes an array of pre-built modules that can be flexibly configured for many different purposes, and that can be extended by researchers' own code in C++, MATLAB or Python. BCI2000 consists of four modules that communicate with each other via a network-capable protocol: a Source module that handles the acquisition of brain signals from one of 19 different hardware systems from different manufacturers; a Signal Processing module that extracts relevant ECoG features and translates them into output signals; an Application module that delivers stimuli and feedback to the subject; and the Operator module that provides a graphical interface to the investigator. A number of different experiments may be conducted with any given patient. The priority of experiments will be determined by the location of the particular patient's electrodes. However, we usually begin our experimentation using the SIGFRIED (SIGnal modeling For Realtime Identification and Event Detection) mapping method, which detects and displays significant task-related activity in real time. The resulting functional map allows us to further tailor subsequent experimental protocols and may also serve as a useful starting point for traditional mapping by electrocortical stimulation (ECS). Although ECS mapping remains the gold standard for predicting the clinical outcome of resection, the process of ECS mapping is time-consuming and also has other problems, such as after-discharges or seizures. Thus, a passive functional mapping technique may prove valuable in providing an initial estimate of the locus of eloquent cortex, which may then be confirmed and refined by ECS. The results from our passive SIGFRIED mapping technique have been shown to exhibit substantial concurrence with the results derived using ECS mapping10. The protocol described in this paper establishes a general methodology for gathering human ECoG data, before proceeding to illustrate how experiments can be initiated using the BCI2000 software platform. Finally, as a specific example, we describe how to perform passive functional mapping using the BCI2000-based SIGFRIED system.
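As a brief illustration of the kind of feature extraction underlying such mapping, the sketch below band-passes a simulated ECoG channel in the high-gamma range (about 70-110 Hz) and takes its analytic-signal envelope as a time-resolved activity index; the signal and filter settings are illustrative, not those of SIGFRIED or BCI2000.

```python
# Sketch: extract a high-gamma (70-110 Hz) amplitude envelope from one ECoG
# channel via band-pass filtering and the Hilbert transform. The signal is
# simulated; filter settings are illustrative, not the SIGFRIED/BCI2000 setup.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1200.0                                  # sampling rate (Hz)
t = np.arange(0, 5.0, 1.0 / fs)
rng = np.random.default_rng(6)

# Simulated channel: background noise plus a task-related high-gamma burst.
ecog = rng.normal(0, 5.0, t.size)
burst = (t > 2.0) & (t < 3.0)
ecog += 8.0 * burst * np.sin(2 * np.pi * 90 * t)

b, a = butter(4, [70 / (fs / 2), 110 / (fs / 2)], btype="bandpass")
hg = filtfilt(b, a, ecog)                    # high-gamma band signal
envelope = np.abs(hilbert(hg))               # instantaneous amplitude

print(f"mean high-gamma amplitude, baseline vs. burst window: "
      f"{envelope[t < 2.0].mean():.2f} vs {envelope[burst].mean():.2f}")
```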
Neuroscience, Issue 64, electrocorticography, brain-computer interfacing, functional brain mapping, SIGFRIED, BCI2000, epilepsy monitoring, magnetic resonance imaging, MRI
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
How to Detect Amygdala Activity with Magnetoencephalography using Source Imaging
Authors: Nicholas L. Balderston, Douglas H. Schultz, Sylvain Baillet, Fred J. Helmstetter.
Institutions: University of Wisconsin-Milwaukee, Montreal Neurological Institute, McGill University, Medical College of Wisconsin .
In trace fear conditioning a conditional stimulus (CS) predicts the occurrence of the unconditional stimulus (UCS), which is presented after a brief stimulus free period (trace interval)1. Because the CS and UCS do not co-occur temporally, the subject must maintain a representation of that CS during the trace interval. In humans, this type of learning requires awareness of the stimulus contingencies in order to bridge the trace interval2-4. However when a face is used as a CS, subjects can implicitly learn to fear the face even in the absence of explicit awareness*. This suggests that there may be additional neural mechanisms capable of maintaining certain types of "biologically-relevant" stimuli during a brief trace interval. Given that the amygdala is involved in trace conditioning, and is sensitive to faces, it is possible that this structure can maintain a representation of a face CS during a brief trace interval. It is challenging to understand how the brain can associate an unperceived face with an aversive outcome, even though the two stimuli are separated in time. Furthermore investigations of this phenomenon are made difficult by two specific challenges. First, it is difficult to manipulate the subject's awareness of the visual stimuli. One common way to manipulate visual awareness is to use backward masking. In backward masking, a target stimulus is briefly presented (< 30 msec) and immediately followed by a presentation of an overlapping masking stimulus5. The presentation of the mask renders the target invisible6-8. Second, masking requires very rapid and precise timing making it difficult to investigate neural responses evoked by masked stimuli using many common approaches. Blood-oxygenation level dependent (BOLD) responses resolve at a timescale too slow for this type of methodology, and real time recording techniques like electroencephalography (EEG) and magnetoencephalography (MEG) have difficulties recovering signal from deep sources. However, there have been recent advances in the methods used to localize the neural sources of the MEG signal9-11. By collecting high-resolution MRI images of the subject's brain, it is possible to create a source model based on individual neural anatomy. Using this model to "image" the sources of the MEG signal, it is possible to recover signal from deep subcortical structures, like the amygdala and the hippocampus*.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Medicine, Physiology, Anatomy, Psychology, Amygdala, Magnetoencephalography, Fear, awareness, masking, source imaging, conditional stimulus, unconditional stimulus, hippocampus, brain, magnetic resonance imaging, MRI, fMRI, imaging, clinical techniques
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
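In its simplest form, the recasting described above reduces, for a single signal region, to comparing the predicted signal yield (cross section x luminosity x acceptance x efficiency, the last two taken from published tables) with the experiment's observed upper limit on new-physics events in that region. The Python sketch below illustrates this comparison; all numbers are hypothetical placeholders.

# Back-of-the-envelope sketch of the recasting arithmetic for one signal region.
# A model point is (conservatively) excluded if its predicted signal yield exceeds
# the observed 95% CL upper limit on new-physics events. Numbers are hypothetical.

def expected_signal_events(cross_section_pb, luminosity_ifb, acceptance, efficiency):
    """sigma [pb] x L [fb^-1] x A x eps, with 1 pb = 1000 fb."""
    return cross_section_pb * 1000.0 * luminosity_ifb * acceptance * efficiency

n_sig = expected_signal_events(cross_section_pb=0.05,   # production cross section
                               luminosity_ifb=20.3,     # integrated luminosity
                               acceptance=0.12,         # from published tables
                               efficiency=0.85)         # from published tables

N95_observed = 15.0  # observed 95% CL upper limit on events in the signal region
print(f"expected signal events: {n_sig:.1f}",
      "-> excluded" if n_sig > N95_observed else "-> not excluded")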
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways: voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS), in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving this quantitative and directional information during spatial normalization in group-level analyses. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, applying DTI methods, i.e. comparing FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by combining whole-brain-based and tract-based DTI analysis.
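Fractional anisotropy, the voxelwise metric compared across groups above, is computed from the three eigenvalues of the fitted diffusion tensor. The short Python sketch below applies the standard FA formula to hypothetical example eigenvalues; it illustrates the metric itself, not the authors' processing pipeline.

# Fractional anisotropy from diffusion tensor eigenvalues (example values are
# hypothetical, in mm^2/s).
import numpy as np

def fractional_anisotropy(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                               # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# An anisotropic white-matter-like voxel vs. a nearly isotropic one
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.2e-3]))   # ~0.84
print(fractional_anisotropy([0.9e-3, 0.8e-3, 0.85e-3]))  # ~0.06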
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design; however, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow the screening of a much larger set of sequences covering a wide variety of properties and functionalities. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of these methods.
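To give a concrete, if greatly simplified, picture of the sequence selection idea, the toy Python sketch below performs a greedy hill climb with random single-residue mutations that lower a placeholder energy score. Protein WISDOM itself uses a physics-based pairwise potential and rigorous optimization rather than anything like this; the scoring function, template sequence, and step count here are purely illustrative.

# Toy illustration of energy-driven sequence selection; not the Protein WISDOM
# algorithm. The "energy" is an arbitrary deterministic placeholder.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)

def placeholder_energy(seq):
    # Stand-in for a real pairwise potential evaluated on a fixed backbone template.
    return sum(abs(ord(a) - ord(b)) for a, b in zip(seq, seq[1:])) / 10.0

def select_sequence(template, n_steps=5000):
    best = list(template)
    best_e = placeholder_energy("".join(best))
    for _ in range(n_steps):
        cand = best[:]
        cand[random.randrange(len(cand))] = random.choice(AMINO_ACIDS)
        e = placeholder_energy("".join(cand))
        if e < best_e:                      # keep only energy-lowering mutations
            best, best_e = cand, e
    return "".join(best), best_e

print(select_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))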
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in the four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices; in basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization and for deriving tissue boundaries for forward modeling, as well as cortical location and orientation constraints for the minimum-norm estimates.
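As an illustration of the two source estimation approaches mentioned in the abstract, the MNE-Python sketch below fits an equivalent current dipole around a response peak and computes a cortically constrained minimum-norm estimate. All file names and time windows are hypothetical, and the forward-model ingredients (BEM solution, coregistration, source space) are assumed to have been prepared beforehand in a standard MNE workflow.

# Sketch of equivalent current dipole (ECD) fitting and a minimum-norm estimate
# with MNE-Python; file names and the cropping window are hypothetical.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("sub01_somato-ave.fif", condition=0)
noise_cov = mne.read_cov("sub01-cov.fif")

# 1) Equivalent current dipole fit around the response peak
ecd_window = evoked.copy().crop(0.035, 0.055)   # e.g. an early somatosensory peak
dip, residual = mne.fit_dipole(ecd_window, noise_cov, "sub01-bem-sol.fif",
                               trans="sub01-trans.fif")

# 2) Cortically constrained minimum-norm estimate
fwd = mne.read_forward_solution("sub01-fwd.fif")
inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")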
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library may not contain any content relevant to the topic of a given abstract. In these cases, our algorithm displays the most relevant videos available, which can sometimes result in matches that are only loosely related.