Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing, or thinking about letters, words, and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as they normally would; it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
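To illustrate how a modified Stroop task quantifies learned letter-color associations, the sketch below computes a Stroop effect as the mean reaction-time difference between incongruent and congruent trials. All trial data, letters, colors, and timings are invented for illustration; they are not values from this protocol.

```python
# Hypothetical sketch: computing a "synesthetic Stroop" effect from reaction
# times. The trials, letters, colors, and times are invented for illustration.
from statistics import mean

trials = [
    # (letter, ink_color, trained_color, reaction_time_ms)
    ("e", "red",   "red",   520.0),   # congruent with trained pairing
    ("e", "green", "red",   585.0),   # incongruent
    ("a", "blue",  "blue",  510.0),
    ("a", "red",   "blue",  590.0),
]

congruent = [rt for _, ink, trained, rt in trials if ink == trained]
incongruent = [rt for _, ink, trained, rt in trials if ink != trained]

# A positive difference (incongruent slower) suggests the letter-color
# association was learned strongly enough to interfere with color naming.
stroop_effect_ms = mean(incongruent) - mean(congruent)
print(round(stroop_effect_ms, 1))
```

In practice the comparison would be made per participant, before versus after reading, with appropriate statistics across many trials.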
A Procedure to Study the Effect of Prolonged Food Restriction on Heroin Seeking in Abstinent Rats
Institutions: Concordia University.
In human drug addicts, exposure to drug-associated cues or environments that were previously associated with drug taking can trigger relapse during abstinence. Moreover, various environmental challenges can exacerbate this effect, as well as increase ongoing drug intake.
The procedure we describe here highlights the impact of a common environmental challenge, food restriction, on drug craving that is expressed as an augmentation of drug seeking in abstinent rats.
Rats are implanted with chronic intravenous (i.v.) catheters and then trained to press a lever for i.v. heroin over a period of 10-12 days. Following the heroin self-administration phase, the rats are removed from the operant conditioning chambers and housed in the animal care facility for a period of at least 14 days. While one group is maintained under unrestricted access to food (sated group), a second group (FDR group) is exposed to a mild food restriction regimen in which their body weights are maintained at 90% of their nonrestricted body weight. On day 14 of food restriction the rats are transferred back to the drug-training environment, and a drug-seeking test is run under extinction conditions (i.e., lever presses do not result in heroin delivery).
The procedure presented here results in a highly robust augmentation of heroin seeking on test day in the food restricted rats. In addition, compared to the acute food deprivation manipulations we have used before, the current procedure is a more clinically relevant model for the impact of caloric restriction on drug seeking. Moreover, it might be closer to the human condition as the rats are not required to go through an extinction-training phase before the drug-seeking test, which is an integral component of the popular reinstatement procedure.
Behavior, Issue 81, Animal, Drug-Seeking Behavior, Fasting, Substance-Related Disorders, behavioral neuroscience, self-administration, intravenous, drugs, relapse, food restriction
Making Sense of Listening: The IMAP Test Battery
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated.
When a child is referred to a health professional with normal hearing and unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized child-appropriate test battery based on the use of language-free stimuli.
We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally-hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Assessing Functional Performance in the Mdx Mouse Model
Institutions: Leiden University Medical Center.
Duchenne muscular dystrophy (DMD) is a severe and progressive muscle wasting disorder for which no cure is available. Nevertheless, several potential pharmaceutical compounds and gene therapy approaches have progressed into clinical trials. With improvement in muscle function being the most important end point in these trials, a lot of emphasis has been placed on setting up reliable, reproducible, and easy-to-perform functional tests to preclinically assess muscle function, strength, condition, and coordination in the mdx mouse model for DMD. Both invasive and noninvasive tests are available. Tests that do not exacerbate the disease can be used to determine the natural history of the disease and the effects of therapeutic interventions (e.g., the forelimb grip strength test, two different hanging tests using either a wire or a grid, and rotarod running). Alternatively, forced treadmill running can be used to enhance disease progression and/or assess protective effects of therapeutic interventions on disease pathology. Here we describe how to perform these most commonly used functional tests in a reliable and reproducible manner. Using these protocols based on standard operating procedures enables comparison of data between different laboratories.
Behavior, Issue 85, Duchenne muscular dystrophy, neuromuscular disorders, outcome measures, functional testing, mouse model, grip strength, hanging test wire, hanging test grid, rotarod running, treadmill running
Bottom-up and Shotgun Proteomics to Identify a Comprehensive Cochlear Proteome
Institutions: University of South Florida.
Proteomics is a commonly used approach that can provide insights into complex biological systems. The cochlear sensory epithelium contains receptors that transduce the mechanical energy of sound into electrochemical energy processed by the peripheral and central nervous systems. Several proteomic techniques have been developed to study the cochlear inner ear, such as two-dimensional difference gel electrophoresis (2D-DIGE), antibody microarray, and mass spectrometry (MS). MS is the most comprehensive and versatile tool in proteomics and, in conjunction with separation methods, can provide an in-depth proteome of biological samples. Separation methods combined with MS have the ability to enrich protein samples, detect low molecular weight and hydrophobic proteins, and identify low abundant proteins by reducing the proteome dynamic range. Different digestion strategies can be applied to whole lysate or to fractionated protein lysate to enhance peptide and protein sequence coverage. Different separation techniques, including strong cation exchange (SCX), reversed-phase (RP), and gel-eluted liquid fraction entrapment electrophoresis (GELFrEE), can be applied to reduce sample complexity prior to MS analysis for protein identification.
Biochemistry, Issue 85, Cochlear, chromatography, LC-MS/MS, mass spectrometry, Proteomics, sensory epithelium
A Manual Small Molecule Screen Approaching High-throughput Using Zebrafish Embryos
Institutions: University of Notre Dame.
Zebrafish have become a widely used model organism to investigate the mechanisms that underlie developmental biology and to study human disease pathology due to their considerable degree of genetic conservation with humans. Chemical genetics entails testing the effect that small molecules have on a biological process and is becoming a popular translational research method to identify therapeutic compounds. Zebrafish are specifically appealing for chemical genetics because of their ability to produce large clutches of transparent embryos, which are externally fertilized. Furthermore, zebrafish embryos can be easily drug treated by the simple addition of a compound to the embryo media. Using whole-mount in situ hybridization (WISH), mRNA expression can be clearly visualized within zebrafish embryos. Together, using chemical genetics and WISH, the zebrafish becomes a potent whole-organism context in which to determine the cellular and physiological effects of small molecules. Innovative advances have been made in technologies that utilize machine-based screening procedures; however, for many labs such options are not accessible or remain cost-prohibitive. The protocol described here explains how to execute a manual high-throughput chemical genetic screen that requires only basic resources and can be accomplished by a single individual or small team in an efficient period of time. Thus, this protocol provides a feasible strategy that can be implemented by research groups to perform chemical genetics in zebrafish, which can be useful for gaining fundamental insights into developmental processes and disease mechanisms and for identifying novel compounds and signaling pathways that have medically relevant applications.
Developmental Biology, Issue 93, zebrafish, chemical genetics, chemical screen, in vivo small molecule screen, drug discovery, whole mount in situ hybridization (WISH), high-throughput screening (HTS), high-content screening (HCS)
Multi-step Preparation Technique to Recover Multiple Metabolite Compound Classes for In-depth and Informative Metabolomic Analysis
Institutions: National Jewish Health, University of Colorado Denver.
Metabolomics is an emerging field which enables profiling of samples from living organisms in order to obtain insight into biological processes. A vital aspect of metabolomics is sample preparation: inconsistent techniques generate unreliable results. The technique described here encompasses protein precipitation, liquid-liquid extraction, and solid-phase extraction as a means of fractionating metabolites into four distinct classes. It improves enrichment of low-abundance molecules, with a resulting increase in sensitivity and, ultimately, more confident identification of molecules. This technique has been applied to plasma, bronchoalveolar lavage fluid, and cerebrospinal fluid samples with volumes as low as 50 µl. Samples can be used for multiple downstream applications; for example, the pellet resulting from protein precipitation can be stored for later analysis. The supernatant from that step undergoes liquid-liquid extraction using water and strong organic solvent to separate the hydrophilic and hydrophobic compounds. Once fractionated, the hydrophilic layer can be processed for later analysis or discarded if not needed. The hydrophobic fraction is further treated with a series of solvents during three solid-phase extraction steps to separate it into fatty acids, neutral lipids, and phospholipids. This allows the technician the flexibility to choose which class of compounds is preferred for analysis. It also aids in more reliable metabolite identification, since some knowledge of chemical class exists.
Bioengineering, Issue 89, plasma, chemistry techniques, analytical, solid phase extraction, mass spectrometry, metabolomics, fluids and secretions, profiling, small molecules, lipids, liquid chromatography, liquid-liquid extraction, cerebrospinal fluid, bronchoalveolar lavage fluid
Introduction to Solid Supported Membrane Based Electrophysiology
Institutions: Max Planck Institute of Biophysics, Goethe University Frankfurt.
The electrophysiological method we present is based on a solid supported membrane (SSM) composed of an octadecanethiol layer chemisorbed on a gold coated sensor chip and a phosphatidylcholine monolayer on top. This assembly is mounted into a cuvette system containing the reference electrode, a chlorinated silver wire.
After adsorption of membrane fragments or proteoliposomes containing the membrane protein of interest, a fast solution exchange is used to induce the transport activity of the membrane protein. In the single solution exchange protocol, two solutions, one non-activating and one activating, are needed. The flow is controlled by pressurized air and a valve and tubing system within a Faraday cage.
The kinetics of the electrogenic transport activity is obtained via capacitive coupling between the SSM and the proteoliposomes or membrane fragments. The method, therefore, yields only transient currents. The peak current represents the stationary transport activity. The time dependent transporter currents can be reconstructed by circuit analysis.
This method is especially suited for prokaryotic transporters or eukaryotic transporters from intracellular membranes, which cannot be investigated by patch clamp or voltage clamp methods.
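The circuit-analysis reconstruction mentioned above can be illustrated with a simple first-order model of capacitive coupling, in which the transporter current is approximated from the measured transient as I_p(t) ≈ I_m(t) + (1/τ0)·∫I_m dt, with τ0 the system time constant. This sketch is an assumption-laden illustration: the time constant, sampling, and synthetic trace below are invented, and real analyses fit the full equivalent circuit.

```python
# Hedged sketch of reconstructing a transporter current from the measured
# transient under capacitive coupling: I_p(t) ~ I_m(t) + (1/tau0) * ∫ I_m dt.
# tau0 and the synthetic decaying trace are illustrative values only.
import math

dt = 1e-4          # sample interval, s
tau0 = 0.05        # assumed system (discharge) time constant, s
tau_true = 0.01    # decay constant of the synthetic measured transient, s

# Synthetic measured transient current: a decaying exponential (arb. units).
t = [i * dt for i in range(2000)]
i_m = [math.exp(-ti / tau_true) for ti in t]

# Running trapezoidal integral of I_m plus I_m itself reconstructs a current
# that settles to a nonzero (stationary) level instead of decaying to zero.
i_p, integral = [], 0.0
for k, im in enumerate(i_m):
    if k > 0:
        integral += 0.5 * (i_m[k - 1] + im) * dt
    i_p.append(im + integral / tau0)
```

At late times the reconstructed current approaches a plateau, consistent with a stationary transport activity, while the raw measured transient decays to zero.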
Biochemistry, Issue 75, Biophysics, Molecular Biology, Cellular Biology, Physiology, Proteins, Membrane Lipids, Membrane Transport Proteins, Kinetics, Electrophysiology, solid supported membrane, SSM, membrane transporter, lactose permease, lacY, capacitive coupling, solution exchange, model membrane, membrane protein, transporter, kinetics, transport mechanism
High-throughput Screening for Small-molecule Modulators of Inward Rectifier Potassium Channels
Institutions: Vanderbilt University School of Medicine.
Specific members of the inward rectifier potassium (Kir) channel family are postulated drug targets for a variety of disorders, including hypertension, atrial fibrillation, and pain [1,2]. For the most part, however, progress toward understanding their therapeutic potential or even basic physiological functions has been slowed by the lack of good pharmacological tools. Indeed, the molecular pharmacology of the inward rectifier family has lagged far behind that of the S4 superfamily of voltage-gated potassium (Kv) channels, for which a number of nanomolar-affinity and highly selective peptide toxin modulators have been discovered [3]. The bee venom toxin tertiapin and its derivatives are potent inhibitors of Kir1.1 and Kir3 channels [4,5], but peptides are of limited use therapeutically as well as experimentally due to their antigenic properties and poor bioavailability, metabolic stability, and tissue penetrance. The development of potent and selective small-molecule probes with improved pharmacological properties will be key to fully understanding the physiology and therapeutic potential of Kir channels.
The Molecular Libraries Probe Production Centers Network (MLPCN), supported by the National Institutes of Health (NIH) Common Fund, has created opportunities for academic scientists to initiate probe discovery campaigns for molecular targets and signaling pathways in need of better pharmacology [6]. The MLPCN provides researchers access to industry-scale screening centers and to medicinal chemistry and informatics support for developing small-molecule probes that elucidate the function of genes and gene networks. The critical step in gaining entry to the MLPCN is the development of a robust target- or pathway-specific assay that is amenable to high-throughput screening (HTS).
Here, we describe how to develop a fluorescence-based thallium (Tl+) flux assay of Kir channel function for high-throughput compound screening [7-10]. The assay is based on the permeability of the K+ channel pore to Tl+. A commercially available fluorescent Tl+ reporter dye is used to detect transmembrane flux of Tl+ through the pore. There are at least three commercially available dyes that are suitable for Tl+ flux assays: BTC, FluoZin-2, and FluxOR [7,8]. This protocol describes assay development using FluoZin-2. Although originally developed and marketed as a zinc indicator, FluoZin-2 exhibits a robust and dose-dependent increase in fluorescence emission upon Tl+ binding. We began working with FluoZin-2 before FluxOR was available [7,8] and have continued to do so [9,10]. However, the steps in assay development are essentially identical for all three dyes, and users should determine which dye is most appropriate for their specific needs. We also discuss the assay's performance benchmarks that must be reached to be considered for entry to the MLPCN. Since Tl+ readily permeates most K+ channels, the assay should be adaptable to most K+ channels.
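A common primary readout in Tl+ flux assays is the initial slope of the fluorescence trace immediately after Tl+ addition, which reflects channel-mediated Tl+ influx. The sketch below computes that slope by least squares; the trace, timing, and window length are invented for illustration, not values from this protocol.

```python
# Illustrative sketch: initial slope of a fluorescence trace after Tl+
# addition as a flux readout. The trace and timings below are invented;
# real data come from a kinetic plate reader, one trace per well.
def initial_slope(times_s, fluorescence, t_add, window_s=5.0):
    """Least-squares slope of F(t) over a short window after Tl+ addition."""
    pts = [(t, f) for t, f in zip(times_s, fluorescence)
           if t_add <= t <= t_add + window_s]
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    mf = sum(f for _, f in pts) / n
    num = sum((t - mt) * (f - mf) for t, f in pts)
    den = sum((t - mt) ** 2 for t, _ in pts)
    return num / den  # fluorescence units per second

times = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
trace = [100, 100, 100, 100, 100, 100, 120, 140, 160, 180, 200]  # Tl+ at t=5
print(initial_slope(times, trace, t_add=5.0))
```

Slopes from compound-treated wells would then be compared against vehicle controls to flag channel modulators.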
Biochemistry, Issue 71, Molecular Biology, Chemistry, Cellular Biology, Chemical Biology, Pharmacology, Molecular Pharmacology, Potassium channels, drug discovery, drug screening, high throughput, small molecules, fluorescence, thallium flux, checkerboard analysis, DMSO, cell lines, screen, assay, assay development
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While the reaction is straightforward and generally trouble-free, there are pitfalls that can complicate it, producing spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
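One small computational piece of primer design mentioned in the keywords, the melting temperature (Tm), can be estimated for short primers with the Wallace rule, Tm ≈ 2(A+T) + 4(G+C). The primer sequence below is invented, and nearest-neighbor thermodynamic models are preferred for careful design; this is only a quick first pass.

```python
# Rough Tm estimate via the Wallace rule (2*(A+T) + 4*(G+C), in deg C).
# Suitable only as a first pass for short primers; the sequence is invented.
def wallace_tm(primer: str) -> int:
    s = primer.upper()
    at = s.count("A") + s.count("T")
    gc = s.count("G") + s.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("AGCTTGACCTGAAGCTAA"))  # hypothetical 18-mer
```

Paired primers are typically chosen with Tm values within a few degrees of each other so both anneal efficiently at the same temperature.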
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
The Use of Reverse Phase Protein Arrays (RPPA) to Explore Protein Expression Variation within Individual Renal Cell Cancers
Institutions: University of Edinburgh, University of St Andrews, Western General Hospital, Queen Mary University of London.
Currently there is no curative treatment for metastatic clear cell renal cell cancer, the commonest variant of the disease. A key factor in this treatment resistance is thought to be the molecular complexity of the disease [1]. Targeted therapies such as the tyrosine kinase inhibitor (TKI) sunitinib have been utilized, but only 40% of patients will respond, and the overwhelming majority of these patients relapse within 1 year [2]. As such, the question of intrinsic and acquired resistance in renal cell cancer patients is highly relevant [3].
In order to study resistance to TKIs, with the ultimate goal of developing effective, personalized treatments, sequential tissue after a specific period of targeted therapy is required, an approach which has proved successful in chronic myeloid leukaemia [4]. However, the application of such a strategy in renal cell carcinoma is complicated by the high level of both inter- and intratumoral heterogeneity, which is a feature of renal cell carcinoma [5,6] as well as of other solid tumors [7]. Intertumoral heterogeneity due to transcriptomic and genetic differences is well established even in patients with similar presentation, stage, and grade of tumor. In addition, it is clear that there is great morphological (intratumoral) heterogeneity in RCC, which is likely to represent even greater molecular heterogeneity. Detailed mapping and categorization of RCC tumors by combined morphological analysis and Fuhrman grading allows the selection of representative areas for proteomic analysis.
Protein-based analysis of RCC [8] is attractive due to its widespread availability in pathology laboratories; however, its application can be problematic due to the limited availability of specific antibodies [9]. Due to the dot-blot nature of reverse phase protein arrays (RPPA), antibody specificity must be pre-validated; as such, strict quality control of the antibodies used is of paramount importance. Despite this limitation, the dot-blot format does allow assay miniaturization, allowing hundreds of samples to be printed onto a single nitrocellulose slide. Printed slides can then be analyzed in a similar fashion to Western analysis with the use of target-specific primary antibodies and fluorescently labelled secondary antibodies, allowing for multiplexing. Differential protein expression across all the samples on a slide can then be analyzed simultaneously by comparing the relative level of fluorescence in a more cost-effective and high-throughput manner.
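As a sketch of the relative-fluorescence comparison described above, one common (assumed here, not prescribed by this article) normalization is to divide each target-antibody signal by a total-protein signal for the same spot and express the result relative to a reference sample. All sample names and signal values below are invented.

```python
# Hedged sketch of RPPA spot comparison: target fluorescence normalized to
# total protein per spot, then expressed relative to a reference sample.
# Sample names and signal values are invented for illustration.
def normalized_expression(target_signal, total_protein_signal):
    return target_signal / total_protein_signal

samples = {
    "tumor_region_1": (1500.0, 500.0),   # (target signal, total protein)
    "tumor_region_2": (900.0, 450.0),
    "normal_kidney": (400.0, 400.0),
}

ref = normalized_expression(*samples["normal_kidney"])  # reference level
for name, (tgt, tot) in samples.items():
    rel = normalized_expression(tgt, tot) / ref
    print(name, round(rel, 2))
```

Because hundreds of spots share one slide and one antibody incubation, such ratios can be compared across all samples in a single pass.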
Cancer Biology, Issue 71, Bioengineering, Medicine, Biomedical Engineering, Cellular Biology, Molecular Biology, Genetics, Pathology, Oncology, Proteins, Early Detection of Cancer, Translational Medical Research, RPPA, RCC, Heterogeneity, Proteomics, Tumor Grade, intertumoral, tumor, metastatic, carcinoma, renal cancer, clear cell renal cell cancer, cancer, assay
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Institutions: Rensselaer Polytechnic Institute.
The budding yeast Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
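The prediction step above can be sketched simply: if young cells mutate at a constant per-division rate, a mother cell that has completed d divisions is expected to show a mutation frequency of roughly f0 + mu*d (ignoring selection and cell death). The rate and frequencies below are invented illustrative numbers, not measurements from this protocol.

```python
# Sketch of predicting mutation frequency from replicative age, assuming a
# constant per-division mutation rate. mu and f0 are invented values.
mu = 3.0e-8      # mutations per division (e.g., from a fluctuation test)
f0 = 1.0e-7      # mutation frequency measured in young cells

def predicted_frequency(divisions: int) -> float:
    """Expected frequency after `divisions` cell divisions (no selection)."""
    return f0 + mu * divisions

for age in (0, 10, 25):   # replicative age, counted as bud scars
    print(age, predicted_frequency(age))
```

An observed frequency in sorted old mothers that exceeds this linear prediction would indicate an age-related increase in the mutation rate.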
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide the high temporal resolution particularly suitable for investigating the cortical networks involved in dynamic perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field/potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of two simultaneously presented spoken digits based on a cued auditory feature) or that may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by a limited number of focal sources.
Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we describe all procedures using FreeSurfer and MNE software, both freely available. We summarize the MRI sequences and analysis steps required to produce a forward model that relates the expected field pattern caused by dipoles distributed on the cortex to the M/EEG sensors. Next, we step through the processes for denoising the sensor data of environmental and physiological contaminants. We then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") for each experimental condition. Finally, we highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
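The core of the minimum-norm estimate can be written as a single linear operator, W = R Gᵀ (G R Gᵀ + λ²C)⁻¹, where G is the forward (gain) matrix, R the source covariance, and C the noise covariance. The toy sketch below applies this formula to random matrices; all dimensions, covariances, and the regularization value are invented, whereas in practice the MNE software assembles these from the MRI-based forward model and noise recordings.

```python
# Toy numpy sketch of the minimum-norm inverse operator:
# W = R G^T (G R G^T + lambda^2 C)^(-1). All sizes/values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 60, 500

G = rng.standard_normal((n_sensors, n_sources))  # forward model (toy)
R = np.eye(n_sources)                            # source covariance
C = np.eye(n_sensors)                            # noise covariance
lam2 = 1.0 / 9.0                                 # ~1/SNR^2 regularization

W = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam2 * C)

sensor_data = rng.standard_normal((n_sensors, 100))   # sensors x time
source_estimate = W @ sensor_data                     # dipoles x time
print(source_estimate.shape)
```

Applying W to each time sample yields the time series of dipole activation on the cortical surface, i.e., the frames of the "brain movies" described above.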
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents are often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). The type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time-consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or drug/toxicant exposure.
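The tracking-derived endpoints described above (distance traveled, velocity, latency) can be computed directly from time-stamped position samples. A minimal sketch, assuming a hypothetical data format of (time, x, y) tuples and an illustrative escape-hole zone; the function name and units are ours, not part of any specific tracking package:

```python
import math

def barnes_endpoints(track, target, radius=5.0):
    """Common Barnes-maze endpoints from (time_s, x_cm, y_cm) tracking samples.

    track  -- chronological list of (time_s, x_cm, y_cm) tuples
    target -- (x, y) center of the escape-hole zone (hypothetical coordinates)
    radius -- zone radius (cm) within which the animal counts as at the hole
    """
    distance = 0.0
    latency = None
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        # Path length is accumulated segment by segment.
        distance += math.hypot(x1 - x0, y1 - y0)
        # Latency is the first time the animal enters the escape zone.
        if latency is None and math.hypot(x1 - target[0], y1 - target[1]) <= radius:
            latency = t1
    duration = track[-1][0] - track[0][0]
    velocity = distance / duration if duration > 0 else 0.0
    return {"distance_cm": distance, "latency_s": latency, "velocity_cm_s": velocity}
```

Search-strategy categorization (random, serial, direct) would build on the same samples, e.g. by counting hole visits before the target.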
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns across brain diseases, especially neurodegenerative disorders, by using different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways: voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving this quantitative and directional information during spatial normalization in group-level data analyses. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics information defined by FT. Additionally, applying DTI methods, i.e. comparing FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
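The voxelwise metric central to the analyses above, fractional anisotropy, is computed from the eigenvalues of the fitted diffusion tensor via the standard formula FA = sqrt(3/2)·||λ − mean(λ)|| / ||λ||. A minimal sketch (the function name is ours, not part of the authors' pipeline):

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues.

    Ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis).
    """
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()  # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0
```

For example, equal eigenvalues give FA = 0, while a single nonzero eigenvalue gives FA = 1.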
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
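The source-analysis step named in the keywords uses minimum-norm estimation, which inverts the head model's leadfield with Tikhonov regularization: ŝ = Lᵀ(LLᵀ + λI)⁻¹x. A minimal numerical sketch, assuming a precomputed leadfield matrix; the function name and default regularization value are illustrative, not the authors' code:

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    """L2 minimum-norm source estimate: s_hat = L^T (L L^T + lam*I)^(-1) x.

    leadfield -- (n_channels, n_sources) gain matrix from the head model
    data      -- (n_channels,) or (n_channels, n_times) EEG measurements
    lam       -- regularization parameter (assumed; tuned to the noise level)
    """
    L = np.asarray(leadfield, dtype=float)
    # Regularized channel-space Gram matrix; solve instead of explicit inverse.
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, np.asarray(data, dtype=float))
```

In practice packages such as MNE compute this with noise-covariance whitening and depth weighting; the sketch shows only the core linear-algebra step.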
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
The ChroP Approach Combines ChIP and Mass Spectrometry to Dissect Locus-specific Proteomic Landscapes of Chromatin
Institutions: European Institute of Oncology.
Chromatin is a highly dynamic nucleoprotein complex made of DNA and proteins that controls various DNA-dependent processes. Chromatin structure and function at specific regions is regulated by the local enrichment of histone post-translational modifications (hPTMs) and variants, chromatin-binding proteins, including transcription factors, and DNA methylation. The proteomic characterization of chromatin composition at distinct functional regions has so far been hampered by the lack of efficient protocols to enrich such domains at the purity and amount appropriate for subsequent in-depth analysis by Mass Spectrometry (MS). We describe here a newly designed chromatin proteomics strategy, named ChroP (Chromatin Proteomics), whereby a preparative chromatin immunoprecipitation is used to isolate distinct chromatin regions whose features, in terms of hPTMs, variants and co-associated non-histone proteins, are analyzed by MS. We illustrate here the setup of ChroP for the enrichment and analysis of transcriptionally silent heterochromatic regions, marked by the presence of tri-methylation of lysine 9 on histone H3. The results achieved demonstrate the potential of ChroP in thoroughly characterizing the heterochromatin proteome and prove it to be a powerful analytical strategy for understanding how the distinct protein determinants of chromatin interact and synergize to establish locus-specific structural and functional configurations.
Biochemistry, Issue 86, chromatin, histone post-translational modifications (hPTMs), epigenetics, mass spectrometry, proteomics, SILAC, chromatin immunoprecipitation , histone variants, chromatome, hPTMs cross-talks
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
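Categorical perception along a morph continuum is typically quantified by locating the category boundary, i.e. the morph level at which identification responses cross 50%. A minimal sketch under that assumption; the function name and linear-interpolation approach are illustrative (a logistic fit is the more common choice):

```python
def category_boundary(morph_levels, p_human):
    """Locate the 50% category boundary along a morph continuum by linear
    interpolation between the two levels that straddle p = 0.5.

    morph_levels -- increasing physical morph positions (e.g. 0..100% human)
    p_human      -- proportion of "human" identifications at each level
    """
    pts = list(zip(morph_levels, p_human))
    for (x0, p0), (x1, p1) in zip(pts, pts[1:]):
        if p0 <= 0.5 <= p1 or p1 <= 0.5 <= p0:
            if p1 == p0:
                return (x0 + x1) / 2.0
            # Interpolate the exact crossing point between the two levels.
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # identification never crosses 50%: no boundary in range
```

A steep identification curve around the boundary, together with better cross-boundary than within-category discrimination, is the classic CP signature.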
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
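The analysis style described above operates on time-stamped event records. The authors' routines are written in a MATLAB-based language; purely as an illustration of the idea, here is a Python sketch that extracts one common measure, the latency from a hopper light onset to the next head entry, from a hypothetical event log (the event codes are invented for the example):

```python
def poke_latencies(events):
    """Latency from each feeding-light onset to the next hopper head entry.

    events -- chronological list of (time_s, code) pairs, where code is
              'light_on' or 'poke' (hypothetical event codes)
    """
    latencies, pending = [], None
    for t, code in events:
        if code == "light_on":
            pending = t  # arm the clock at light onset
        elif code == "poke" and pending is not None:
            latencies.append(t - pending)  # first poke after onset
            pending = None  # later pokes before the next onset are ignored
    return latencies
```

Keeping every intermediate result keyed to the raw timestamps is what preserves the full data trail from raw events to published statistics.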
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
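The analysis described above has two fitting stages: extracting a melting temperature (Tm) from each melt curve, then relating Tm shifts to ligand concentration. A minimal sketch, assuming a Boltzmann sigmoid for the melt curve and a simple single-site saturation model for the Tm shift; function names are ours, and the saturation model is a common simplification rather than the authors' exact equations:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(T, f_min, f_max, tm, slope):
    """Sigmoidal unfolding curve; tm is the midpoint (melting temperature)."""
    return f_min + (f_max - f_min) / (1.0 + np.exp((tm - T) / slope))

def fit_tm(temps, fluor):
    """Fit a Boltzmann sigmoid to one DSF melt curve and return Tm."""
    # Initial midpoint guess: temperature of steepest fluorescence increase.
    p0 = [fluor.min(), fluor.max(), temps[np.argmax(np.gradient(fluor))], 1.0]
    popt, _ = curve_fit(boltzmann, temps, fluor, p0=p0)
    return popt[2]

def fit_kd(ligand, delta_tm):
    """Apparent Kd from Tm shifts via single-site saturation:
    dTm = dTm_max * [L] / (Kd + [L])."""
    model = lambda L, dmax, kd: dmax * L / (kd + L)
    popt, _ = curve_fit(model, ligand, delta_tm,
                        p0=[delta_tm.max(), np.median(ligand)])
    return popt[1]
```

Real data require truncating the curve before post-transition aggregation, and as the article notes, cooperative or multi-site binding needs different models.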
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
The Successive Alleys Test of Anxiety in Mice and Rats
Institutions: University of Oxford.
The plus-maze was derived from the early work of Montgomery. He observed that rats tended to avoid the open arms of a maze, preferring the enclosed ones. Handley, Mithani and File et al. performed the first studies on the plus-maze design we use today, and in 1987 Lister published a design for use with mice.
Time spent on, and entries into, the open arms are an index of anxiety; the lower these indices, the more anxious the mouse is. Alternatively, a mouse that spends most of its time in the closed arms is classed as anxious.
One of the problems of the plus-maze is that, while time spent on, and entries into, the open arms is a fairly unambiguous measure of anxiety, time in the central area is more difficult to interpret, although time spent here has been classified as “decision making”. In many tests central area time is a considerable part of the total test time.
Shepherd et al. produced an ingenious design to eliminate the central area, which they called the "zero maze". However, although used by several groups, it has never been as widely adopted as the plus-maze.
In the present article I describe a modification of the plus-maze design that not only eliminates the central area but also incorporates elements from other anxiety tests, such as the light-dark box and emergence tests. It is a linear series of four alleys, each having increasing anxiogenic properties. In general it has given results similar to those of the plus-maze. Although it may not be more sensitive than the plus-maze (more data are needed before a firm conclusion can be reached on this point), it provides a useful confirmation of plus-maze results, which would be valuable when, for example, only a single example of a mutant mouse is available, as in ENU-based mutagenesis programs.
Behavior, Issue 76, Neuroscience, Neurobiology, Medicine, Psychology, Mice, rats, anxiety-like behaviour, plus-maze, behaviour, prefrontal cortex, hippocampus, medial septum, successive alleys, animal model
BioMEMS and Cellular Biology: Perspectives and Applications
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, the traditional cell culture methodology, essentially consisting of the immersion of a large population of cells in a homogeneous fluid medium on a homogeneous flat substrate, has become increasingly limiting from both a fundamental and a practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate and the medium composition, as well as the neighboring cell types in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well-suited for the development of fast, low-cost in vitro systems that allow for high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview
Light/dark Transition Test for Mice
Institutions: Graduate School of Medicine, Kyoto University.
Although the mouse genome has been fully sequenced, we do not yet know the functions of most of its genes. Gene-targeting techniques, however, can be used to delete or manipulate a specific gene in mice. The influence of a given gene on a specific behavior can then be determined by conducting behavioral analyses of the mutant mice. As a test for behavioral phenotyping of mutant mice, the light/dark transition test is one of the most widely used tests to measure anxiety-like behavior in mice. The test is based on the natural aversion of mice to brightly illuminated areas and on their spontaneous exploratory behavior in novel environments. The test is sensitive to anxiolytic drug treatment. The apparatus consists of a dark chamber and a brightly illuminated chamber. Mice are allowed to move freely between the two chambers. The number of entries into the bright chamber and the duration of time spent there are indices of bright-space anxiety in mice. To obtain phenotyping results of a strain of mutant mice that can be readily reproduced and compared with those of other mutants, the behavioral test methods should be as identical as possible between laboratories. The procedural differences that exist between laboratories, however, make it difficult to replicate or compare the results among laboratories. Here, we present our protocol for the light/dark transition test as a movie so that the details of the protocol can be demonstrated. In our laboratory, we have assessed more than 60 strains of mutant mice using the protocol shown in the movie. Those data will be disclosed as a part of a public database that we are now constructing.
Visualization of the protocol will facilitate understanding of the details of the entire experimental procedure, allowing for standardization of the protocols used across laboratories and comparisons of the behavioral phenotypes of various strains of mutant mice assessed using this test.
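The two indices named above, entries into the bright chamber and time spent there, can be derived from a list of chamber-crossing timestamps. A minimal sketch with a hypothetical data format (the function name and argument layout are ours, not part of the published protocol):

```python
def light_dark_indices(transitions, session_end, start_side="dark"):
    """Entries into the light chamber and total time spent there.

    transitions -- chronological times (s) at which the mouse crosses chambers
    session_end -- total session duration (s)
    start_side  -- chamber the mouse starts in ('dark' in this protocol)
    """
    side, entries, time_light, prev = start_side, 0, 0.0, 0.0
    for t in transitions:
        if side == "light":
            time_light += t - prev  # close out the time spent in the light
        else:
            entries += 1  # crossing from dark into the light chamber
        side = "light" if side == "dark" else "dark"
        prev = t
    if side == "light":
        time_light += session_end - prev
    return {"light_entries": entries, "time_in_light_s": time_light}
```

Standardizing the derivation of such indices, not just the apparatus, is part of what makes results comparable across laboratories.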
Neuroscience, Issue 1, knockout mice, transgenic mice, behavioral test, phenotyping