The open field activity monitoring system comprehensively assesses the locomotor and behavioral activity levels of mice. It is a useful tool for assessing locomotor impairment in animal models of neuromuscular disease and the efficacy of therapeutic drugs that may improve locomotion and/or muscle function. Open field activity measurement provides a different measure than muscle strength, which is commonly assessed by grip strength testing, and, when combined with additional outcome measures, can reveal how drugs affect other body systems. In addition, measures such as total distance traveled mirror the 6-minute walk test, a clinical trial outcome measure. However, open field activity monitoring also presents significant challenges: measurements vary with animal strain, age, sex, and circadian rhythm, and room temperature, humidity, lighting, noise, and even odor can affect assessment outcomes. Overall, this manuscript provides a well-tested and standardized open field activity SOP for preclinical trials in animal models of neuromuscular diseases. We discuss important considerations, typical results, and data analysis, and detail the strengths and weaknesses of open field testing. In addition, we provide recommendations for optimal study design when using open field activity in a preclinical trial.
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special training method in that the associations are learned implicitly, while the reader reads text as he or she normally would, without explicit computer-directed training. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
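As a concrete illustration of the kind of analysis such time-stamped event records support, the sketch below computes each mouse's median latency from hopper-light onset to the next infrared head-entry. The event codes and numbers are hypothetical, and this is a plain-Python stand-in rather than the authors' MATLAB toolbox:

```python
from collections import defaultdict
import statistics

# Fabricated time-stamped event stream: (time_s, subject, event).
events = [
    (10.0, "m1", "light_on"), (11.2, "m1", "head_entry"),
    (40.0, "m1", "light_on"), (40.9, "m1", "head_entry"),
    (12.0, "m2", "light_on"), (15.5, "m2", "head_entry"),
]

latencies = defaultdict(list)   # subject -> onset-to-entry latencies
pending = {}                    # most recent light onset per subject
for t, subj, ev in sorted(events):
    if ev == "light_on":
        pending[subj] = t
    elif ev == "head_entry" and subj in pending:
        latencies[subj].append(t - pending.pop(subj))

for subj in sorted(latencies):
    print(subj, round(statistics.median(latencies[subj]), 2))
```

A full data trail, as described above, would keep these intermediate latency lists alongside the raw events in a single structure rather than discarding them.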
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Measuring Attentional Biases for Threat in Children and Adults
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
Thermal Ablation for the Treatment of Abdominal Tumors
Institutions: University of Wisconsin-Madison, University of Wisconsin-Madison.
Percutaneous thermal ablation is an emerging treatment option for many tumors of the abdomen not amenable to conventional treatments. During a thermal ablation procedure, a thin applicator is guided into the target tumor under imaging guidance. Energy is then applied to the tissue until temperatures rise to cytotoxic levels (50-60 °C). Various energy sources are available to heat biological tissues, including radiofrequency (RF) electrical current, microwaves, laser light and ultrasonic waves. Of these, RF and microwave ablation are most commonly used worldwide.
During RF ablation, alternating electrical current (~500 kHz) produces resistive heating around the interstitial electrode. Skin surface electrodes (ground pads) are used to complete the electrical circuit. RF ablation has been in use for nearly 20 years, with good results for local tumor control, extended survival and low complication rates1,2. Recent studies suggest RF ablation may be a first-line treatment option for small hepatocellular carcinoma and renal cell carcinoma (RCC)3-5. However, RF heating is hampered by local blood flow and high electrical impedance tissues (e.g., lung, bone, desiccated or charred tissue)6,7. Microwaves may alleviate some of these problems by producing faster, volumetric heating8-10. To create larger or conformal ablations, multiple microwave antennas can be used simultaneously, while RF electrodes require sequential operation, which limits their efficiency. Early experiences with microwave systems suggest efficacy and safety similar to, or better than, RF devices11-13.
Alternatively, cryoablation freezes the target tissues to lethal levels (-20 to -40 °C). Percutaneous cryoablation has been shown to be effective against RCC and many metastatic tumors, particularly colorectal cancer, in the liver14-16. Cryoablation may also be associated with less post-procedure pain and faster recovery for some indications17. Cryoablation is often contraindicated for primary liver cancer due to the underlying coagulopathy and associated bleeding risks frequently seen in cirrhotic patients. In addition, sudden release of tumor cellular contents when the frozen tissue thaws can lead to a potentially serious condition known as cryoshock16.
Thermal tumor ablation can be performed at open surgery, laparoscopy or using a percutaneous approach. When performed percutaneously, the ablation procedure relies on imaging for diagnosis, planning, applicator guidance, treatment monitoring and follow-up. Ultrasound is the most popular modality for guidance and treatment monitoring worldwide, but computed tomography (CT) and magnetic resonance imaging (MRI) are commonly used as well. Contrast-enhanced CT or MRI are typically employed for diagnosis and follow-up imaging.
Medicine, Issue 49, Thermal ablation, interventional oncology, image-guided therapy, radiology, cancer
Morris Water Maze Test for Learning and Memory Deficits in Alzheimer's Disease Model Mice
Institutions: University of British Columbia.
The Morris Water Maze (MWM) was first established by neuroscientist Richard G. Morris in 1981 to test hippocampal-dependent learning, including acquisition of spatial memory and long-term spatial memory1. The MWM is a relatively simple procedure, typically consisting of six days of trials, the main advantage being the differentiation between the spatial (hidden-platform) and non-spatial (visible-platform) conditions2-4. In addition, the MWM testing environment reduces odor trail interference5. This has led the task to be used extensively in the study of the neurobiology and neuropharmacology of spatial learning and memory. The MWM plays an important role in the validation of rodent models for neurocognitive disorders such as Alzheimer's Disease6,7. In this protocol we discuss the typical MWM procedure for testing learning and memory, and the data analysis commonly used with Alzheimer's disease transgenic model mice.
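As a minimal illustration of the acquisition-phase analysis referred to above, the sketch below computes the mean escape latency per training day, the standard MWM learning curve. The latencies are fabricated for demonstration, not data from the protocol:

```python
import statistics

# Fabricated escape latencies (s) for four hypothetical mice across
# four acquisition days; real experiments typically run more trials.
latencies = {
    1: [55, 60, 48, 52],
    2: [40, 45, 38, 42],
    3: [25, 30, 22, 28],
    4: [15, 18, 12, 16],
}
curve = {day: statistics.mean(vals) for day, vals in latencies.items()}
for day, mean_latency in curve.items():
    print(f"day {day}: {mean_latency:.2f} s")
```

A decreasing curve like this one is the usual signature of spatial learning; impaired transgenic mice would show a flatter decline.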
Neuroscience, Issue 53, Morris Water Maze, spatial memory testing, hippocampal dependent learning, Alzheimer's Disease
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Institutions: Ben-Gurion University.
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced-choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it the "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG: (i) by calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) by fitting a psychometric curve to comparisons of two weighted sums of human and model handshakes and extracting the point of subjective equality (PSE); (iii) by comparing a given model with a weighted sum of human and random signal, fitting a psychometric curve to the answers of the interrogator, and extracting the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake.
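The psychometric-curve methods above can be sketched as follows. The forced-choice proportions are fabricated and the coarse grid search stands in for a proper optimizer, but the sketch shows how a PSE is read off a fitted logistic curve:

```python
import math

# Human weight in the blended handshake (0 = pure model, 1 = pure human)
# vs. fabricated proportion of trials judged "more human-like".
weights = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
p_human = [0.05, 0.15, 0.40, 0.65, 0.90, 0.97]

def logistic(w, pse, slope):
    """Psychometric function: P('more human-like') given weight w."""
    return 1.0 / (1.0 + math.exp(-(w - pse) / slope))

# Least-squares grid search over (pse, slope); a real analysis would
# use a dedicated psychometric-fitting routine.
pse, slope = min(
    ((p / 100.0, s / 100.0)
     for p in range(0, 101) for s in range(5, 51)),
    key=lambda ps: sum((logistic(w, *ps) - y) ** 2
                       for w, y in zip(weights, p_human)),
)
print(f"PSE = {pse:.2f}, slope = {slope:.2f}")
```

The PSE is the weight at which the interrogator judges the two options equally human-like; a model whose handshake yields a PSE near 0.5 against pure human motion is, by this measure, indistinguishable from it.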
Neuroscience, Issue 46, Turing test, Human Machine Interface, Haptics, Teleoperation, Motor Control, Motor Behavior, Diagnostics, Perception, handshake, telepresence
High-throughput Crystallization of Membrane Proteins Using the Lipidic Bicelle Method
Institutions: University of California Los Angeles, David Geffen School of Medicine, UCLA.
Membrane proteins (MPs) play a critical role in many physiological processes such as pumping specific molecules across the otherwise impermeable membrane bilayer that surrounds all cells and organelles. Alterations in the function of MPs result in many human diseases and disorders; thus, an intricate understanding of their structures remains a critical objective for biological research. However, structure determination of MPs remains a significant challenge often stemming from their hydrophobicity.
MPs have substantial hydrophobic regions embedded within the bilayer. Detergents are frequently used to solubilize these proteins from the bilayer, generating a protein-detergent micelle that can then be manipulated in a similar manner as soluble proteins. Traditionally, crystallization trials proceed using a protein-detergent mixture, but these mixtures often resist crystallization or produce crystals of poor quality. These problems arise from the detergent's inability to adequately mimic the bilayer, resulting in poor stability and heterogeneity. In addition, the detergent shields the hydrophobic surface of the MP, reducing the surface area available for crystal contacts. To circumvent these drawbacks, MPs can be crystallized in lipidic media, which more closely simulate their endogenous environment; this approach has recently emerged as a promising route to MP crystallization.
Lipidic cubic phase (LCP) is a three-dimensional lipid bilayer penetrated by an interconnected system of aqueous channels1. Although monoolein is the lipid of choice, related lipids such as monopalmitolein and monovaccenin have also been used to make LCP2. MPs are incorporated into the LCP where they diffuse in three dimensions and feed crystal nuclei. A great advantage of the LCP is that the protein remains in a more native environment, but the method has a number of technical disadvantages including high viscosity (requiring specialized apparatuses) and difficulties in crystal visualization and manipulation3,4. Because of these technical difficulties, we utilized another lipidic medium for crystallization: bicelles5,6. Bicelles are lipid/amphiphile mixtures formed by blending a phosphatidylcholine lipid (DMPC) with an amphiphile (CHAPSO) or a short-chain lipid (DHPC). Within each bicelle disc, the lipid molecules generate a bilayer while the amphiphile molecules line the apolar edges, providing the beneficial properties of both bilayers and detergents. Importantly, below their transition temperature, protein-bicelle mixtures have a reduced viscosity and are manipulated in a similar manner as detergent-solubilized MPs, making bicelles compatible with crystallization robots.
Bicelles have been successfully used to crystallize several membrane proteins5,7-11 (Table 1). This growing collection of proteins demonstrates the versatility of bicelles for crystallizing both alpha-helical and beta-sheet MPs from prokaryotic and eukaryotic sources. Because of these successes and the simplicity of high-throughput implementation, bicelles should be part of every membrane protein crystallographer's arsenal. In this video, we describe the bicelle methodology and provide a step-by-step protocol for setting up high-throughput crystallization trials of purified MPs using standard robotics.
Molecular Biology, Issue 59, membrane proteins crystallization, bicelle, lipidic crystallization
A Method for Investigating Age-related Differences in the Functional Connectivity of Cognitive Control Networks Associated with Dimensional Change Card Sort Performance
Institutions: University of Western Ontario.
The ability to adjust behavior to sudden changes in the environment develops gradually in childhood and adolescence. For example, in the Dimensional Change Card Sort task, participants switch from sorting cards one way, such as shape, to sorting them a different way, such as color. Adjusting behavior in this way exacts a small performance cost, or switch cost, such that responses are typically slower and more error-prone on switch trials in which the sorting rule changes as compared to repeat trials in which the sorting rule remains the same. The ability to flexibly adjust behavior is often said to develop gradually, in part because behavioral costs such as switch costs typically decrease with increasing age. Why aspects of higher-order cognition, such as behavioral flexibility, develop so gradually remains an open question. One hypothesis is that these changes occur in association with functional changes in broad-scale cognitive control networks. On this view, complex mental operations, such as switching, involve rapid interactions between several distributed brain regions, including those that update and maintain task rules, re-orient attention, and select behaviors. With development, functional connections between these regions strengthen, leading to faster and more efficient switching operations. The current video describes a method of testing this hypothesis through the collection and multivariate analysis of fMRI data from participants of different ages.
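The core functional-connectivity computation described above can be sketched with toy time series standing in for real fMRI signals. The region names are hypothetical, and the correlation matrix is the simplest stand-in for the multivariate analyses the protocol uses:

```python
import numpy as np

# Two hypothetical control-network regions share a common driver
# signal; a third, unrelated region serves as a contrast.
rng = np.random.default_rng(1)
t = np.arange(100)
driver = np.sin(t / 5.0)
roi = {
    "dlPFC": driver + 0.3 * rng.standard_normal(100),
    "parietal": driver + 0.3 * rng.standard_normal(100),
    "visual": rng.standard_normal(100),
}
names = list(roi)
conn = np.corrcoef([roi[n] for n in names])   # 3 x 3 connectivity matrix
print(names)
print(np.round(conn, 2))
```

In a developmental comparison, matrices like this one would be computed per participant and the strength of specific connections compared across age groups.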
Behavior, Issue 87, Neurosciences, fMRI, Cognitive Control, Development, Functional Connectivity
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
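A minimal sketch of the DoE idea follows, with hypothetical factors and fabricated responses rather than the study's actual design: enumerate a two-level full factorial and estimate each factor's main effect on expression yield.

```python
from itertools import product

# Three hypothetical two-level factors (coded -1 / +1).
factors = {"promoter": (-1, 1), "temperature": (-1, 1), "leaf_age": (-1, 1)}
design = list(product(*factors.values()))   # 2^3 = 8 runs

# Fabricated expression yields for each run, in arbitrary units.
yields = [10, 14, 11, 15, 13, 17, 14, 18]

def main_effect(col):
    """Main effect = mean response at +1 minus mean response at -1."""
    hi = [y for run, y in zip(design, yields) if run[col] == 1]
    lo = [y for run, y in zip(design, yields) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for i, name in enumerate(factors):
    print(name, main_effect(i))
```

Software-guided DoE tools extend this idea by choosing optimal subsets of runs (fractional designs) and augmenting them step-wise, as described above, instead of running the full factorial.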
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
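The rate-estimation and prediction steps above can be illustrated with the classic p0 (Luria-Delbrück) estimator. All numbers are hypothetical, and this sketch is not the authors' analysis code:

```python
import math

def p0_mutation_rate(n_cultures, n_zero, cells_per_culture):
    """Luria-Delbruck p0 method: with P0 the fraction of parallel
    cultures showing no mutants, the expected number of mutations per
    culture is m = -ln(P0), and the rate per division is m / N."""
    p0 = n_zero / n_cultures
    m = -math.log(p0)
    return m / cells_per_culture

def predicted_frequency(rate, divisions):
    """Expected mutation frequency after a given number of divisions,
    assuming the per-division rate stays constant with age -- the null
    hypothesis compared against observed frequencies in mother cells."""
    return rate * divisions

# Hypothetical fluctuation test: 72 cultures, 24 with no mutants,
# ~2e7 cells per culture at plating (fabricated numbers).
rate = p0_mutation_rate(72, 24, 2e7)
print(f"rate per division ~ {rate:.2e}")
print(f"predicted frequency after 10 divisions ~ {predicted_frequency(rate, 10):.2e}")
```

An observed frequency in sorted mother cells that exceeds this linear prediction would indicate an age-related increase in the rate of accumulating mutations.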
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers across diverse fields are applying phylogenetics to their research questions. However, many are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that could be incorporated into a classroom or teaching lab.
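As one small, self-contained building block of such a pipeline, the sketch below computes the Jukes-Cantor distance between two aligned sequences, the simplest substitution model a model-selection step might compare. The sequences are illustrative only:

```python
import math

def jukes_cantor(seq1, seq2):
    """Jukes-Cantor corrected distance between two aligned sequences,
    skipping gapped positions: d = -3/4 * ln(1 - 4p/3), where p is the
    proportion of sites that differ."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    p = sum(a != b for a, b in pairs) / len(pairs)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

d = jukes_cantor("ACGTACGTAC", "ACGTACGAAC")   # toy aligned pair
print(round(d, 4))
```

Distance corrections like this are what richer models (e.g. GTR) refine; model-selection software chooses among them by likelihood before tree reconstruction.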
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
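The physics underlying the oscillation technique can be sketched as follows, with illustrative values rather than data from the study: the prosthesis is swung as a compound pendulum, and its moment of inertia follows from the measured period and the center-of-mass location found with the reaction board.

```python
import math

def inertia_from_oscillation(mass, d, period, g=9.81):
    """Moment of inertia from the compound-pendulum relation
    T = 2*pi*sqrt(I_pivot / (m*g*d)), where d is the pivot-to-CM
    distance (the quantity the reaction board provides)."""
    i_pivot = mass * g * d * (period / (2 * math.pi)) ** 2
    i_cm = i_pivot - mass * d ** 2     # parallel-axis theorem
    return i_pivot, i_cm

# Hypothetical below-knee prosthesis: 1.2 kg, CM 0.25 m from the
# pivot, measured oscillation period 1.1 s (illustrative values only).
i_pivot, i_cm = inertia_from_oscillation(1.2, 0.25, 1.1)
print(f"I_pivot = {i_pivot:.4f} kg m^2, I_cm = {i_cm:.4f} kg m^2")
```

Values like these, rather than intact-limb estimates, would feed the inverse dynamics model for the swing-phase analysis discussed above.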
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
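One of the simplest semi-automated strategies, global thresholding followed by connected-component labeling, can be sketched on a synthetic 2D slice (not real EM data):

```python
import numpy as np
from scipy import ndimage

# Synthetic slice with two bright "features" on a dark background.
slice2d = np.zeros((8, 8))
slice2d[1:3, 1:3] = 0.9
slice2d[5:7, 4:7] = 0.8

mask = slice2d > 0.5                   # global threshold (from histogram)
labels, n_features = ndimage.label(mask)   # connected components
sizes = ndimage.sum(mask, labels, range(1, n_features + 1))
print(n_features, sizes)               # component count and voxel counts
```

Whether such thresholding suffices, or manual tracing or custom algorithms are needed instead, is exactly the triage decision the scheme above formalizes: it depends on signal-to-noise, feature crowdedness, and staining uniformity.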
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Institutions: Rutgers University, Rutgers University, Rutgers University, Rutgers University, Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol, and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL, are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, such as structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3].
In the present paper, we provide an overview of our recent work on using head models based on individual structural MRI scans, or age-specific head models, to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
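To make the source-analysis step concrete, here is a minimal numerical sketch of a regularized minimum-norm estimate, the inverse method named in the keywords. The lead-field matrix below is a random stand-in, not a real head model; all dimensions and parameter values are illustrative assumptions, not taken from the protocol.

```python
import numpy as np

# Hypothetical sketch of minimum-norm estimation (MNE): a lead-field (gain)
# matrix G maps source currents to sensor readings; the regularized
# minimum-norm inverse is  x_hat = G.T @ inv(G @ G.T + lam * I) @ y.
rng = np.random.default_rng(0)
n_sensors, n_sources = 32, 200
G = rng.standard_normal((n_sensors, n_sources))   # stand-in lead field (head model)

x_true = np.zeros(n_sources)
x_true[10] = 1.0                                  # a single active cortical source
y = G @ x_true + 0.01 * rng.standard_normal(n_sensors)  # sensor data plus noise

lam = 0.1                                         # regularization parameter
inverse_op = G.T @ np.linalg.inv(G @ G.T + lam * np.eye(n_sensors))
x_hat = inverse_op @ y                            # estimated source amplitudes

print("strongest estimated source index:", int(np.argmax(np.abs(x_hat))))
```

In practice the lead field comes from the individual or age-specific head model described above, and packages such as MNE-Python wrap this computation; the sketch only shows the underlying linear algebra.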
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. Understanding this behavioral plasticity is important for programs aimed at assisting insects that are beneficial for agriculture. This understanding can also be used to discover solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means of monitoring neural activity in the CNS via electrophysiology or bioimaging, or of manipulating targeted neuromodulatory pathways. It is also a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins, or pesticides.
We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from binary yes/no scores to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
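The binary yes/no measure mentioned above is typically summarized as an acquisition curve: the proportion of subjects extending the proboscis on each conditioning trial. A minimal sketch, using made-up response data rather than results from the protocol:

```python
# Hypothetical PER conditioning data: one row per bee, one column per
# conditioning trial; 1 = proboscis extension, 0 = no response.
responses = [
    [0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def acquisition_curve(trials):
    """Proportion of subjects responding on each trial (learning curve)."""
    n = len(trials)
    return [sum(col) / n for col in zip(*trials)]

curve = acquisition_curve(responses)
print(curve)  # → [0.0, 0.25, 0.75, 0.75, 0.75, 1.0]
```

The rising proportion across trials is the usual signature of acquisition; latency and duration measures would be analyzed analogously, per trial, as continuous values.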
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Dissection of Larval CNS in Drosophila melanogaster
Institutions: Princeton University.
The central nervous system (CNS) of Drosophila larvae is complex and poorly understood. One way to investigate the CNS is to use immunohistochemistry to examine the expression of various novel and marker proteins. Staining of whole larvae is impractical because the tough cuticle prevents antibodies from penetrating inside the body cavity. In order to stain these tissues it is necessary to dissect the animal prior to fixing and staining. In this article we demonstrate how to dissect Drosophila larvae without damaging the CNS. Begin by tearing the larva in half with a pair of fine forceps, and then turn the cuticle "inside-out" to expose the CNS. If the dissection is performed carefully the CNS will remain attached to the cuticle. We usually keep the CNS attached to the cuticle throughout the fixation and staining steps, and only completely remove the CNS from the cuticle just prior to mounting the samples on glass slides. We also show some representative images of a larval CNS stained with Eve, a transcription factor expressed in a subset of neurons in the CNS. The article concludes with a discussion of some of the practical uses of this technique and the potential difficulties that may arise.
Developmental Biology, Issue 1, Drosophila, fly, CNS, larvae
BioMEMS and Cellular Biology: Perspectives and Applications
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, the traditional cell culture methodology, essentially consisting of the immersion of a large population of cells in a homogeneous fluid medium and on a homogeneous flat substrate, has become increasingly limiting from both a fundamental and a practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate and the medium composition, as well as the neighboring cell types in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well suited to the development of fast, low-cost in vitro systems that allow high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of a population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
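The kind of model described can be sketched very simply. The following is a hypothetical discrete-generation recursion for the frequency of a transposable element in a mosquito population, not the authors' actual model: the element is assumed to spread through biased transmission at rate u while imposing a fitness cost s on carriers, and both parameter values are illustrative.

```python
def element_frequency(p0, u, s, generations):
    """Iterate the frequency p of a transposable element forward in time.

    Each generation the element gains frequency through biased transmission
    (rate u) and loses frequency through selection against carriers (cost s).
    Illustrative toy model; parameter values are assumptions.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        # Biased transmission pushes the element above Mendelian expectation.
        drive = p + u * p * (1 - p)
        # Selection: carriers have relative fitness 1 - s.
        mean_fitness = 1 - s * drive
        p = drive * (1 - s) / mean_fitness
        history.append(p)
    return history

# A weakly costly element (s = 0.02) with modest drive (u = 0.1), released
# at 1% frequency, spreads toward fixation over a few hundred generations.
traj = element_frequency(p0=0.01, u=0.1, s=0.02, generations=200)
print(f"final frequency: {traj[-1]:.3f}")
```

Models used in practice add age structure, density dependence, and spatial spread, but the core question, whether drive outweighs fitness cost, is already visible in this recursion.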
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
Phase Contrast and Differential Interference Contrast (DIC) Microscopy
Institutions: University of Texas Health Science Center at San Antonio (UTHSCSA).
Phase-contrast microscopy is often used to produce contrast for transparent, non-light-absorbing biological specimens. The technique was invented by Zernike in 1942, and he later received the Nobel Prize for the achievement. DIC microscopy, introduced in the late 1960s, has been popular in biomedical research because it highlights the edges of specimen structural detail, provides high-resolution optical sections of thick specimens, including tissue cells, eggs, and embryos, and does not suffer from the phase halos typical of phase-contrast images. This protocol highlights the principles and practical applications of these microscopy techniques.
Basic protocols, Issue 18, Current Protocols Wiley, Microscopy, Phase Contrast, Differential Interference Contrast
Applying Microfluidics to Electrophysiology
Institutions: University of Illinois, Chicago.
Microfluidics can be integrated with standard electrophysiology techniques to enable new experimental modalities. Specifically, the motivation for the microfluidic brain slice device is discussed, including how the device docks to standard perfusion chambers and how the technique of passive pumping is used to deliver boluses of neuromodulators to the brain slice. By simplifying the device design, we achieve a practical solution to a currently unmet electrophysiology need: applying multiple neuromodulators across multiple regions of the brain slice. This is accomplished by substituting the standard coverglass substrate of the perfusion chamber with a thin microfluidic device bonded to the coverglass. The device is then attached to the perfusion chamber, and small holes connect the open well of the perfusion chamber to the microfluidic channels buried within the microfluidic substrate. These channels are interfaced with ports drilled into the edge of the perfusion chamber to access and deliver stimulants. This project illustrates how the field of microfluidics is transitioning away from proof-of-concept device demonstrations and toward practical solutions for unmet experimental and clinical needs.
Neuroscience, Issue 8, Biomedical Engineering, Microfluidics, Slice Recording, Electrophysiology, Neurotransmitter, Bioengineering