It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, very high spatiotemporal resolution is required to capture the real behavior of molecules. Here we present an experimental protocol for studying the dynamics of fluorescently labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not require tracking each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region of the membrane. Afterwards, a complete spatiotemporal autocorrelation function is calculated by correlating acquired images at increasing time delays (e.g., every 2, 3, ..., n repetitions). It can be shown that the width of the peak of the spatial autocorrelation function increases with increasing time delay as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions enables extraction of the actual protein mean square displacement from imaging (iMSD), presented here in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. Using a GFP-tagged variant of the transferrin receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion in µm-sized membrane regions in the micro-to-millisecond time range.
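The correlate-then-fit procedure described above can be illustrated with a short numerical sketch. The code below is not the published iMSD implementation; it is a minimal, self-contained simulation (particle counts, diffusivity, and all function names are our own illustrative choices) that renders an image series of diffusing spots, averages the FFT-based spatial correlation at increasing frame lags, and shows the correlation peak broadening with delay:

```python
import numpy as np

rng = np.random.default_rng(0)

def render(positions, size=64, psf_sigma=1.5):
    """Render point emitters as Gaussian spots on a pixel grid."""
    y, x = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size))
    for px, py in positions:
        img += np.exp(-((x - px) ** 2 + (y - py) ** 2) / (2 * psf_sigma ** 2))
    return img

# Simulate N particles diffusing with diffusivity D (pixels^2/frame).
N, frames, size, D = 30, 60, 64, 0.5
pos = rng.uniform(5, size - 5, (N, 2))
stack = []
for _ in range(frames):
    stack.append(render(pos, size))
    pos = pos + rng.normal(0, np.sqrt(2 * D), (N, 2))  # Brownian step
stack = np.array(stack)
stack -= stack.mean()

def spatial_xcorr(a, b):
    """FFT-based spatial cross-correlation, peak shifted to the center."""
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    return np.fft.fftshift(np.fft.ifft2(f).real)

def peak_width_sq(corr, half=10):
    """Squared width (second moment) of the central correlation peak."""
    c = corr.shape[0] // 2
    w = corr[c - half:c + half + 1, c - half:c + half + 1].clip(min=0)
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return float((w * (x ** 2 + y ** 2)).sum() / (2 * w.sum()))

def imsd(stack, lags):
    """Correlation-peak variance at each lag; its growth tracks the MSD."""
    out = []
    for tau in lags:
        corr = np.mean([spatial_xcorr(stack[t], stack[t + tau])
                        for t in range(len(stack) - tau)], axis=0)
        out.append(peak_width_sq(corr))
    return out

widths = imsd(stack, [1, 3, 6])
# The peak broadens with lag because particles move between frames.
```

For free diffusion the peak variance grows linearly on top of the instrumental waist, so fitting the recovered widths versus lag would yield the apparent diffusivity, which is the quantity the protocol reports.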
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
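The ~10-30 nm precision quoted above comes from fitting the position of each diffraction-limited spot: the precision scales roughly with the PSF width divided by the square root of the number of detected photons. The sketch below (a toy illustration with invented parameters, not the FPALM analysis software) simulates a single shot-noise-limited spot and localizes it far below the diffraction limit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a diffraction-limited spot: PSF sigma ~130 nm, 100 nm pixels.
pixel_nm, psf_sigma_nm, photons = 100.0, 130.0, 2000
true_x_nm, true_y_nm = 755.0, 810.0  # ground-truth emitter position

y, x = np.mgrid[0:15, 0:15]
x_nm, y_nm = x * pixel_nm, y * pixel_nm
psf = np.exp(-((x_nm - true_x_nm) ** 2 + (y_nm - true_y_nm) ** 2)
             / (2 * psf_sigma_nm ** 2))
expected = photons * psf / psf.sum()
img = rng.poisson(expected).astype(float)  # shot-noise-limited image

def localize(img, pixel_nm):
    """Centroid localization of a background-free single-molecule spot."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = img.sum()
    return (xx * img).sum() / s * pixel_nm, (yy * img).sum() / s * pixel_nm

est_x, est_y = localize(img, pixel_nm)
err_nm = np.hypot(est_x - true_x_nm, est_y - true_y_nm)
# With ~2,000 photons the localization error is far below the ~200-250 nm
# diffraction limit, consistent with the precision quoted above.
```

Real localization software typically fits a 2D Gaussian with a background term rather than taking a plain centroid, but the photon-counting intuition is the same.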
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Use of an Eight-arm Radial Water Maze to Assess Working and Reference Memory Following Neonatal Brain Injury
Institutions: Rhode Island College, Rhode Island College.
Working and reference memory are commonly assessed using the land based radial arm maze. However, this paradigm requires pretraining, food deprivation, and may introduce scent cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms and remedies the limitations observed in land based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and thus the escape platform has been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.
Behavior, Issue 82, working memory, reference memory, hypoxia-ischemia, radial arm maze, water maze
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Visualizing Protein-DNA Interactions in Live Bacterial Cells Using Photoactivated Single-molecule Tracking
Institutions: University of Oxford, University of Oxford.
Protein-DNA interactions are at the heart of many fundamental cellular processes. For example, DNA replication, transcription, repair, and chromosome organization are governed by DNA-binding proteins that recognize specific DNA structures or sequences. In vitro experiments have helped to generate detailed models for the function of many types of DNA-binding proteins, yet the exact mechanisms of these processes and their organization in the complex environment of the living cell remain far less understood. We recently introduced a method for quantifying DNA-repair activities in live Escherichia coli cells using Photoactivated Localization Microscopy (PALM) combined with single-molecule tracking. Our general approach identifies individual DNA-binding events by the change in the mobility of a single protein upon association with the chromosome. The fraction of bound molecules provides a direct quantitative measure for the protein activity and abundance of substrates or binding sites at the single-cell level. Here, we describe the concept of the method and demonstrate sample preparation, data acquisition, and data analysis procedures.
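The mobility-based classification at the core of this approach can be sketched numerically. The code below is an illustrative simulation (diffusivities, track lengths, and the threshold are invented for the example, not taken from the study): it generates tracks for a mixed population of slow chromosome-bound and fast freely diffusing molecules, estimates an apparent diffusion coefficient per track, and computes the bound fraction:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_track(D, steps=20, dt=0.01):
    """2D Brownian track with diffusivity D (um^2/s) and frame time dt (s)."""
    return np.cumsum(rng.normal(0, np.sqrt(2 * D * dt), (steps, 2)), axis=0)

def apparent_D(track, dt=0.01):
    """Apparent diffusion coefficient from the mean one-step squared jump:
    <r^2> = 4 D dt in two dimensions."""
    jumps = np.diff(track, axis=0)
    return float((jumps ** 2).sum(axis=1).mean() / (4 * dt))

# Mixed population: 60 chromosome-bound molecules (slow) and 40 freely
# diffusing molecules (fast). D values are illustrative only.
tracks = ([simulate_track(0.05) for _ in range(60)] +
          [simulate_track(2.0) for _ in range(40)])
D_vals = np.array([apparent_D(t) for t in tracks])

threshold = 0.5  # um^2/s, chosen between the two populations
fraction_bound = float((D_vals < threshold).mean())
```

In the real analysis the bound molecules are additionally constrained by the confined geometry of the cell, and the threshold is calibrated against control strains, but the bound fraction is read out in essentially this way.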
Immunology, Issue 85, Super-resolution microscopy, single-particle tracking, Live-cell imaging, DNA-binding proteins, DNA repair, molecular diffusion
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design of a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, use of automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic, neurobehavioral manipulations, or drug/toxicant exposure.
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
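The starting point of any DoE workflow is enumerating candidate factor-level combinations, which dedicated software then prunes to an optimal fraction. A minimal sketch of that first step is below; the factor names and levels are invented for illustration and are not the study's actual design:

```python
from itertools import product

# Hypothetical screening factors for transient expression in tobacco
# (names and levels are illustrative, not the study's actual design):
factors = {
    "promoter":        ["35S", "nos"],
    "incubation_temp": [22, 25],        # deg C
    "plant_age_days":  [35, 42, 49],
}

# A full-factorial design enumerates every combination of levels; DoE
# software then selects an optimal subset (fractional/augmented design).
full_factorial = [dict(zip(factors, combo))
                  for combo in product(*factors.values())]
n_runs = len(full_factorial)  # 2 * 2 * 3 = 12 runs
```

Splitting the problem into modules, as the abstract describes, amounts to running this enumeration separately per module so that the combinatorial explosion of a single global design is avoided.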
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Coordinate Mapping of Hyolaryngeal Mechanics in Swallowing
Institutions: Georgia Regents University, New York University, Georgia Regents University, Georgia Regents University.
Characterizing hyolaryngeal movement is important to dysphagia research. Prior methods require multiple measurements to obtain one kinematic measurement whereas coordinate mapping of hyolaryngeal mechanics using Modified Barium Swallow (MBS) uses one set of coordinates to calculate multiple variables of interest. For demonstration purposes, ten kinematic measurements were generated from one set of coordinates to determine differences in swallowing two different bolus types. Calculations of hyoid excursion against the vertebrae and mandible are correlated to determine the importance of axes of reference.
To demonstrate coordinate mapping methodology, 40 MBS studies were randomly selected from a dataset of healthy normal subjects with no known swallowing impairment. Swallows of a 5 ml thin-liquid bolus and a 5 ml pudding bolus were measured for each subject. Nine coordinates, mapping the cranial base, mandible, vertebrae and elements of the hyolaryngeal complex, were recorded at the frames of minimum and maximum hyolaryngeal excursion. Coordinates were mathematically converted into ten variables of hyolaryngeal mechanics.
Inter-rater reliability was evaluated by intraclass correlation coefficients (ICC). Two-tailed t-tests were used to evaluate differences in kinematics by bolus viscosity. Hyoid excursion measurements against different axes of reference were correlated. Inter-rater reliability among six raters for the 18 coordinates ranged from ICC = 0.90 - 0.97. A slate of ten kinematic measurements was compared by subject between the six raters. One outlier was rejected, and the mean of the remaining reliability scores was ICC = 0.91 (95% CI: 0.84 - 0.96). Two-tailed t-tests with Bonferroni corrections comparing ten kinematic variables (5 ml thin-liquid vs. 5 ml pudding swallows) showed statistically significant differences in hyoid excursion, superior laryngeal movement, and pharyngeal shortening (p < 0.005). Pearson correlations of hyoid excursion measurements from two different axes of reference were: r = 0.62, r² = 0.38 (thin-liquid); r = 0.52, r² = 0.27 (pudding).
Obtaining landmark coordinates is a reliable method to generate multiple kinematic variables from video fluoroscopic images useful in dysphagia research.
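The statistics reported above are straightforward to reproduce on one's own coordinate data. The sketch below shows the two pieces a reader is most likely to reimplement: the Bonferroni-adjusted significance criterion (0.05 over ten variables gives the p < 0.005 threshold used above) and a Pearson r with its r². The paired measurements are invented illustrative numbers, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Bonferroni correction for the ten kinematic variables tested:
alpha_per_test = 0.05 / 10  # = 0.005, the significance criterion above

# Illustrative paired data (not the study's): hyoid excursion for the
# same swallows measured against two different axes of reference.
vertebrae_axis = [1.10, 1.42, 0.95, 1.88, 1.30, 1.65, 1.05, 1.50]
mandible_axis = [0.90, 1.30, 1.05, 1.60, 1.10, 1.70, 0.85, 1.20]
r = pearson_r(vertebrae_axis, mandible_axis)
r_squared = r * r  # shared variance between the two reference frames
```

The moderate r² values reported (0.27-0.38) are the substantive point: excursion measured against different reference axes shares only part of its variance, which is why the choice of axis matters.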
Medicine, Issue 87, videofluoroscopy, modified barium swallow studies, hyolaryngeal kinematics, deglutition, dysphagia, dysphagia research, hyolaryngeal complex
A Protocol for Conducting Rainfall Simulation to Study Soil Runoff
Institutions: University of Maryland Eastern Shore, USDA - Agricultural Research Service, University of Maryland Eastern Shore.
Rainfall is a driving force for the transport of environmental contaminants from agricultural soils to surficial water bodies via surface runoff. The objective of this study was to characterize the effects of antecedent soil moisture content on the fate and transport of surface applied commercial urea, a common form of nitrogen (N) fertilizer, following a rainfall event that occurs within 24 hr after fertilizer application. Although urea is assumed to be readily hydrolyzed to ammonium and therefore not often available for transport, recent studies suggest that urea can be transported from agricultural soils to coastal waters where it is implicated in harmful algal blooms. A rainfall simulator was used to apply a consistent rate of uniform rainfall across packed soil boxes that had been prewetted to different soil moisture contents. By controlling rainfall and soil physical characteristics, the effects of antecedent soil moisture on urea loss were isolated. Wetter soils exhibited shorter time from rainfall initiation to runoff initiation, greater total volume of runoff, higher urea concentrations in runoff, and greater mass loadings of urea in runoff. These results also demonstrate the importance of controlling for antecedent soil moisture content in studies designed to isolate other variables, such as soil physical or chemical characteristics, slope, soil cover, management, or rainfall characteristics. Because rainfall simulators are designed to deliver raindrops of similar size and velocity as natural rainfall, studies conducted under a standardized protocol can yield valuable data that, in turn, can be used to develop models for predicting the fate and transport of pollutants in runoff.
Environmental Sciences, Issue 86, Agriculture, Water Pollution, Water Quality, Technology, Industry, and Agriculture, Rainfall Simulator, Artificial Rainfall, Runoff, Packed Soil Boxes, Nonpoint Source, Urea
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
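The minimum-norm estimation named in the keywords below is, at its core, a regularized linear inverse of the head model's lead field. The sketch that follows is a toy illustration (random numbers stand in for the lead field a real head model would provide; dimensions and the regularization value are arbitrary choices) of the standard L2 minimum-norm formula:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy lead field: 32 sensors, 200 cortical sources. In practice G comes
# from the head model; random numbers stand in for the real physics.
n_sensors, n_sources = 32, 200
G = rng.normal(size=(n_sensors, n_sources))

# Ground truth: two active sources generate the sensor data plus noise.
x_true = np.zeros(n_sources)
x_true[[20, 150]] = [1.0, -0.5]
y = G @ x_true + rng.normal(0, 0.05, n_sensors)

def minimum_norm(G, y, lam=1.0):
    """L2 minimum-norm estimate: x = G.T (G G.T + lam I)^-1 y."""
    n = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n), y)

x_hat = minimum_norm(G, y)
# The estimate is spatially smeared (the classic MNE limitation), but the
# reconstructed sensor data closely reproduce the measurements.
residual = np.linalg.norm(y - G @ x_hat) / np.linalg.norm(y)
```

The practical point made in the abstract follows directly from this formula: the estimate is only as good as G, so adult-template head models bias source localization in children, motivating individual or age-specific MRI-based models.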
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Institutions: University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle elicited by brief, intense bursts of acoustic white noise (i.e., "startle probes"). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of distinct negative affective states (fear, anxiety) to use in research on psychopathology, substance use/abuse and broadly in affective science. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion.
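The potentiation arithmetic itself is simple, which is part of the task's appeal. A minimal sketch, using invented EMG magnitudes rather than real data, computes the fear and anxiety indices exactly as defined above:

```python
# Startle potentiation = mean startle magnitude during threat cues minus
# mean magnitude during matched no-threat cues. The values below are
# illustrative EMG response magnitudes (arbitrary units), not real data.
emg = {
    "no_threat":        [48, 52, 50, 47, 53],
    "certain_threat":   [88, 95, 90, 92, 85],   # 100% cue-contingent shock
    "uncertain_threat": [70, 74, 68, 72, 71],   # 20% cue-contingent shock
}

def mean(xs):
    return sum(xs) / len(xs)

baseline = mean(emg["no_threat"])
fear_potentiation = mean(emg["certain_threat"]) - baseline       # fear index
anxiety_potentiation = mean(emg["uncertain_threat"]) - baseline  # anxiety index
```

Separate potentiation scores for certain and uncertain cues are what let the task dissociate fear from anxiety within a single session.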
Behavior, Issue 91, startle, electromyography, shock, addiction, uncertainty, fear, anxiety, humans, psychophysiology, translational
Measuring the Mechanical Properties of Living Cells Using Atomic Force Microscopy
Institutions: Worcester Polytechnic Institute, Worcester Polytechnic Institute.
Mechanical properties of cells and extracellular matrix (ECM) play important roles in many biological processes including stem cell differentiation, tumor formation, and wound healing. Changes in stiffness of cells and ECM are often signs of changes in cell physiology or diseases in tissues. Hence, cell stiffness is an index to evaluate the status of cell cultures. Among the multitude of methods applied to measure the stiffness of cells and tissues, micro-indentation using an Atomic Force Microscope (AFM) provides a way to reliably measure the stiffness of living cells. This method has been widely applied to characterize the micro-scale stiffness for a variety of materials ranging from metal surfaces to soft biological tissues and cells. The basic principle of this method is to indent a cell with an AFM tip of selected geometry and measure the applied force from the bending of the AFM cantilever. Fitting the force-indentation curve to the Hertz model for the corresponding tip geometry can give quantitative measurements of material stiffness. This paper demonstrates the procedure to characterize the stiffness of living cells using AFM. Key steps including the process of AFM calibration, force-curve acquisition, and data analysis using a MATLAB routine are demonstrated. Limitations of this method are also discussed.
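The abstract's MATLAB fitting routine is not reproduced here, but the underlying Hertz-model fit can be sketched in a few lines. The example below assumes a spherical tip, for which F = (4/3) · E/(1 − ν²) · √R · δ^(3/2); tip radius, Poisson's ratio, and the synthetic "measured" curve are illustrative choices, not values from the protocol:

```python
import numpy as np

# Hertz model for a spherical indenter of radius R on an elastic
# half-space: F = (4/3) * E/(1 - nu^2) * sqrt(R) * delta^(3/2).
R = 5e-6        # tip radius, m (illustrative)
nu = 0.5        # Poisson's ratio, a common assumption for soft cells
E_true = 2e3    # Young's modulus, Pa (typical soft-cell range)

def hertz_force(delta, E, R=R, nu=nu):
    """Force (N) at indentation depth delta (m) for a spherical tip."""
    return (4.0 / 3.0) * (E / (1 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

# Synthetic force curve with noise standing in for real AFM data.
rng = np.random.default_rng(4)
delta = np.linspace(0, 500e-9, 100)          # 0-500 nm indentation
force = hertz_force(delta, E_true) + rng.normal(0, 2e-11, delta.size)

# Because F is linear in E, the least-squares fit reduces to a
# projection of the data onto the delta^(3/2) shape function.
shape = (4.0 / 3.0) / (1 - nu ** 2) * np.sqrt(R) * delta ** 1.5
E_fit = float(shape @ force / (shape @ shape))
```

Other tip geometries (conical, pyramidal) change only the exponent and prefactor of the model, which is why calibrating and recording the actual tip geometry is one of the key steps the protocol demonstrates.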
Biophysics, Issue 76, Bioengineering, Cellular Biology, Molecular Biology, Physics, Chemical Engineering, Biomechanics, bioengineering (general), AFM, cell stiffness, microindentation, force spectroscopy, atomic force microscopy, microscopy
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A Laser-induced Mouse Model of Chronic Ocular Hypertension to Characterize Visual Defects
Institutions: Northwestern University, Northwestern University.
Glaucoma, frequently associated with elevated intraocular pressure (IOP), is one of the leading causes of blindness. We sought to establish a mouse model of ocular hypertension to mimic human high-tension glaucoma. Here laser illumination is applied to the corneal limbus to photocoagulate the aqueous outflow, inducing angle closure. The changes of IOP are monitored using a rebound tonometer before and after the laser treatment. An optomotor behavioral test is used to measure corresponding changes in visual capacity. The representative result from one mouse which developed sustained IOP elevation after laser illumination is shown. A decreased visual acuity and contrast sensitivity is observed in this ocular hypertensive mouse. Together, our study introduces a valuable model system to investigate neuronal degeneration and the underlying molecular mechanisms in glaucomatous mice.
Medicine, Issue 78, Biomedical Engineering, Neurobiology, Anatomy, Physiology, Neuroscience, Cellular Biology, Molecular Biology, Ophthalmology, Retinal Neurons, Retinal Neurons, Retinal Ganglion Cells, Neurodegenerative Diseases, Ocular Hypertension, Retinal Degeneration, Vision Tests, Visual Acuity, Eye Diseases, Retinal Ganglion Cell (RGC), Ocular Hypertension, Laser Photocoagulation, Intraocular pressure (IOP), Tonometer; Visual Acuity, Contrast Sensitivity, Optomotor, animal model
An Investigation of the Effects of Sports-related Concussion in Youth Using Functional Magnetic Resonance Imaging and the Head Impact Telemetry System
Institutions: University of Toronto, University of Toronto, University of Toronto, Bloorview Kids Rehab, Toronto Rehab, Sunnybrook Health Sciences Centre, University of Toronto.
One of the most commonly reported injuries in children who participate in sports is concussion or mild traumatic brain injury (mTBI) [1]. Children and youth involved in organized sports such as competitive hockey are nearly six times more likely to suffer a severe concussion compared to children involved in other leisure physical activities [2]. While the most common cognitive sequelae of mTBI appear similar for children and adults, the recovery profile and breadth of consequences in children remain largely unknown [2], as does the influence of pre-injury characteristics (e.g. gender) and injury details (e.g. magnitude and direction of impact) on long-term outcomes. Competitive sports, such as hockey, allow the rare opportunity to utilize a pre-post design to obtain pre-injury data on youth characteristics and functioning before concussion occurs and to relate this to outcome following injury. Our primary goals are to refine pediatric concussion diagnosis and management based on research evidence that is specific to children and youth. To do this we use new, multi-modal and integrative approaches that will:
1. Evaluate the immediate effects of head trauma in youth
2. Monitor the resolution of post-concussion symptoms (PCS) and cognitive performance during recovery
3. Utilize new methods to verify brain injury and recovery
To achieve our goals, we have implemented the Head Impact Telemetry (HIT) System (Simbex; Lebanon, NH, USA). This system equips commercially available Easton S9 hockey helmets (Easton-Bell Sports; Van Nuys, CA, USA) with single-axis accelerometers designed to measure real-time head accelerations during contact sport participation³⁻⁵. By using telemetric technology, the magnitude of acceleration and the location of all head impacts during sport participation can be objectively detected and recorded. We also use functional magnetic resonance imaging (fMRI) to localize and assess changes in neural activity, specifically in the medial temporal and frontal lobes, during the performance of cognitive tasks, since these are the cerebral regions most sensitive to concussive head injury⁶. Finally, we acquire structural imaging data sensitive to damage in brain white matter.
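The telemetry stream can be reduced to per-impact summary statistics such as the peak resultant acceleration. A minimal sketch, assuming three orthogonal acceleration channels in units of g (a simplification: the actual HIT System combines several single-axis accelerometers in a helmet-specific geometry and processes them with proprietary algorithms):

```python
import math

def peak_resultant_g(ax, ay, az, threshold_g=10.0):
    """Peak resultant head acceleration and the samples above a threshold.

    ax, ay, az: equal-length lists of acceleration samples in g along
    three orthogonal axes (an illustrative assumption, not the HIT
    System's actual sensor layout). threshold_g is likewise illustrative.
    """
    resultant = [math.sqrt(x * x + y * y + z * z)
                 for x, y, z in zip(ax, ay, az)]
    peak = max(resultant)
    impacts = [r for r in resultant if r >= threshold_g]
    return peak, impacts
```

The same reduction would be applied per recorded impact event before relating impact magnitude to outcome measures.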
Medicine, Issue 47, Mild traumatic brain injury, concussion, fMRI, youth, Head Impact Telemetry System
Shallow Water (Paddling) Variants of Water Maze Tests in Mice
Institutions: University of Oxford.
When Richard Morris devised his water maze in 1981⁷, most behavioral work was done in rats. However, the greater understanding of mouse genetics led to the mouse becoming increasingly important, and researchers found that some strains of mutant mice were prone to problems such as passively floating or diving when tested in the Morris water maze¹¹. This was unsurprising given their natural habitat: rats swim naturally (classically, the "sewer rat"), whereas mice evolved in the dry areas of central Asia.
To overcome these problems, it was considered whether shallow water would be a sufficient stimulus to provide escape motivation for mice. This would also avoid the need to dry the small animals with a towel and then place them in a heated recovery chamber to prevent hypothermia, which is a much more serious problem than in rats; the large surface-area-to-volume ratio of a mouse makes it particularly vulnerable to rapid heat loss.
Another consideration was whether a more natural escape strategy could be used to facilitate learning. Since animals that fall into water and swim away from the safety of the shore are unlikely to pass on their genes, animals have evolved a natural tendency to swim to the edge of a body of water. The Morris water maze, however, requires them to swim to a hidden platform towards the center of the maze, exactly opposite to their evolved behavior. Therefore the paddling maze should incorporate escape to the edge of the apparatus. This feature, coupled with the use of relatively non-aversive shallow water, embodies the "Refinement" aspect of the "3 Rs" of Russell and Burch⁸.
Various types of maze design were tried; the common feature was that the water was always shallow (2 cm deep) and escape was via a tube piercing the transparent wall of the apparatus. Other tubes ("false exits") were also placed around the walls but were blocked off. From inside the maze all false exits and the single true exit looked the same. Currently a dodecagonal (12-sided) maze is in use in Oxford, with 12 true/false exits set in the corners. In a recent development, a transparent paddling Y-maze has been tested successfully.
Behavior, Issue 76, Neuroscience, Neurobiology, Medicine, Psychology, Mice, hippocampus, paddling pool, Alzheimer's, welfare, 3Rs, Morris water maze, paddling Y-maze, Barnes maze, animal model
Bromodeoxyuridine (BrdU) Labeling and Subsequent Fluorescence Activated Cell Sorting for Culture-independent Identification of Dissolved Organic Carbon-degrading Bacterioplankton
Institutions: Kent State University, University of Georgia (UGA).
Microbes are major agents mediating the degradation of numerous dissolved organic carbon (DOC) substrates in aquatic environments. However, identification of bacterial taxa that transform specific pools of DOC in nature poses a technical challenge.
Here we describe an approach that couples bromodeoxyuridine (BrdU) incorporation, fluorescence activated cell sorting (FACS), and 16S rRNA gene-based molecular analysis to allow culture-independent identification of bacterioplankton capable of degrading a specific DOC compound in aquatic environments. Triplicate bacterioplankton microcosms are set up to receive both BrdU and a model DOC compound (DOC amendments), or only BrdU (no-addition control). BrdU substitutes for thymidine in newly synthesized bacterial DNA, and BrdU-labeled DNA can be readily immunodetected¹,². Through a 24-hr incubation, bacterioplankton that are able to use the added DOC compound are expected to be selectively activated and therefore to show higher levels of BrdU incorporation (HI cells) than non-responsive cells in the DOC amendments and cells in the no-addition controls (low-incorporation cells, LI cells). After fluorescence immunodetection, HI cells are distinguished and physically separated from LI cells by fluorescence activated cell sorting (FACS)³. Sorted DOC-responsive (HI) cells are extracted for DNA and taxonomically identified through subsequent 16S rRNA gene-based analyses including PCR, clone library construction, and sequencing.
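The HI/LI distinction is, at its core, a threshold gate on per-cell fluorescence intensity, with the gate placed relative to the no-addition control distribution. A minimal sketch of such a gate (threshold and values are illustrative; in practice gating is done interactively in the cytometer software):

```python
def gate_hi_cells(intensities, threshold):
    """Split cells into high-incorporation (HI) and low-incorporation (LI)
    groups by BrdU immunofluorescence intensity.

    intensities: per-cell fluorescence values (arbitrary units).
    threshold: gate boundary, typically derived from the no-addition
    control distribution (an illustrative simplification here).
    """
    hi = [v for v in intensities if v >= threshold]
    li = [v for v in intensities if v < threshold]
    return hi, li
```

Cells in the HI gate are then physically sorted and carried forward to DNA extraction.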
Molecular Biology, Issue 55, BrdU incorporation, fluorescence-activated cell sorting, FACS, flow cytometry, microbial community, culture-independent, bacterioplankton
Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods
Institutions: Yale School of Medicine, Yale School of Medicine, New York University , New York University , New York University .
Most of the choices we make have uncertain consequences. In some cases the probabilities of the different possible outcomes are precisely known, a condition termed "risky"; in other cases the probabilities cannot be estimated, a condition termed "ambiguous". While most people are averse to both risk and ambiguity¹,², the degree of these aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method³ to assess the neural representation of the subjective values of risky and ambiguous options⁴. This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations.
In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective value that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place; thus the ambiguous options remain ambiguous and risk attitudes are stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it and it alone were the one trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
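Subjective values of this kind are often parameterized with a model of the form SV = (p − β·A/2)·V^α, where p is the winning probability, A the ambiguity level, V the amount, α the risk attitude, and β the ambiguity attitude. This particular form is an assumption drawn from related work in this literature, not necessarily the exact model of the protocol:

```python
def subjective_value(V, p, A, alpha, beta):
    """Subjective value of a lottery paying amount V with probability p
    under ambiguity level A (0 = probability fully known, 1 = fully ambiguous).

    alpha < 1 -> risk averse; beta > 0 -> ambiguity averse.
    The model form is a common choice in this literature, used as a sketch.
    """
    return (p - beta * A / 2.0) * (V ** alpha)
```

A risk-neutral, ambiguity-neutral subject (alpha = 1, beta = 0) values a 50% chance of $100 at its expected value of $50, while a risk-averse subject (alpha < 1) values it at less. Fitting alpha and beta to each subject's observed choices yields per-trial subjective values for the fMRI regression.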
Neuroscience, Issue 67, Medicine, Molecular Biology, fMRI, magnetic resonance imaging, decision-making, value, uncertainty, risk, ambiguity
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable for investigating the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds at a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to stimulus presentation. This type of event-related field/potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by a limited number of focal sources.
Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides a tool for distributed source analysis. For illustrative purposes, we describe all procedures using FreeSurfer and MNE software, both freely available. We summarize the MRI sequences and analysis steps required to produce a forward model that relates the expected field pattern caused by dipoles distributed on the cortex to the M/EEG sensors. Next, we step through the processes for denoising the sensor data of environmental and physiological contaminants. We then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, producing a family of time series of cortical dipole activation on the brain surface (or "brain movies") for each experimental condition. Finally, we highlight a few statistical techniques that enable scientific inference across a subject population (i.e., group-level analysis) based on a common cortical coordinate space.
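At the heart of the approach is the classical ℓ2 minimum-norm inverse, x̂ = R Gᵀ (G R Gᵀ + λ²C)⁻¹ y, with forward (gain) matrix G, source covariance R, noise covariance C, and regularization λ. A numpy sketch with random matrices standing in for a real forward model (the MNE software computes this operator from the anatomical data; this is only the underlying linear algebra):

```python
import numpy as np

def minimum_norm_estimate(G, y, R, C, lam):
    """l2 minimum-norm source estimate x_hat = R G^T (G R G^T + lam^2 C)^-1 y.

    G   : (n_sensors, n_sources) forward (gain) matrix
    y   : (n_sensors,) measured sensor data
    R   : (n_sources, n_sources) source covariance (often identity)
    C   : (n_sensors, n_sensors) noise covariance
    lam : scalar regularization parameter
    """
    K = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam**2 * C)  # inverse operator
    return K @ y

# Toy example: 5 sensors, 20 candidate cortical sources.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 20))
y = rng.standard_normal(5)
x_hat = minimum_norm_estimate(G, y, np.eye(20), np.eye(5), lam=0.1)
```

Applying the inverse operator K to every time sample of y produces the "brain movies" of dipole activation described above.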
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis¹,² proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings³,⁴,⁵,⁶. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)⁷. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Angle-resolved Photoemission Spectroscopy At Ultra-low Temperatures
Institutions: IFW-Dresden, Institute of Metal Physics of National Academy of Sciences of Ukraine, Diamond Light Source LTD, University of Johannesburg, Università di Salerno, École Polytechnique Fédérale de Lausanne.
The physical properties of a material are defined by its electronic structure. Electrons in solids are characterized by energy (ω) and momentum (k), and the probability of finding them in a particular state with given ω and k is described by the spectral function A(k, ω). This function can be directly measured in an experiment based on the well-known photoelectric effect, for the explanation of which Albert Einstein received the Nobel Prize back in 1921. In the photoelectric effect, light shone on a surface ejects electrons from the material. According to Einstein, energy conservation allows one to determine the energy of an electron inside the sample, provided the energy of the light photon and the kinetic energy of the outgoing photoelectron are known. Momentum conservation also makes it possible to estimate k, relating it to the momentum of the photoelectron by measuring the angle at which the photoelectron left the surface. The modern version of this technique is called Angle-Resolved Photoemission Spectroscopy (ARPES) and exploits both conservation laws in order to determine the electronic structure, i.e., the energy and momentum of electrons inside the solid. In order to resolve the details crucial for understanding the topical problems of condensed matter physics, three quantities need to be minimized: the uncertainty in photon energy, the uncertainty in the kinetic energy of the photoelectrons, and the temperature of the sample.
In our approach we combine three recent achievements in the fields of synchrotron radiation, surface science, and cryogenics. We use synchrotron radiation with tunable photon energy contributing an uncertainty of the order of 1 meV, an electron energy analyzer which detects the kinetic energies with a precision of the order of 1 meV, and a ³He cryostat which allows us to keep the temperature of the sample below 1 K. We discuss exemplary results obtained on single crystals of Sr₂RuO₄ and some other materials. The electronic structure of this material can be determined with unprecedented clarity.
Physics, Issue 68, Chemistry, electron energy bands, band structure of solids, superconducting materials, condensed matter physics, ARPES, angle-resolved photoemission synchrotron, imaging
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission¹⁻³. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data⁴.
Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data; we introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation⁵ involves the identification of indoor and outdoor parts from pre-processed space-time tracks; again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as the activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping⁶ and density volume rendering⁷. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
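One of the activity-space characteristics mentioned, the activity radius, can be sketched as the maximum distance of track points from their centroid. Planar projected coordinates are assumed for simplicity; real GPS tracks in latitude/longitude would need projecting first, which is omitted here:

```python
import math

def activity_radius(points):
    """Activity radius of a trajectory: maximum distance from the centroid.

    points: list of (x, y) tuples in projected planar coordinates (e.g.
    meters). This is a deliberately simple characterization; the paper's
    toolset derives further activity-space measures beyond the radius.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return max(math.hypot(x - cx, y - cy) for x, y in points)
```

Computed per person per day, such measures summarize overlapping activity spaces for transmission modeling.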
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Concurrent Quantitative Conductivity and Mechanical Properties Measurements of Organic Photovoltaic Materials using AFM
Institutions: Argonne National Laboratory, University of Chicago.
Organic photovoltaic (OPV) materials are inherently inhomogeneous at the nanometer scale. Nanoscale inhomogeneity of OPV materials affects the performance of photovoltaic devices. Thus, understanding spatial variations in the composition as well as the electrical properties of OPV materials is of paramount importance for moving PV technology forward¹,².
In this paper, we describe a protocol for quantitative measurements of the electrical and mechanical properties of OPV materials with sub-100 nm resolution. Currently, materials property measurements performed using commercially available AFM-based techniques (PeakForce, conductive AFM) generally provide only qualitative information. The values for resistance as well as Young's modulus measured using our method on the prototypical ITO/PEDOT:PSS/P3HT:PC₆₁BM system correspond well with literature data. The P3HT:PC₆₁BM blend separates into PC₆₁BM-rich and P3HT-rich domains. The mechanical properties of the PC₆₁BM-rich and P3HT-rich domains differ, which allows for domain attribution on the surface of the film. Importantly, combining the mechanical and electrical data allows the domain structure on the surface of the film to be correlated with the variation in electrical properties measured through the thickness of the film.
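Quantitative Young's modulus values from AFM force-indentation data are commonly extracted with a Hertzian sphere-on-flat contact model, F = (4/3)·E*·√R·δ^(3/2). The use of the Hertz model here is an assumption for illustration; the paper's exact fitting procedure may differ:

```python
def youngs_modulus_hertz(force_N, indentation_m, tip_radius_m, poisson=0.35):
    """Sample Young's modulus from one point of an AFM force-indentation
    curve via the Hertz model F = (4/3) E* sqrt(R) d^1.5.

    Assumes a rigid tip, so 1/E* ~ (1 - nu^2)/E_sample. The default
    Poisson ratio is an illustrative value, not a measured one.
    """
    e_star = 3.0 * force_N / (4.0 * tip_radius_m**0.5 * indentation_m**1.5)
    return e_star * (1.0 - poisson**2)
```

Fitting the whole contact portion of the curve, rather than a single point, is what the quantitative AFM protocols do in practice; the single-point inversion above shows the underlying relation.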
Materials Science, Issue 71, Nanotechnology, Mechanical Engineering, Electrical Engineering, Computer Science, Physics, electrical transport properties in solids, condensed matter physics, thin films (theory, deposition and growth), conductivity (solid state), AFM, atomic force microscopy, electrical properties, mechanical properties, organic photovoltaics, microengineering, photovoltaics
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in a multivariate fashion, i.e., voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, the application of DTI methods, i.e., differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
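FA, the workhorse metric above, is computed per voxel from the three eigenvalues λ₁, λ₂, λ₃ of the diffusion tensor as FA = √(3/2)·√(Σᵢ(λᵢ − λ̄)²) / √(Σᵢλᵢ²). A direct sketch of that formula:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the diffusion-tensor eigenvalues.

    Returns 0 for isotropic diffusion (all eigenvalues equal) and
    approaches 1 when one eigenvalue dominates, as along a coherent
    white-matter fiber bundle.
    """
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)
```

Voxelwise FA maps computed this way are the input both to the whole-brain comparisons and to the tractwise (TFAS) statistics described above.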
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques¹,⁵,⁶,⁷,⁸,⁹. Multivariate approaches evaluate the correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also yield greater statistical power than univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier to entry for multivariate approaches, preventing their more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction to multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
The Ladder Rung Walking Task: A Scoring System and its Practical Application.
Institutions: University of Lethbridge.
Progress in the development of animal models for stroke, spinal cord injury, and other neurodegenerative diseases requires tests of high sensitivity to elaborate distinct aspects of motor function and to detect even subtle loss of movement capacity. To enhance the efficacy and resolution of testing, tests should permit qualitative and quantitative measures of motor function and be sensitive to changes in performance during recovery periods. The present study describes a new task to assess skilled walking in the rat, measuring both forelimb and hindlimb function at the same time. Animals are required to walk along a horizontal ladder on which the spacing of the rungs is variable and is periodically changed. Changes in rung spacing prevent animals from learning the absolute and relative locations of the rungs and so minimize their ability to compensate for impairments through learning. In addition, changing the spacing between the rungs allows the test to be used repeatedly in long-term studies. Methods are described for both quantitative and qualitative assessment of fore- and hindlimb performance, including limb placing, stepping, and coordination. Furthermore, the use of compensatory strategies is indicated by missteps or compensatory steps in response to another limb's misplacement.
Neuroscience, Issue 28, rat, animal model of walking, skilled movement, ladder test, rung test, neuroscience