It is well recognized that human movement in space and time directly influences disease transmission [1-3]. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Daily mobility-activity information can therefore serve as an indicator of exposure to infection risk factors. However, a major difficulty, and thus a reason for the paucity of studies of infectious disease transmission at the micro scale, is the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for participants, and their degree of cooperation greatly affects data quality [4].
Modern technologies such as GPS and mobile communications have made the automatic collection of trajectory data possible. The collected data, however, are not ideal for modeling human space-time activities, being limited by the accuracy of existing devices, and there is no readily available tool for efficiently processing the data for human behavior studies. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data; we introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation [5] involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping [6] and density volume rendering [7]. We also include other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical and visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
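As a sketch of the activity-space characterization step, the snippet below computes a simple activity radius: the mean and maximum distance of GPS fixes from their centroid. This is an illustrative Python sketch, not the paper's ArcGIS implementation; the function name and the equirectangular distance approximation are our own assumptions.

```python
import math

def activity_radius(points):
    """Mean and max distance (meters) of GPS fixes from their centroid.

    `points` is a list of (lat, lon) tuples in decimal degrees; an
    equirectangular approximation is used, which is adequate for
    city-scale activity spaces.
    """
    R = 6371000.0  # mean Earth radius in meters
    lat0 = sum(p[0] for p in points) / len(points)
    lon0 = sum(p[1] for p in points) / len(points)
    dists = []
    for lat, lon in points:
        dx = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
        dy = math.radians(lat - lat0) * R
        dists.append(math.hypot(dx, dy))
    return sum(dists) / len(dists), max(dists)
```

A symmetric cluster of fixes 0.001 degrees from the centroid yields a radius of roughly 111 m, matching the familiar ~111 km per degree of latitude.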
Magnetic Resonance Imaging Quantification of Pulmonary Perfusion using Calibrated Arterial Spin Labeling
Institutions: University of California San Diego (UCSD).
This demonstrates an MR imaging method to measure the spatial distribution of pulmonary blood flow in healthy subjects during normoxia (inspired O2 fraction, FIO2 = 0.21), hypoxia (FIO2 = 0.125), and hyperoxia (FIO2 = 1.00). In addition, the physiological responses of the subject are monitored in the MR scan environment. MR images were obtained on a 1.5 T GE MRI scanner during a breath hold from a sagittal slice in the right lung at functional residual capacity. An arterial spin labeling sequence (ASL-FAIRER) was used to measure the spatial distribution of pulmonary blood flow [1,2], and a multi-echo fast gradient echo (mGRE) sequence [3] was used to quantify the regional proton (i.e., H2O) density, allowing the quantification of density-normalized perfusion for each voxel (milliliters of blood per minute per gram of lung tissue).
With a pneumatic switching valve and a facemask equipped with a 2-way non-rebreathing valve, different oxygen concentrations were introduced to the subject in the MR scanner through the inspired gas tubing. A metabolic cart collected expiratory gas via expiratory tubing. Mixed expiratory O2 concentration, oxygen consumption, carbon dioxide production, respiratory exchange ratio, respiratory frequency, and tidal volume were measured. Heart rate and oxygen saturation were monitored using pulse oximetry.
Data obtained from a normal subject showed that, as expected, heart rate was higher during hypoxia (60 bpm) than during normoxia (51 bpm) or hyperoxia (50 bpm), and arterial oxygen saturation (SpO2) was reduced during hypoxia to 86%. Mean ventilation was 8.31 L/min BTPS during hypoxia, 7.04 L/min during normoxia, and 6.64 L/min during hyperoxia. Tidal volume was 0.76 L during hypoxia, 0.69 L during normoxia, and 0.67 L during hyperoxia.
Representative quantified ASL data showed a mean density-normalized perfusion of 8.86 ml/min/g during hypoxia, 8.26 ml/min/g during normoxia, and 8.46 ml/min/g during hyperoxia. In this subject, the relative dispersion [4], an index of global heterogeneity, was increased in hypoxia (1.07 during hypoxia, 0.85 during normoxia, and 0.87 during hyperoxia), while the fractal dimension (Ds), another index of heterogeneity reflecting vascular branching structure, was unchanged (1.24 during hypoxia, 1.26 during normoxia, and 1.26 during hyperoxia).
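The relative dispersion reported above is the standard deviation of voxel perfusion divided by its mean (i.e., the coefficient of variation). A minimal illustration of that calculation, with function and variable names of our own choosing:

```python
import statistics

def relative_dispersion(perfusion):
    """Relative dispersion (coefficient of variation) of voxel-wise
    perfusion values: population standard deviation divided by the mean.
    Higher values indicate greater global heterogeneity of blood flow."""
    return statistics.pstdev(perfusion) / statistics.mean(perfusion)
```

For example, voxel values of 1 and 3 ml/min/g give a mean of 2, a standard deviation of 1, and thus a relative dispersion of 0.5.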
Overview. This protocol will demonstrate the acquisition of data to measure the distribution of pulmonary perfusion noninvasively under conditions of normoxia, hypoxia, and hyperoxia using a magnetic resonance imaging technique known as arterial spin labeling (ASL).
Rationale: Measurement of pulmonary blood flow and lung proton density using MR techniques offers high-spatial-resolution images that can be quantified, along with the ability to perform repeated measurements under several different physiological conditions. In human studies, PET, SPECT, and CT are commonly used as alternative techniques. However, these techniques involve exposure to ionizing radiation and are thus not suitable for repeated measurements in human subjects.
Medicine, Issue 51, arterial spin labeling, lung proton density, functional lung imaging, hypoxic pulmonary vasoconstriction, oxygen consumption, ventilation, magnetic resonance imaging
Label-free in situ Imaging of Lignification in Plant Cell Walls
Institutions: University of California, Berkeley, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Meeting growing energy demands safely and efficiently is a pressing global challenge. Therefore, research into biofuel production that seeks cost-effective and sustainable solutions has become a topical and critical task. Lignocellulosic biomass is poised to become the primary source of biomass for conversion to liquid biofuels [1-6]. However, the recalcitrance of these plant cell wall materials to cost-effective and efficient degradation presents a major impediment to their use in the production of biofuels and chemicals [4]. In particular, lignin, a complex and irregular poly-phenylpropanoid heteropolymer, is problematic for the postharvest deconstruction of lignocellulosic biomass. For example, in biomass conversion for biofuels, it inhibits saccharification in processes aimed at producing simple sugars for fermentation [7]. The effective use of plant biomass for industrial purposes is in fact largely dependent on the extent to which the plant cell wall is lignified. The removal of lignin is a costly and limiting factor [8], and lignin has therefore become a key plant breeding and genetic engineering target for improving cell wall conversion.
Analytical tools that permit accurate and rapid characterization of lignification in plant cell walls are increasingly important for evaluating large numbers of breeding populations. Extractive procedures for the isolation of native components such as lignin are inevitably destructive, bringing about significant chemical and structural modifications [9-11]. In situ analytical chemical methods are thus invaluable tools for the compositional and structural characterization of lignocellulosic materials. Raman microscopy is a technique that relies on inelastic (Raman) scattering of monochromatic light, such as that from a laser, where the shift in energy of the laser photons is related to molecular vibrations and presents an intrinsic, label-free molecular "fingerprint" of the sample. Raman microscopy affords non-destructive and comparatively inexpensive measurements with minimal sample preparation, giving insight into chemical composition and molecular structure in a close-to-native state. Chemical imaging by confocal Raman microscopy has previously been used to visualize the spatial distribution of cellulose and lignin in wood cell walls [12-14]. Based on these earlier results, we recently adopted this method to compare lignification in wild-type and lignin-deficient transgenic Populus trichocarpa (black cottonwood) stem wood [15]. By analyzing the lignin Raman bands [16,17] in the spectral region between 1,600 and 1,700 cm-1, lignin signal intensity and localization were mapped in situ. Our approach visualized differences in lignin content, localization, and chemical composition. Most recently, we demonstrated Raman imaging of cell wall polymers in Arabidopsis thaliana with sub-μm lateral resolution [18]. Here, this method is presented, affording visualization of lignin in plant cell walls and comparison of lignification in different tissues, samples, or species without staining or labeling of the tissues.
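As an illustration of the band-mapping step, the sketch below integrates intensity in the 1,600-1,700 cm-1 lignin region for each pixel of a Raman map. The function names and data layout are our own assumptions, not the authors' analysis software:

```python
def band_intensity(wavenumbers, intensities, lo=1600.0, hi=1700.0):
    """Sum spectral intensity within [lo, hi] cm^-1 — a simple proxy
    for the lignin signal at one pixel of a Raman map."""
    return sum(i for w, i in zip(wavenumbers, intensities) if lo <= w <= hi)

def lignin_map(spectra):
    """spectra: dict mapping (x, y) pixel -> (wavenumbers, intensities).
    Returns a dict mapping each pixel to its integrated lignin-band
    intensity, from which an image can be rendered."""
    return {px: band_intensity(w, i) for px, (w, i) in spectra.items()}
```

In practice, baseline correction and cosmic-ray removal would precede such integration; they are omitted here for brevity.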
Plant Biology, Issue 45, Raman microscopy, lignin, poplar wood, Arabidopsis thaliana
Blastomere Explants to Test for Cell Fate Commitment During Embryonic Development
Institutions: The George Washington University, The George Washington University.
Fate maps, constructed by lineage tracing all of the cells of an embryo, reveal which tissues descend from each cell of the embryo. Although fate maps are very useful for identifying the precursors of an organ and for elucidating the developmental path by which the descendant cells populate that organ in the normal embryo, they do not illustrate the full developmental potential of a precursor cell or identify the mechanisms by which its fate is determined. To test for cell fate commitment, one compares a cell's normal repertoire of descendants in the intact embryo (the fate map) with those expressed after an experimental manipulation. Is the cell's fate fixed (committed) regardless of the surrounding cellular environment, or is it influenced by external factors provided by its neighbors? Using the comprehensive fate maps of the Xenopus embryo, we describe how to identify, isolate, and culture single cleavage-stage precursors, called blastomeres. This approach allows one to assess whether these early cells are committed to the fate they acquire in their normal environment in the intact embryo, require interactions with their neighboring cells, or can be influenced to express alternate fates if exposed to other types of signals.
Developmental Biology, Issue 71, Cellular Biology, Molecular Biology, Anatomy, Physiology, Biochemistry, Xenopus laevis, fate mapping, lineage tracing, cell-cell signaling, cell fate, blastomere, embryo, in situ hybridization, animal model
A Noninvasive Method For In situ Determination of Mating Success in Female American Lobsters (Homarus americanus)
Institutions: University of New Hampshire, Massachusetts Division of Marine Fisheries, Boston University, Middle College.
Despite being one of the most productive fisheries in the Northwest Atlantic, much remains unknown about the natural reproductive dynamics of American lobsters. Recent work in exploited crustacean populations (crabs and lobsters) suggests that there are circumstances where mature females are unable to achieve their full reproductive potential due to sperm limitation. To examine this possibility in different regions of the American lobster fishery, a reliable and noninvasive method was developed for sampling large numbers of female lobsters at sea. This method involves inserting a blunt-tipped needle into the female's seminal receptacle to determine the presence or absence of a sperm plug and to withdraw a sample that can be examined for the presence of sperm. A series of control studies were conducted at the dock and in the laboratory to test the reliability of this technique. These efforts entailed sampling 294 female lobsters to confirm that the presence of a sperm plug was a reliable indicator of sperm within the receptacle and, thus, mating. This paper details the methodology and the results obtained from a subset of the total females sampled. Of the 230 female lobsters sampled from Georges Bank and Cape Ann, MA (size range = 71-145 mm carapace length), 90.3% were positive for sperm. Potential explanations for the absence of sperm in some females include immaturity (lack of physiological maturity), breakdown of the sperm plug after being used to fertilize a clutch of eggs, and lack of mating activity. The surveys indicate that this technique for examining the mating success of female lobsters is a reliable proxy that can be used in the field to document reproductive activity in natural populations.
Environmental Sciences, Issue 84, sperm limitation, spermatophore, lobster fishery, sex ratios, sperm receptacle, mating, American lobster, Homarus americanus
In vivo Quantification of G Protein Coupled Receptor Interactions using Spectrally Resolved Two-photon Microscopy
Institutions: University of Wisconsin - Milwaukee, University of Wisconsin - Milwaukee.
The study of protein interactions in living cells is an important area of research because the information accumulated both benefits industrial applications and increases basic biological knowledge. Förster (fluorescence) resonance energy transfer (FRET) between a donor molecule in an electronically excited state and a nearby acceptor molecule has frequently been utilized for studies of protein-protein interactions in living cells. The proteins of interest are tagged with two different types of fluorescent probes and expressed in biological cells. The fluorescent probes are then excited, typically using laser light, and the spectral properties of the fluorescence emission emanating from the probes are collected and analyzed. Information regarding the degree of protein interaction is embedded in these spectral data. Typically, the cell must be scanned a number of times to accumulate enough spectral information to accurately quantify the extent of protein interaction for each region of interest within the cell. However, the molecular composition of these regions may change during the acquisition process, limiting the spatial determination of the quantitative values of apparent FRET efficiencies to an average over entire cells. By means of a spectrally resolved two-photon microscope, we are able to obtain a full set of spectrally resolved images after only one complete excitation scan of the sample of interest. From this pixel-level spectral data, a map of FRET efficiencies throughout the cell is calculated. By applying a simple theory of FRET in oligomeric complexes to the experimentally obtained distribution of FRET efficiencies throughout the cell, a single spectrally resolved scan reveals stoichiometric and structural information about the oligomeric complex under study. Here we describe the procedure of preparing biological cells (the yeast Saccharomyces cerevisiae) expressing membrane receptors (sterile 2 α-factor receptors) tagged with two different types of fluorescent probes. Furthermore, we illustrate critical factors involved in collecting fluorescence data using the spectrally resolved two-photon microscopy imaging system. The use of this protocol may be extended to study any type of protein that can be expressed in a living cell with a fluorescent marker attached to it.
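Apparent FRET efficiency is commonly computed from donor quenching as E = 1 - F_DA/F_D, where F_DA is donor fluorescence in the presence of the acceptor and F_D in its absence. The pixel-wise sketch below is illustrative only (it is not the authors' spectral-unmixing analysis; names and data layout are assumptions):

```python
def fret_efficiency(f_da, f_d):
    """Apparent FRET efficiency from donor fluorescence in the presence
    (f_da) and absence (f_d) of the acceptor: E = 1 - F_DA / F_D."""
    return 1.0 - f_da / f_d

def efficiency_map(donor_quenched, donor_alone):
    """Pixel-wise map of apparent FRET efficiencies from two equally
    sized lists of donor intensities (one value per pixel)."""
    return [fret_efficiency(da, d)
            for da, d in zip(donor_quenched, donor_alone)]
```

A donor quenched from 100 to 25 intensity units corresponds to an apparent efficiency of 0.75.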
Cellular Biology, Issue 47, Forster (Fluorescence) Resonance Energy Transfer (FRET), protein-protein interactions, protein complex, in vivo determinations, spectral resolution, two-photon microscopy, G protein-coupled receptor (GPCR), sterile 2 alpha-factor protein (Ste2p)
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time, and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration.
Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release.
The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
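The release rate derived from a core incubation is typically the change in water-column P mass normalized by core area and elapsed time. A minimal sketch of that arithmetic (function and variable names, and the choice of units, are our own):

```python
def p_release_rate(c0_mg_l, ct_mg_l, volume_l, area_m2, days):
    """Areal phosphorus release rate (mg P m^-2 d^-1) from a sediment
    core incubation: the change in overlying-water P concentration
    times water volume, divided by sediment surface area and time.
    Negative values indicate net P uptake by the sediment."""
    return (ct_mg_l - c0_mg_l) * volume_l / (area_m2 * days)
```

For example, a rise from 0.02 to 0.10 mg/L in 1 L of overlying water above a 0.005 m^2 core over 10 days corresponds to 1.6 mg P m^-2 d^-1.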
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Aseptic Laboratory Techniques: Plating Methods
Institutions: University of California, Los Angeles .
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as to prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, i.e., procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as the Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification, safety precautions, and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
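The pour- and spread-plate counts above convert to a viable-cell concentration by dividing the colony count by the plated volume times the dilution factor. A minimal sketch of that standard calculation:

```python
def cfu_per_ml(colonies, plated_volume_ml, dilution_factor):
    """Viable count from a pour or spread plate:
    CFU/mL = colonies / (volume plated x dilution factor).

    Example: 46 colonies from 0.1 mL of a 10^-6 dilution give
    46 / (0.1 * 1e-6) = 4.6e8 CFU/mL of the original culture."""
    return colonies / (plated_volume_ml * dilution_factor)
```

Only plates with a countable number of colonies (conventionally about 30-300) should be used for this calculation.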
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
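A full-factorial enumeration is the simplest way to picture the experiment space that DoE software then prunes to an optimal subset of runs. The sketch below is illustrative (it is not the software used in the study, and the factor names are invented for the example):

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels for a full-factorial
    design. `factors` maps factor name -> list of levels; the result is
    one dict per experimental run. DoE software typically selects a
    fraction of these runs, but the full grid shows what is sampled."""
    names = sorted(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]
```

Three two-level factors yield 2^3 = 8 runs; a fractional design would cover the same factors with fewer runs at the cost of confounded interactions.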
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Assaying Locomotor Activity to Study Circadian Rhythms and Sleep Parameters in Drosophila
Institutions: Rutgers University, University of California, Davis, Rutgers University.
Most life forms exhibit daily rhythms in cellular, physiological, and behavioral phenomena that are driven by endogenous circadian (~24 hr) pacemakers or clocks. Malfunctions in the human circadian system are associated with numerous diseases or disorders. Much progress toward understanding the mechanisms underlying circadian rhythms has emerged from genetic screens whereby an easily measured behavioral rhythm is used as a read-out of clock function. Studies using Drosophila have made seminal contributions to our understanding of the cellular and biochemical bases of circadian rhythms. The standard circadian behavioral read-out measured in Drosophila is locomotor activity. In general, the monitoring system involves specially designed devices that can measure the locomotor movement of Drosophila. These devices are housed in environmentally controlled incubators located in a darkroom and use the interruption of an infrared light beam to record the locomotor activity of individual flies contained inside small tubes. When measured over many days, Drosophila exhibit daily cycles of activity and inactivity, a behavioral rhythm that is governed by the animal's endogenous circadian system. The overall procedure has been simplified by the advent of commercially available locomotor activity monitoring devices and the development of software programs for data analysis. We use the system from Trikinetics Inc., which is the procedure described here and is currently the most popular system used worldwide. More recently, the same monitoring devices have been used to study sleep behavior in Drosophila. Because the daily wake-sleep cycles of many flies can be measured simultaneously, and 1 to 2 weeks of continuous locomotor activity data are usually sufficient, this system is ideal for large-scale screens to identify Drosophila manifesting altered circadian or sleep properties.
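Beam-break records from such monitors are conventionally binned into fixed intervals (e.g., 30 min) to build activity profiles and actograms. A minimal, illustrative binning sketch (this is a generic analogue, not the Trikinetics analysis software):

```python
def bin_activity(event_times_min, bin_min=30, total_min=1440):
    """Histogram of beam-break timestamps (minutes from lights-on)
    into fixed-width bins across one recording period — the raw
    material of a daily activity profile or actogram row."""
    n_bins = total_min // bin_min
    counts = [0] * n_bins
    for t in event_times_min:
        counts[int(t // bin_min) % n_bins] += 1
    return counts
```

Stacking one such row per day, double-plotted, gives the familiar actogram; periodogram analysis of the binned series then estimates the free-running period.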
Neuroscience, Issue 43, circadian rhythm, locomotor activity, Drosophila, period, sleep, Trikinetics
Mapping Inhibitory Neuronal Circuits by Laser Scanning Photostimulation
Institutions: University of California, Irvine, University of California, Irvine.
Inhibitory neurons are crucial to cortical function. They comprise about 20% of the entire cortical neuronal population and can be further subdivided into diverse subtypes based on their immunochemical, morphological, and physiological properties [1-4]. Although previous research has revealed much about the intrinsic properties of individual types of inhibitory neurons, knowledge about their local circuit connections is still relatively limited [3,5,6]. Given that each neuron's function is shaped by its excitatory and inhibitory synaptic input within cortical circuits, we have been using laser scanning photostimulation (LSPS) to map local circuit connections to specific inhibitory cell types. Compared to conventional electrical stimulation or glutamate puff stimulation, LSPS has unique advantages, allowing extensive mapping and quantitative analysis of local functional inputs to individually recorded neurons [3,7-9]. Laser photostimulation via glutamate uncaging selectively activates neurons perisomatically, without activating axons of passage or distal dendrites, which ensures sub-laminar mapping resolution. The sensitivity and efficiency of LSPS for mapping inputs from many stimulation sites over a large region are well suited to cortical circuit analysis.
Here we introduce the technique of LSPS combined with whole-cell patch clamping for local inhibitory circuit mapping. Targeted recordings of specific inhibitory cell types are facilitated by the use of transgenic mice expressing green fluorescent protein (GFP) in limited inhibitory neuron populations in the cortex [3,10], which enables consistent sampling of the targeted cell types and unambiguous identification of the cell types recorded. For LSPS mapping, we outline the system instrumentation, describe the experimental procedure and data acquisition, and present examples of circuit mapping in mouse primary somatosensory cortex. As illustrated in our experiments, caged glutamate is activated in a spatially restricted region of the brain slice by UV laser photolysis; simultaneous voltage-clamp recordings allow detection of photostimulation-evoked synaptic responses. Maps of either excitatory or inhibitory synaptic input to the targeted neuron are generated by scanning the laser beam to stimulate hundreds of potential presynaptic sites. Thus, through repeated experiments, LSPS enables the construction of detailed maps of synaptic inputs impinging on specific types of inhibitory neurons. Taken together, the photostimulation-based technique offers neuroscientists a powerful tool for determining the functional organization of local cortical circuits.
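Because sites are stimulated in a scan pattern, the resulting input map is essentially the per-site response amplitudes arranged on the stimulation grid. An illustrative sketch of that bookkeeping (not the authors' acquisition code; the row-by-row scan order is an assumption):

```python
def input_map(responses, rows, cols):
    """Arrange photostimulation-evoked synaptic response amplitudes
    (one value per stimulation site, scanned row by row) into a
    rows x cols grid representing the spatial map of presynaptic
    input to the recorded neuron."""
    assert len(responses) == rows * cols, "one amplitude per site"
    return [responses[r * cols:(r + 1) * cols] for r in range(rows)]
```

In a real experiment, each amplitude would first be corrected for direct (non-synaptic) responses at sites near the recorded soma.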
Neuroscience, Issue 56, glutamate uncaging, whole cell recording, GFP, transgenic, interneurons
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
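The daily harvesting step amounts to rolling up time-stamped event records per subject. The actual system described above is MATLAB-based; the sketch below is a generic Python analogue with invented field names, shown only to illustrate the roll-up:

```python
def daily_summary(events):
    """events: iterable of (timestamp_s, subject_id, event_code) tuples
    from a time-stamped behavioral event record. Returns per-subject
    counts of each event code — the kind of daily roll-up used to
    visualize and quantify each mouse's progress."""
    summary = {}
    for t, subj, code in events:
        summary.setdefault(subj, {}).setdefault(code, 0)
        summary[subj][code] += 1
    return summary
```

Such summaries, recomputed several times a day, can drive the automated advancement of individual subjects from protocol to protocol.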
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents are often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g. bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints similar to those produced in water mazes (e.g. distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). The type of search strategy (i.e. random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified that motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g. shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment measuring spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations, or by drug/toxicant exposure.
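The tracking-derived endpoints mentioned above (distance traveled, velocity, latency) follow directly from time-stamped coordinates. As a minimal illustrative sketch, assuming a track of (t, x, y) samples in seconds and centimeters (the function name and units are our own, not from any particular tracking package):

```python
import math

def track_endpoints(track):
    """Compute simple maze endpoints from a list of (t, x, y) samples.

    track: time-ordered samples; t in seconds, x/y in cm.
    Returns (path_length_cm, mean_velocity_cm_s, latency_s).
    """
    path = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        path += math.hypot(x1 - x0, y1 - y0)  # step-by-step path length
    latency = track[-1][0] - track[0][0]      # time from start to escape
    mean_velocity = path / latency if latency > 0 else 0.0
    return path, mean_velocity, latency

# Straight 40 cm run completed in 10 s:
print(track_endpoints([(0, 0, 0), (5, 20, 0), (10, 40, 0)]))  # (40.0, 4.0, 10)
```

Time in the correct quadrant or moving/resting classification would layer simple position and speed thresholds on top of the same samples.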
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
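The ~10-30 nm precision quoted above is governed chiefly by the number of photons detected per molecule. A hedged sketch using the widely cited Thompson-Larson-Webb estimate (s = PSF standard deviation, N = detected photons, a = pixel size, b = background noise per pixel; the numeric values below are illustrative, not from this protocol):

```python
import math

def localization_precision(s, N, a, b):
    """Thompson-Larson-Webb estimate of 2D localization error.

    s and a in the same length units (e.g. nm); N photons;
    b background photons per pixel.
    """
    var = (s**2 + a**2 / 12) / N + 8 * math.pi * s**4 * b**2 / (a**2 * N**2)
    return math.sqrt(var)

# Illustrative: 120 nm PSF sigma, 100 nm pixels, background of 5,
# 1,000 photons -> error of roughly 5 nm; more photons, better precision.
print(localization_precision(120, 1000, 100, 5))
print(localization_precision(120, 4000, 100, 5))
```

The shot-noise term alone gives the familiar s/sqrt(N) scaling; the pixelation and background terms matter at low photon counts.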
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
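Fractional anisotropy, the voxelwise metric compared throughout the analyses above, is a simple function of the three diffusion tensor eigenvalues. A minimal sketch (pure Python; the eigenvalues are assumed to come from the DTI fitting pipeline):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion-tensor eigenvalues: 0 for isotropic diffusion,
    approaching 1 for diffusion constrained to a single axis."""
    mean = (l1 + l2 + l3) / 3
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den > 0 else 0.0

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0 (isotropic, e.g. CSF)
print(fractional_anisotropy(1.7, 0.3, 0.3))  # high FA, coherent white matter
```

Tractwise statistics such as TFAS then aggregate these voxel values along reconstructed fiber tracts.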
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Laboratory Estimation of Net Trophic Transfer Efficiencies of PCB Congeners to Lake Trout (Salvelinus namaycush) from Its Prey
Institutions: U. S. Geological Survey, Grand Valley State University, Shedd Aquarium.
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of the 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
Environmental Sciences, Issue 90, trophic transfer efficiency, polychlorinated biphenyl congeners, lake trout, activity, contaminants, accumulation, risk assessment, toxic equivalents
Measuring Intracellular Ca2+ Changes in Human Sperm using Four Techniques: Conventional Fluorometry, Stopped Flow Fluorometry, Flow Cytometry and Single Cell Imaging
Institutions: Instituto de Biotecnología-Universidad Nacional Autónoma de México, Edison State College.
Spermatozoa are male reproductive cells especially designed to reach, recognize and fuse with the egg. To perform these tasks, sperm cells must be prepared to face a constantly changing environment and to overcome several physical barriers. Being in essence transcriptionally and translationally silent, these motile cells rely profoundly on diverse signaling mechanisms to orient themselves and swim in a directed fashion, and to contend with challenging environmental conditions during their journey to find the egg. In particular, Ca2+-mediated signaling is pivotal for several sperm functions: activation of motility, capacitation (a complex process that prepares sperm for the acrosome reaction) and the acrosome reaction (an exocytotic event that allows sperm-egg fusion). The use of fluorescent dyes to track intracellular fluctuations of this ion is of remarkable importance due to their ease of application, sensitivity, and versatility of detection. Using one single dye-loading protocol, we utilize four different fluorometric techniques to monitor sperm Ca2+ dynamics. Each technique provides distinct information that enables spatial and/or temporal resolution, generating data both at single cell and cell population levels.
Cellular Biology, Issue 75, Medicine, Molecular Biology, Genetics, Biophysics, Anatomy, Physiology, Spermatozoa, Ion Channels, Cell Physiological Processes, Calcium Signaling, Reproductive Physiological Processes, fluorometry, Flow cytometry, stopped flow fluorometry, single-cell imaging, human sperm, sperm physiology, intracellular Ca2+, Ca2+ signaling, Ca2+ imaging, fluorescent dyes, imaging
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
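Leave-one-patient-out validation, as used above, holds out all mammograms of one patient at a time so that images of the same patient never appear in both the training and test sets. A minimal sketch of just the split logic (pure Python; the classifier itself is omitted):

```python
def leave_one_patient_out(patient_ids):
    """Yield (train_indices, test_indices) pairs, one fold per patient.

    patient_ids: one patient label per image, in dataset order.
    """
    patients = sorted(set(patient_ids))
    for held_out in patients:
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield train, test

# Four images from three patients:
for train, test in leave_one_patient_out(["A", "A", "B", "C"]):
    print(train, test)
# [2, 3] [0, 1]
# [0, 1, 3] [2]
# [0, 1, 2] [3]
```

Splitting by patient rather than by image is what prevents the optimistic bias that correlated images from one case would otherwise introduce.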
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Using an Automated 3D-tracking System to Record Individual and Shoals of Adult Zebrafish
Like many aquatic animals, zebrafish (Danio rerio) moves in a 3D space. It is thus preferable to use a 3D recording system to study its behavior. The presented automatic video tracking system accomplishes this by using a mirror system and a calibration procedure that corrects for the considerable error introduced by the transition of light from water to air. With this system it is possible to record both single and groups of adult zebrafish. Before use, the system has to be calibrated. The system consists of three modules: Recording, Path Reconstruction, and Data Processing. The step-by-step protocols for calibration and for using the three modules are presented. Depending on the experimental setup, the system can be used for testing neophobia, white aversion, social cohesion, motor impairments, novel object exploration, etc. It is especially promising as a first-step tool to study the effects of drugs or mutations on basic behavioral patterns. The system provides information about the vertical and horizontal distribution of the zebrafish and about the xyz-components of kinematic parameters (such as locomotion, velocity, acceleration, and turning angle), and it provides the data necessary to calculate parameters for social cohesion when testing shoals.
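The xyz kinematic parameters listed above follow from finite differences of the reconstructed 3D positions. A minimal sketch, assuming uniformly sampled (x, y, z) positions at frame interval dt (an illustration of the arithmetic, not the toolbox's actual code):

```python
import math

def kinematics(positions, dt):
    """Per-step speed and per-triple turning angle (radians) from a list
    of (x, y, z) positions sampled every dt seconds."""
    steps = [tuple(b - a for a, b in zip(p, q))
             for p, q in zip(positions, positions[1:])]
    speeds = [math.sqrt(sum(c * c for c in s)) / dt for s in steps]
    angles = []
    for u, v in zip(steps, steps[1:]):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (math.sqrt(sum(a * a for a in u))
                * math.sqrt(sum(b * b for b in v)))
        # Clamp against floating-point overshoot before acos.
        angles.append(math.acos(max(-1.0, min(1.0, dot / norm))))
    return speeds, angles

# Straight swim along x at constant speed: zero turning angle.
speeds, angles = kinematics([(0, 0, 0), (1, 0, 0), (2, 0, 0)], dt=0.5)
print(speeds, angles)  # [2.0, 2.0] [0.0]
```

Acceleration is obtained the same way by differencing the speed series; shoal cohesion measures would add inter-individual distances on the same coordinates.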
Behavior, Issue 82, neuroscience, Zebrafish, Danio rerio, anxiety, Shoaling, Pharmacology, 3D-tracking, MK801
Harvesting Sperm and Artificial Insemination of Mice
Institutions: University of California, Irvine (UCI).
Rodents of the genus Peromyscus (deer mice) are the most prevalent native North American mammals. Peromyscus species are used in a wide range of research including toxicology, epidemiology, ecology, behavioral, and genetic studies. Here they provide a useful model for demonstrations of artificial insemination.
Methods similar to those displayed here have previously been used in several deer mouse studies, yet no detailed protocol has been published. Here we demonstrate the basic method of artificial insemination. This method entails extracting the testes from the rodent, then isolating the sperm from the epididymis and vas deferens. The mature sperm, now in a milk mixture, are placed in the female’s reproductive tract at the time of ovulation. Fertilization is counted as day 0 for timing of embryo development. Embryos can then be retrieved at the desired time-point and manipulated.
Artificial insemination can be used in a variety of rodent species where exact embryo timing is crucial or hard to obtain. This technique is vital for species or strains (including most Peromyscus) which may not mate immediately and/or where mating is hard to assess. In addition, artificial insemination provides exact timing for embryo development either in mapping developmental progress and/or transgenic work. Reduced numbers of animals can be used since fertilization is guaranteed. This method has been vital to furthering the Peromyscus system, and will hopefully benefit others as well.
Developmental Biology, Issue 3, sperm, mouse, artificial insemination, dissection
Layers of Symbiosis - Visualizing the Termite Hindgut Microbial Community
Institutions: California Institute of Technology - Caltech.
Jared Leadbetter takes us for a nature walk through the diversity of life resident in the termite hindgut - a microenvironment containing 250 different species found nowhere else on Earth. Jared reveals that the symbiosis exhibited by this system is multi-layered and involves not only a relationship between the termite and its gut inhabitants, but also involves a complex web of symbiosis among the gut microbes themselves.
Microbiology, issue 4, microbial community, symbiosis, hindgut
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226, and a user-specified set of conservation practices and their costs, to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
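The tradeoff frontier described above is the set of non-dominated (cost, pollution) outcomes among all candidate allocations the algorithm evaluates. A minimal sketch of that non-domination filter (pure Python, assuming distinct candidate points; SWAT and SPEA2 themselves are far beyond a snippet's scope):

```python
def pareto_frontier(points):
    """Keep the non-dominated (cost, pollution) pairs, both minimized.

    A point is dominated if some other point is no worse in both
    objectives and is not the same point.
    """
    def dominates(p, q):
        return p[0] <= q[0] and p[1] <= q[1] and p != q

    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

# Candidate watershed configurations (cost in $k, nitrate load in tons):
candidates = [(100, 9.0), (120, 9.5), (150, 6.0), (200, 6.0), (250, 4.0)]
print(pareto_frontier(candidates))  # [(100, 9.0), (150, 6.0), (250, 4.0)]
```

Each surviving pair answers the manager's question directly: what is the cheapest configuration found that achieves at least this level of water quality improvement?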
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development