JoVE Visualize
Pubmed Article
A theoretically-sufficient and computationally-practical technique for deterministic frequency seriation.
PUBLISHED: 04-30-2015
Frequency seriation played a key role in the formation of archaeology as a discipline due to its ability to generate chronologies. Interest in its utility for exploring issues of contemporary interest beyond chronology, however, has been limited. This limitation is partly due to a lack of quantitative algorithms that can be used to build deterministic seriation solutions. When the number of assemblages becomes greater than just a handful, the resources required for evaluation of possible permutations easily outstrip available computing capacity. On the other hand, probabilistic approaches to creating seriations offer a computationally manageable alternative but rely upon a compressed description of the data to order assemblages. This compression removes the ability to use all of the features of our data to fit the seriation model, obscuring violations of the model, and thus lessens our ability to understand the degree to which the resulting order is chronological, spatial, or a mixture. Recently, frequency seriation has been reconceived as a general method for studying the structure of cultural transmission through time and across space. The use of an evolution-based framework renews the potential for seriation but also calls for a computationally feasible algorithm that is capable of producing solutions under varying configurations, without manual trial-and-error fitting. Here, we introduce the Iterative Deterministic Seriation Solution (IDSS) for constructing frequency seriations, an algorithm that dramatically constrains the search for potentially valid orders of assemblages. Our initial implementation of IDSS does not solve all the problems of seriation, but begins to move toward a resolution of a long-standing problem in archaeology while opening up new avenues of research into the study of cultural relatedness. We demonstrate the utility of IDSS using late prehistoric decorated ceramics from the Mississippi River Valley. The results compare favorably to previous analyses but add new details about the structure of cultural transmission in these late prehistoric populations.
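The ordering criterion that drives deterministic frequency seriation is that each artifact type's relative frequency must rise to a single peak and then fall (a unimodal "battleship curve") across the ordered assemblages. The minimal Python sketch below illustrates only that validity check; it is not the published IDSS implementation, and the assemblage names and frequencies are invented.

```python
# Illustrative sketch of the unimodality constraint behind deterministic frequency
# seriation. Not the IDSS algorithm; assemblage data below are invented.

def is_unimodal(values, tol=0.0):
    """True if the sequence rises (weakly) to a single peak and then falls."""
    peak = values.index(max(values))
    rising = all(values[i] <= values[i + 1] + tol for i in range(peak))
    falling = all(values[i] >= values[i + 1] - tol for i in range(peak, len(values) - 1))
    return rising and falling

def is_valid_seriation(order, assemblages):
    """A candidate order is valid if every type's frequency profile is unimodal."""
    n_types = len(next(iter(assemblages.values())))
    return all(
        is_unimodal([assemblages[name][t] for name in order])
        for t in range(n_types)
    )

# Hypothetical relative frequencies of three ceramic types in four assemblages.
assemblages = {
    "A": [0.70, 0.25, 0.05],
    "B": [0.50, 0.40, 0.10],
    "C": [0.20, 0.55, 0.25],
    "D": [0.05, 0.30, 0.65],
}

print(is_valid_seriation(["A", "B", "C", "D"], assemblages))  # True
print(is_valid_seriation(["B", "D", "A", "C"], assemblages))  # False
```

A deterministic seriation search then amounts to admitting only those orderings (or partial orderings, grown iteratively) that pass this check.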
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Published: 08-06-2013
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is computationally practical to apply at the voxel level. The analysis technique comprises five main steps: pre-processing, modeling, statistical comparison, masking, and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts the analysis to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
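To illustrate why a model that is linear in its parameters is computationally practical at the voxel level, the sketch below fits a synthetic voxel time-activity curve by ordinary least squares against a time-invariant baseline plus a set of candidate time-varying response curves, keeping the candidate with the lowest residual error. This is a generic stand-in rather than the authors' lp-ntPET code; the basis shapes, onset times, and data are invented.

```python
# Generic linear-in-parameters fitting sketch (not the authors' lp-ntPET implementation).
# Each candidate time-varying response is tried with ordinary least squares and the
# best-fitting one is kept. All curves and numbers are invented.
import numpy as np

t = np.arange(0, 90, 1.0)                          # minutes, hypothetical frame times
baseline = np.column_stack([np.ones_like(t), t])   # stand-in for conventional model terms

def gamma_response(t, t_on, alpha=2.0, tau=5.0):
    """Simple gamma-variate activation curve starting at t_on (illustrative only)."""
    s = np.clip(t - t_on, 0, None)
    return (s ** alpha) * np.exp(-s / tau)

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

# Fake voxel time-activity curve with an activation beginning at t = 45 min.
rng = np.random.default_rng(0)
bump = gamma_response(t, 45)
y = 10 + 0.05 * t + 3 * bump / bump.max() + rng.normal(0, 0.3, size=t.size)

rss_conventional = fit(baseline, y)                # time-invariant model only
best_onset, best_rss = min(
    ((t_on, fit(np.column_stack([baseline, gamma_response(t, t_on)]), y))
     for t_on in range(20, 70, 5)),
    key=lambda pair: pair[1],
)
print(f"best onset ~ {best_onset} min, RSS {best_rss:.2f} vs conventional {rss_conventional:.2f}")
```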
23 Related JoVE Articles!
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
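A toy sketch of the design-of-experiments idea follows; it is not the authors' software workflow. It builds a two-level full-factorial design for three hypothetical factors and estimates their main effects on an invented response by least squares.

```python
# Minimal DoE illustration: 2^3 full-factorial design and main-effect estimation.
# Factor names and response values are invented.
import itertools
import numpy as np

factors = ["promoter", "plant_age", "incubation_temp"]     # hypothetical factors
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 8 runs

# Invented responses (e.g., relative expression level) for the eight runs.
response = np.array([1.0, 1.4, 0.9, 1.3, 1.8, 2.6, 1.7, 2.4])

X = np.column_stack([np.ones(len(design)), design])        # intercept + main effects
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

for name, effect in zip(["intercept"] + factors, coef):
    print(f"{name:16s} {effect:+.3f}")
```

Real designs of this kind are usually augmented step-wise and fitted with interaction and higher-order terms, as the abstract describes.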
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University, Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and, if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C-infected Zambians. This method uses restriction enzyme-based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
Transgenic Rodent Assay for Quantifying Male Germ Cell Mutant Frequency
Authors: Jason M. O'Brien, Marc A. Beal, John D. Gingerich, Lynda Soper, George R. Douglas, Carole L. Yauk, Francesco Marchetti.
Institutions: Environmental Health Centre.
De novo mutations arise mostly in the male germline and may contribute to adverse health outcomes in subsequent generations. Traditional methods for assessing the induction of germ cell mutations require the use of large numbers of animals, making them impractical. As such, germ cell mutagenicity is rarely assessed during chemical testing and risk assessment. Herein, we describe an in vivo male germ cell mutation assay using a transgenic rodent model that is based on a recently approved Organisation for Economic Co-operation and Development (OECD) test guideline. This method uses an in vitro positive selection assay to measure in vivo mutations induced in a transgenic λgt10 vector bearing a reporter gene directly in the germ cells of exposed males. We further describe how the detection of mutations in the transgene recovered from germ cells can be used to characterize the stage-specific sensitivity of the various spermatogenic cell types to mutagen exposure by controlling three experimental parameters: the duration of exposure (administration time), the time between exposure and sample collection (sampling time), and the cell population collected for analysis. Because a large number of germ cells can be assayed from a single male, this method has superior sensitivity compared with traditional methods, requires fewer animals and therefore much less time and resources.
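The readout arithmetic of the assay can be illustrated with a short sketch: the mutant frequency for an animal is the number of mutant plaques divided by the total plaque-forming units screened. The counts below are invented, and the confidence interval is a simple normal approximation rather than the statistical treatment prescribed by the OECD test guideline.

```python
# Mutant frequency from invented plaque counts (illustration only).
import math

mutant_plaques = 42          # hypothetical count of mutant plaques
total_pfu = 310_000          # hypothetical total plaque-forming units screened

freq = mutant_plaques / total_pfu
se = math.sqrt(freq * (1 - freq) / total_pfu)      # normal-approximation standard error

print(f"mutant frequency = {freq:.2e} "
      f"(approx. 95% CI {freq - 1.96 * se:.2e} to {freq + 1.96 * se:.2e})")
```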
Genetics, Issue 90, sperm, spermatogonia, male germ cells, spermatogenesis, de novo mutation, OECD TG 488, transgenic rodent mutation assay, N-ethyl-N-nitrosourea, genetic toxicology
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Lensless Fluorescent Microscopy on a Chip
Authors: Ahmet F. Coskun, Ting-Wei Su, Ikbal Sencan, Aydogan Ozcan.
Institutions: University of California, Los Angeles .
On-chip lensless imaging in general aims to replace bulky lens-based optical microscopes with simpler and more compact designs, especially for high-throughput screening applications. This emerging technology platform has the potential to eliminate the need for bulky and/or costly optical components through the help of novel theories and digital reconstruction algorithms. Along the same lines, here we demonstrate an on-chip fluorescent microscopy modality that can achieve, e.g., <4 μm spatial resolution over an ultra-wide field-of-view (FOV) of >0.6-8 cm2 without the use of any lenses, mechanical scanning, or thin-film-based interference filters. In this technique, fluorescent excitation is achieved through a prism or hemispherical-glass interface illuminated by an incoherent source. After interacting with the entire object volume, this excitation light is rejected by the total internal reflection (TIR) process occurring at the bottom of the sample microfluidic chip. The fluorescent emission from the excited objects is then collected by a fiber-optic faceplate or a taper and is delivered to an optoelectronic sensor array such as a charge-coupled device (CCD). By using a compressive-sampling-based decoding algorithm, the acquired lensfree raw fluorescent images of the sample can be rapidly processed to yield, e.g., <4 μm resolution over an FOV of >0.6-8 cm2. Moreover, vertically stacked micro-channels that are separated by, e.g., 50-100 μm can also be successfully imaged using the same lensfree on-chip microscopy platform, which further increases the overall throughput of this modality. This compact on-chip fluorescent imaging platform, with a rapid compressive decoder behind it, could be rather valuable for high-throughput cytometry, rare-cell research, and microarray analysis.
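To give a feel for the compressive-sampling decoding step, though not the authors' actual decoder, the sketch below recovers a sparse vector of emitters from blurred measurements using iterative soft-thresholding (ISTA) on an l1-regularized least-squares problem. The sensing matrix and signal are invented.

```python
# Toy compressive-sampling reconstruction via ISTA (not the authors' decoder).
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 80                              # unknown pixels, measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)    # stand-in for a calibrated system response

x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.uniform(1, 3, size=5)   # sparse emitters
b = A @ x_true + rng.normal(0, 0.01, size=m)

def ista(A, b, lam=0.05, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

x_hat = ista(A, b)
print("recovered support:", np.flatnonzero(x_hat > 0.5))
print("true support:     ", np.sort(np.flatnonzero(x_true)))
```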
Bioengineering, Issue 54, Lensless Microscopy, Fluorescent On-chip Imaging, Wide-field Microscopy, On-Chip Cytometry, Compressive Sampling/Sensing
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
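Since the keyword list mentions minimum-norm estimation, the sketch below shows the regularized minimum-norm linear inverse in its simplest form. A random matrix stands in for the lead field, which in practice would be computed from an individual or age-specific MRI head model.

```python
# Minimal regularized minimum-norm estimate (MNE) sketch with a random lead field.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_sources = 64, 1000
L = rng.normal(size=(n_channels, n_sources))     # stand-in lead field (channels x sources)

# Simulate sensor data from a handful of active sources plus noise.
j_true = np.zeros(n_sources)
j_true[rng.choice(n_sources, 3, replace=False)] = 1.0
y = L @ j_true + 0.1 * rng.normal(size=n_channels)

# Minimum-norm inverse: j_hat = L^T (L L^T + lambda I)^(-1) y
lam = 1e-2 * np.trace(L @ L.T) / n_channels      # simple regularization heuristic
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_channels), y)

print("strongest estimated sources:", np.argsort(np.abs(j_hat))[-3:][::-1])
print("true active sources:        ", np.sort(np.flatnonzero(j_true)))
```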
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Convergent Polishing: A Simple, Rapid, Full Aperture Polishing Process of High Quality Optical Flats & Spheres
Authors: Tayyab Suratwala, Rusty Steele, Michael Feit, Rebecca Dylla-Spears, Richard Desjardin, Dan Mason, Lana Wong, Paul Geraghty, Phil Miller, Nan Shen.
Institutions: Lawrence Livermore National Laboratory.
Convergent Polishing is a novel polishing system and method for finishing flat and spherical glass optics in which a workpiece, independent of its initial shape (i.e., surface figure), will converge to the final surface figure with excellent surface quality under a fixed, unchanging set of polishing parameters in a single polishing iteration. In contrast, conventional full aperture polishing methods require multiple, often long, iterative cycles involving polishing, metrology and process changes to achieve the desired surface figure. The Convergent Polishing process is based on the concept of workpiece-lap height mismatch resulting in a pressure differential that decreases with removal and results in the workpiece converging to the shape of the lap. The successful implementation of the Convergent Polishing process is a result of the combination of a number of technologies to remove all sources of non-uniform spatial material removal (except for workpiece-lap mismatch) for surface figure convergence and to reduce the number of rogue particles in the system for low scratch densities and low roughness. The Convergent Polishing process has been demonstrated for the fabrication of both flats and spheres of various shapes, sizes, and aspect ratios on various glass materials. The practical impact is that high quality optical components can be fabricated more rapidly, more repeatably, with less metrology, and with less labor, resulting in lower unit costs. In this study, the Convergent Polishing protocol is specifically described for fabricating 26.5 cm square fused silica flats from a fine-ground surface to a polished ~λ/2 surface figure after polishing 4 hr per surface on an 81 cm diameter polisher.
Physics, Issue 94, optical fabrication, pad polishing, fused silica glass, optical flats, optical spheres, ceria slurry, pitch button blocking, HF etching, scratches
Automated Quantification of Hematopoietic Cell – Stromal Cell Interactions in Histological Images of Undecalcified Bone
Authors: Sandra Zehentmeier, Zoltan Cseresnyes, Juan Escribano Navarro, Raluca A. Niesner, Anja E. Hauser.
Institutions: German Rheumatism Research Center, a Leibniz Institute, German Rheumatism Research Center, a Leibniz Institute, Max-Delbrück Center for Molecular Medicine, Wimasis GmbH, Charité - University of Medicine.
Confocal microscopy is the method of choice for the analysis of localization of multiple cell types within complex tissues such as the bone marrow. However, the analysis and quantification of cellular localization is difficult, as in many cases it relies on manual counting, thus bearing the risk of introducing a rater-dependent bias and reducing interrater reliability. Moreover, it is often difficult to judge whether the co-localization between two cells results from random positioning, especially when cell types differ strongly in the frequency of their occurrence. Here, a method for unbiased quantification of cellular co-localization in the bone marrow is introduced. The protocol describes the sample preparation used to obtain histological sections of whole murine long bones, including the bone marrow, as well as the staining protocol and the acquisition of high-resolution images. An analysis workflow spanning from the recognition of hematopoietic and non-hematopoietic cell types in 2-dimensional (2D) bone marrow images to the quantification of the direct contacts between those cells is presented. This also includes a neighborhood analysis to obtain information about the cellular microenvironment surrounding a certain cell type. In order to evaluate whether co-localization of two cell types is the mere result of random cell positioning or reflects preferential associations between the cells, a simulation tool suitable for testing this hypothesis for both hematopoietic and stromal cells is used. This approach is not limited to the bone marrow, and can be extended to other tissues to permit reproducible, quantitative analysis of histological data.
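The random-positioning question can be illustrated with a small Monte Carlo test. This is only a sketch of the idea, not the simulation tool used in the protocol: count the observed contacts between two cell types, then compare against the contact counts obtained when the rarer cell type is repeatedly placed at random. Coordinates, cell numbers, and the contact distance are invented.

```python
# Monte Carlo null model for cell-cell co-localization (illustration only).
import numpy as np

rng = np.random.default_rng(3)
field = 500.0                                     # square field of view, micrometers
stromal = rng.uniform(0, field, size=(300, 2))    # hypothetical stromal cell positions
plasma = rng.uniform(0, field, size=(40, 2))      # hypothetical hematopoietic cells
contact_radius = 10.0                             # micrometers

def n_contacts(a, b, r):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return int((d < r).any(axis=1).sum())         # cells in `a` touching at least one in `b`

observed = n_contacts(plasma, stromal, contact_radius)
null = np.array([
    n_contacts(rng.uniform(0, field, size=plasma.shape), stromal, contact_radius)
    for _ in range(1000)
])
p = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed contacts: {observed}, null mean: {null.mean():.1f}, p = {p:.3f}")
```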
Developmental Biology, Issue 98, Image analysis, neighborhood analysis, bone marrow, stromal cells, bone marrow niches, simulation, bone cryosectioning, bone histology
Sealable Femtoliter Chamber Arrays for Cell-free Biology
Authors: Sarah Elizabeth Norred, Patrick M. Caveney, Scott T. Retterer, Jonathan B. Boreyko, Jason D. Fowlkes, Charles Patrick Collier, Michael L. Simpson.
Institutions: University of Tennessee, Knoxville, Oak Ridge National Laboratory, University of Tennessee, Knoxville.
Cell-free systems provide a flexible platform for probing specific networks of biological reactions isolated from the complex resource sharing (e.g., global gene expression, cell division) encountered within living cells. However, such systems, used in conventional macro-scale bulk reactors, often fail to exhibit the dynamic behaviors and efficiencies characteristic of their living micro-scale counterparts. Understanding the impact of internal cell structure and scale on reaction dynamics is crucial to understanding complex gene networks. Here we report a microfabricated device that confines cell-free reactions in cellular scale volumes while allowing flexible characterization of the enclosed molecular system. This multilayered poly(dimethylsiloxane) (PDMS) device contains femtoliter-scale reaction chambers on an elastomeric membrane which can be actuated (open and closed). When actuated, the chambers confine Cell-Free Protein Synthesis (CFPS) reactions expressing a fluorescent protein, allowing for the visualization of the reaction kinetics over time using time-lapse fluorescent microscopy. Here we demonstrate how this device may be used to measure the noise structure of CFPS reactions in a manner that is directly analogous to those used to characterize cellular systems, thereby enabling the use of noise biology techniques used in cellular systems to characterize CFPS gene circuits and their interactions with the cell-free environment.
Bioengineering, Issue 97, Cell-free, synthetic biology, microfluidics, noise biology, soft lithography, femtoliter volumes
Assessment of Social Cognition in Non-human Primates Using a Network of Computerized Automated Learning Device (ALDM) Test Systems
Authors: Joël Fagot, Yousri Marzouki, Pascal Huguet, Julie Gullstrand, Nicolas Claidière.
Institutions: Aix-Marseille University.
Fagot & Paleressompoulle1 and Fagot & Bonte2 have published an automated learning device (ALDM) for the study of cognitive abilities of monkeys maintained in semi-free ranging conditions. Data accumulated during the last five years have consistently demonstrated the efficiency of this protocol to investigate individual/physical cognition in monkeys, and have further shown that this procedure reduces stress level during animal testing3. This paper demonstrates that networks of ALDM can also be used to investigate different facets of social cognition and in-group expressed behaviors in monkeys, and describes three illustrative protocols developed for that purpose. The first study demonstrates how ethological assessments of social behavior and computerized assessments of cognitive performance could be integrated to investigate the effects of socially exhibited moods on the cognitive performance of individuals. The second study shows that batteries of ALDM running in parallel can provide unique information on the influence of the presence of others on task performance. Finally, the last study shows that networks of ALDM test units can also be used to study issues related to social transmission and cultural evolution. Combined together, these three studies demonstrate clearly that ALDM testing is a highly promising experimental tool for bridging the gap in the animal literature between research on individual cognition and research on social cognition.
Behavior, Issue 99, Baboon, automated learning device, cultural transmission, emotion, social facilitation, cognition, operant conditioning.
Construction and Characterization of External Cavity Diode Lasers for Atomic Physics
Authors: Kyle S. Hardman, Shayne Bennetts, John E. Debs, Carlos C. N. Kuhn, Gordon D. McDonald, Nick Robins.
Institutions: The Australian National University.
Since their development in the late 1980s, cheap, reliable external cavity diode lasers (ECDLs) have replaced complex and expensive traditional dye and Titanium Sapphire lasers as the workhorse laser of atomic physics labs1,2. Their versatility and prolific use throughout atomic physics in applications such as absorption spectroscopy and laser cooling1,2 makes it imperative for incoming students to gain a firm practical understanding of these lasers. This publication builds upon the seminal work by Wieman3, updating components, and providing a video tutorial. The setup, frequency locking and performance characterization of an ECDL will be described. Discussion of component selection and proper mounting of both diodes and gratings, the factors affecting mode selection within the cavity, proper alignment for optimal external feedback, optics setup for coarse and fine frequency sensitive measurements, a brief overview of laser locking techniques, and laser linewidth measurements are included.
Physics, Issue 86, External Cavity Diode Laser, atomic spectroscopy, laser cooling, Bose-Einstein condensation, Zeeman modulation
Time Multiplexing Super Resolving Technique for Imaging from a Moving Platform
Authors: Asaf Ilovitsh, Shlomo Zach, Zeev Zalevsky.
Institutions: Bar-Ilan University, Kfar Saba, Israel.
We propose a method for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on top of a moving imaging system, such as an airborne platform or satellite. The resolution improvement is obtained in a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton-based algorithm. Phase retrieval allows the field to be numerically back-propagated to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The obtained optical fields at the aperture plane are combined and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called Synthetic Aperture Radar (SAR), in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated through a laboratory experiment.
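For orientation, the sketch below implements the classic two-plane Gerchberg-Saxton iteration, alternately enforcing measured amplitudes in the image and Fourier planes. The paper uses an improved multi-image defocus variant, so this is only a simplified stand-in running on synthetic data.

```python
# Classic two-plane Gerchberg-Saxton phase retrieval on a synthetic field.
import numpy as np

rng = np.random.default_rng(4)
n = 64
true_phase = rng.uniform(-np.pi, np.pi, size=(n, n))
image_amp = np.ones((n, n))                               # known amplitude in image plane
fourier_amp = np.abs(np.fft.fft2(image_amp * np.exp(1j * true_phase)))   # measured |FFT|

field = image_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, size=(n, n)))  # random start
for _ in range(200):
    F = np.fft.fft2(field)
    F = fourier_amp * np.exp(1j * np.angle(F))            # impose Fourier-plane amplitude
    field = np.fft.ifft2(F)
    field = image_amp * np.exp(1j * np.angle(field))      # impose image-plane amplitude

err = np.abs(np.abs(np.fft.fft2(field)) - fourier_amp).mean()
print(f"mean Fourier-amplitude error after 200 iterations: {err:.3e}")
```

Once a consistent phase is recovered, the complex field can be propagated numerically (for example with an angular-spectrum transform) to the aperture plane, which is the step the synthetic-aperture combination builds on.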
Physics, Issue 84, Superresolution, Fourier optics, Remote Sensing and Sensors, Digital Image Processing, optics, resolution
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions, iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226 and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
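A stripped-down sketch of the simulation-optimization idea follows. The paper couples the SWAT watershed model with SPEA2; in this illustration both are replaced by toy stand-ins, with a candidate solution encoded as a binary vector indicating which fields receive a conservation practice and a Pareto archive that trades off invented cost and pollution objectives.

```python
# Toy multiobjective evolutionary search with a Pareto archive (stand-in for SWAT + SPEA2).
import random

random.seed(5)
N_FIELDS = 30
COST = [random.uniform(1, 5) for _ in range(N_FIELDS)]     # invented per-field practice costs
LOAD = [random.uniform(2, 10) for _ in range(N_FIELDS)]    # invented per-field pollutant loads

def evaluate(x):
    """Return (cost, pollution). A real run would call the watershed model here."""
    cost = sum(c for c, on in zip(COST, x) if on)
    pollution = sum(l for l, on in zip(LOAD, x) if not on)  # untreated fields keep polluting
    return cost, pollution

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def mutate(x, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in x]

archive = []                                                # non-dominated (objectives, solution)
population = [[random.random() < 0.5 for _ in range(N_FIELDS)] for _ in range(20)]
for _ in range(200):
    parents = [x for _, x in archive] or population
    population = [mutate(random.choice(parents)) for _ in range(20)]
    for x in population:
        f = evaluate(x)
        if not any(dominates(g, f) for g, _ in archive):
            archive = [(g, y) for g, y in archive if not dominates(f, g)] + [(f, x)]

for (cost, pollution), _ in sorted(archive)[:5]:            # a few points on the tradeoff frontier
    print(f"cost {cost:6.1f}  pollution {pollution:6.1f}")
```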
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Co-analysis of Brain Structure and Function using fMRI and Diffusion-weighted Imaging
Authors: Jeffrey S. Phillips, Adam S. Greenberg, John A. Pyles, Sudhir K. Pathak, Marlene Behrmann, Walter Schneider, Michael J. Tarr.
Institutions: Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University , University of Pittsburgh.
The study of complex computational systems is facilitated by network maps, such as circuit diagrams. Such mapping is particularly informative when studying the brain, as the functional role that a brain area fulfills may be largely defined by its connections to other brain areas. In this report, we describe a novel, non-invasive approach for relating brain structure and function using magnetic resonance imaging (MRI). This approach, a combination of structural imaging of long-range fiber connections and functional imaging data, is illustrated in two distinct cognitive domains, visual attention and face perception. Structural imaging is performed with diffusion-weighted imaging (DWI) and fiber tractography, which track the diffusion of water molecules along white-matter fiber tracts in the brain (Figure 1). By visualizing these fiber tracts, we are able to investigate the long-range connective architecture of the brain. The results compare favorably with one of the most widely-used techniques in DWI, diffusion tensor imaging (DTI). DTI is unable to resolve complex configurations of fiber tracts, limiting its utility for constructing detailed, anatomically-informed models of brain function. In contrast, our analyses reproduce known neuroanatomy with precision and accuracy. This advantage is partly due to data acquisition procedures: while many DTI protocols measure diffusion in a small number of directions (e.g., 6 or 12), we employ a diffusion spectrum imaging (DSI)1, 2 protocol which assesses diffusion in 257 directions and at a range of magnetic gradient strengths. Moreover, DSI data allow us to use more sophisticated methods for reconstructing acquired data. In two experiments (visual attention and face perception), tractography reveals that co-active areas of the human brain are anatomically connected, supporting extant hypotheses that they form functional networks. DWI allows us to create a "circuit diagram" and reproduce it on an individual-subject basis, for the purpose of monitoring task-relevant brain activity in networks of interest.
Neuroscience, Issue 69, Molecular Biology, Anatomy, Physiology, tractography, connectivity, neuroanatomy, white matter, magnetic resonance imaging, MRI
Terahertz Microfluidic Sensing Using a Parallel-plate Waveguide Sensor
Authors: Victoria Astley, Kimberly Reichel, Rajind Mendis, Daniel M. Mittleman.
Institutions: Rice University .
Refractive index (RI) sensing is a powerful noninvasive and label-free sensing technique for the identification, detection and monitoring of microfluidic samples with a wide range of possible sensor designs such as interferometers and resonators 1,2. Most of the existing RI sensing applications focus on biological materials in aqueous solutions in visible and IR frequencies, such as DNA hybridization and genome sequencing. At terahertz frequencies, applications include quality control, monitoring of industrial processes and sensing and detection applications involving nonpolar materials. Several potential designs for refractive index sensors in the terahertz regime exist, including photonic crystal waveguides 3, asymmetric split-ring resonators 4, and photonic band gap structures integrated into parallel-plate waveguides 5. Many of these designs are based on optical resonators such as rings or cavities. The resonant frequencies of these structures are dependent on the refractive index of the material in or around the resonator. By monitoring the shifts in resonant frequency the refractive index of a sample can be accurately measured and this in turn can be used to identify a material, monitor contamination or dilution, etc. The sensor design we use here is based on a simple parallel-plate waveguide 6,7. A rectangular groove machined into one face acts as a resonant cavity (Figures 1 and 2). When terahertz radiation is coupled into the waveguide and propagates in the lowest-order transverse-electric (TE1) mode, the result is a single strong resonant feature with a tunable resonant frequency that is dependent on the geometry of the groove 6,8. This groove can be filled with nonpolar liquid microfluidic samples which cause a shift in the observed resonant frequency that depends on the amount of liquid in the groove and its refractive index 9. Our technique has an advantage over other terahertz techniques in its simplicity, both in fabrication and implementation, since the procedure can be accomplished with standard laboratory equipment without the need for a clean room or any special fabrication or experimental techniques. It can also be easily expanded to multichannel operation by the incorporation of multiple grooves 10. In this video we will describe our complete experimental procedure, from the design of the sensor to the data analysis and determination of the sample refractive index.
Physics, Issue 66, Electrical Engineering, Computer Engineering, Terahertz radiation, sensing, microfluidic, refractive index sensor, waveguide, optical sensing
Simulation, Fabrication and Characterization of THz Metamaterial Absorbers
Authors: James P. Grant, Iain J.H. McCrindle, David R.S. Cumming.
Institutions: University of Glasgow.
Metamaterials (MM), artificial materials engineered to have properties that may not be found in nature, have been widely explored since the first theoretical1 and experimental demonstration2 of their unique properties. MMs can provide a highly controllable electromagnetic response, and to date have been demonstrated in every technologically relevant spectral range including the optical3, near IR4, mid IR5, THz6, mm-wave7, microwave8 and radio9 bands. Applications include perfect lenses10, sensors11, telecommunications12, invisibility cloaks13 and filters14,15. We have recently developed single band16, dual band17 and broadband18 THz metamaterial absorber devices capable of greater than 80% absorption at the resonance peak. The concept of a MM absorber is especially important at THz frequencies where it is difficult to find strong frequency selective THz absorbers19. In our MM absorber the THz radiation is absorbed in a thickness of ~λ/20, overcoming the thickness limitation of traditional quarter wavelength absorbers. MM absorbers naturally lend themselves to THz detection applications, such as thermal sensors, and if integrated with suitable THz sources (e.g. QCLs), could lead to compact, highly sensitive, low cost, real time THz imaging systems.
Materials Science, Issue 70, Physics, Engineering, Metamaterial, terahertz, sensing, fabrication, clean room, simulation, FTIR, spectroscopy
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, is not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analyses of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
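As a small example of the activity-space characterization step, the sketch below computes one such metric, an activity radius, as the root-mean-square distance of GPS fixes from their centroid. The coordinates are invented, and a flat-earth approximation is used, which is adequate over a few kilometers.

```python
# Activity radius from a cleaned pedestrian trajectory (invented GPS fixes).
import math

# Hypothetical (latitude, longitude) fixes from one pedestrian's cleaned trajectory.
fixes = [(40.6770, -74.2390), (40.6782, -74.2401), (40.6775, -74.2375), (40.6790, -74.2368)]

lat0 = sum(lat for lat, _ in fixes) / len(fixes)
lon0 = sum(lon for _, lon in fixes) / len(fixes)

def to_meters(lat, lon):
    """Project a fix to meters relative to the centroid (flat-earth approximation)."""
    dy = (lat - lat0) * 111_320.0
    dx = (lon - lon0) * 111_320.0 * math.cos(math.radians(lat0))
    return dx, dy

dists_sq = [dx * dx + dy * dy for dx, dy in (to_meters(lat, lon) for lat, lon in fixes)]
activity_radius = math.sqrt(sum(dists_sq) / len(dists_sq))
print(f"activity radius ~ {activity_radius:.0f} m")
```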
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Test Samples for Optimizing STORM Super-Resolution Microscopy
Authors: Daniel J. Metcalf, Rebecca Edwards, Neelam Kumarswami, Alex E. Knight.
Institutions: National Physical Laboratory.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, as the image is acquired in a very different way from conventional techniques, by building up an image molecule by molecule, there are some significant challenges for users in trying to optimize their image acquisition. In order to aid this process and gain more insight into how STORM works, we present the preparation of 3 test samples and the methodology of acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the use of the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of some common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems resulting in the 'mislocalization' phenomenon.
Molecular Biology, Issue 79, Genetics, Bioengineering, Biomedical Engineering, Biophysics, Basic Protocols, HeLa Cells, Actin Cytoskeleton, Coated Vesicles, Receptor, Epidermal Growth Factor, Actins, Fluorescence, Endocytosis, Microscopy, STORM, super-resolution microscopy, nanoscopy, cell biology, fluorescence microscopy, test samples, resolution, actin filaments, fiducial markers, epidermal growth factor, cell, imaging
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
A Simple Stimulatory Device for Evoking Point-like Tactile Stimuli: A Searchlight for LFP to Spike Transitions
Authors: Antonio G. Zippo, Sara Nencini, Gian Carlo Caramenti, Maurizio Valente, Riccardo Storchi, Gabriele E.M. Biella.
Institutions: National Research Council, National Research Council, University of Manchester.
Current neurophysiological research aims to develop methodologies to investigate the signal route from neuron to neuron, namely in the transitions from spikes to Local Field Potentials (LFPs) and from LFPs to spikes. LFPs have a complex dependence on spike activity and their relation is still poorly understood1. The elucidation of these signal relations would be helpful both for clinical diagnostics (e.g. stimulation paradigms for Deep Brain Stimulation) and for a deeper comprehension of neural coding strategies in normal and pathological conditions (e.g. epilepsy, Parkinson disease, chronic pain). To this aim, one has to solve technical issues related to stimulation devices, stimulation paradigms and computational analyses. Therefore, a custom-made stimulation device was developed to deliver stimuli that are well regulated in space and time and that do not incur mechanical resonance. Subsequently, as an exemplification, a set of reliable LFP-spike relationships was extracted. The performance of the device was investigated by extracellular recordings of both spike and LFP responses to the applied stimuli, recorded from the rat primary somatosensory cortex. Then, by means of a multi-objective optimization strategy, a predictive model for spike occurrence based on LFPs was estimated. The application of this paradigm shows that the device is adequately suited to deliver high-frequency tactile stimulation, outperforming common piezoelectric actuators. As a proof of the efficacy of the device, the following results were presented: 1) the timing and reliability of LFP responses match the spike responses well, 2) LFPs are sensitive to the stimulation history and capture not only the average response but also the trial-to-trial fluctuations in the spike activity and, finally, 3) by using the LFP signal it is possible to estimate a range of predictive models that capture different aspects of the spike activity.
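As a conceptual stand-in for the LFP-to-spike predictive modeling described above (the paper estimates its models with a multi-objective optimization strategy), the sketch below fits a plain logistic regression that predicts spike occurrence in a time bin from the recent LFP history. All signals are synthetic.

```python
# Logistic regression from LFP history to spike probability (synthetic data; not the
# authors' multi-objective model).
import numpy as np

rng = np.random.default_rng(6)
T, lags = 5000, 20
lfp = rng.normal(size=T)
# Synthetic ground truth: spikes become more likely when the recent LFP average is high.
drive = np.convolve(lfp, np.ones(lags) / lags, mode="same")
spikes = (rng.random(T) < 1 / (1 + np.exp(-(3 * drive - 2)))).astype(float)

# Design matrix of LFP history (t-1 ... t-lags) for each time bin.
X = np.column_stack([lfp[lags - k - 1 : T - k - 1] for k in range(lags)])
X = np.column_stack([np.ones(len(X)), X])
y = spikes[lags:]

w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)            # gradient ascent on the log-likelihood

pred = 1 / (1 + np.exp(-X @ w))
print("correlation of predicted rate with spikes:", round(float(np.corrcoef(pred, y)[0, 1]), 3))
```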
Neuroscience, Issue 85, LFP, spike, tactile stimulus, Multiobjective function, Neuron, somatosensory cortex
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
How to Ignite an Atmospheric Pressure Microwave Plasma Torch without Any Additional Igniters
Authors: Martina Leins, Sandra Gaiser, Andreas Schulz, Matthias Walker, Uwe Schumacher, Thomas Hirth.
Institutions: University of Stuttgart.
This movie shows how an atmospheric pressure plasma torch can be ignited by microwave power with no additional igniters. After ignition of the plasma, a stable and continuous operation of the plasma is possible and the plasma torch can be used for many different applications. On one hand, the hot (3,600 K gas temperature) plasma can be used for chemical processes; on the other hand, the cold afterglow (temperatures down to almost room temperature) can be applied for surface processes. For example, chemical syntheses are interesting volume processes. Here, the microwave plasma torch can be used for the decomposition of waste gases that are harmful and contribute to global warming but are needed as etching gases in growing industry sectors such as semiconductor manufacturing. Another application is the dissociation of CO2. Surplus electrical energy from renewable energy sources can be used to dissociate CO2 to CO and O2. The CO can be further processed to gaseous or liquid higher hydrocarbons, thereby providing chemical storage of the energy, synthetic fuels, or platform chemicals for the chemical industry. Applications of the afterglow of the plasma torch are the treatment of surfaces to increase the adhesion of lacquer, glue or paint, and the sterilization or decontamination of different kinds of surfaces. The movie will explain how to ignite the plasma solely by microwave power without any additional igniters, e.g., electric sparks. The microwave plasma torch is based on a combination of two resonators: a coaxial one, which provides the ignition of the plasma, and a cylindrical one, which guarantees a continuous and stable operation of the plasma after ignition. The plasma can be operated in a long microwave-transparent tube for volume processes or shaped by orifices for surface treatment purposes.
Engineering, Issue 98, atmospheric pressure plasma, microwave plasma, plasma ignition, resonator structure, coaxial resonator, cylindrical resonator, plasma torch, stable plasma operation, continuous plasma operation, high speed camera

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.
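The general idea behind matching abstracts to videos can be sketched with a simple text-similarity example. This is not JoVE's actual algorithm, and the texts are invented snippets: each document is represented as a TF-IDF vector, and video descriptions are ranked by cosine similarity to the abstract.

```python
# Toy TF-IDF / cosine-similarity matching of an abstract to video descriptions.
import math
import re
from collections import Counter

videos = {
    "seriation": "deterministic frequency seriation of ceramic assemblages",
    "pet": "dynamic PET imaging of dopamine release during smoking",
    "eeg": "cortical source analysis of high density EEG in children",
}
abstract = "a computational technique for frequency seriation of archaeological assemblages"

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

docs = list(videos.values()) + [abstract]
df = Counter(word for doc in docs for word in set(tokens(doc)))   # document frequencies

def tfidf(text):
    tf = Counter(tokens(text))
    return {w: c * math.log(len(docs) / df[w]) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

query = tfidf(abstract)
for name, text in sorted(videos.items(), key=lambda kv: -cosine(query, tfidf(kv[1]))):
    print(f"{cosine(query, tfidf(text)):.3f}  {name}")
```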