PubMed Article
Secure Obfuscation for Encrypted Group Signatures.
PUBLISHED: 07-14-2015
In recent years, group signature techniques have been widely used to construct privacy-preserving security schemes for various information systems. However, conventional techniques keep these schemes secure only in the normal black-box attack context; that is, they assume that (the implementation of) the group signature generation algorithm runs on a platform that is perfectly protected from intrusions and attacks. As a complement to existing studies, this paper studies how to generate group signatures securely in a more austere security context, such as a white-box attack context. We use obfuscation as an approach to achieve a higher level of security. Concretely, we introduce a special group signature functionality, an encrypted group signature, and then provide an obfuscator for the proposed functionality. A series of new security notions for both the functionality and its obfuscator is introduced. The most important is the average-case secure virtual black-box property w.r.t. dependent oracles and restricted dependent oracles, which captures the requirement of protecting the output of the proposed obfuscator against collusion attacks by group members. These security notions also fit many other specialized obfuscators, such as obfuscators for identity-based signatures, threshold signatures, and key-insulated signatures. Finally, the correctness and security of the proposed obfuscator are proven. The obfuscated encrypted group signature functionality can therefore be applied to variants of privacy-preserving security schemes to enhance their security level.
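As a purely illustrative toy, and emphatically not the paper's construction, the Python sketch below mimics the shape of an "encrypted group signature": a group manager derives per-member signing keys, a member signs a message, and the signature is encrypted so that only a holder of the decryption key can verify it. HMAC and a hash-derived XOR keystream stand in for the real group-signature and encryption primitives; all names and values are invented for the example.

```python
import hmac, hashlib, secrets

def derive_member_key(group_secret, member_id):
    # Group manager derives a per-member signing key (toy stand-in
    # for issuing a group-membership signing certificate).
    return hmac.new(group_secret, member_id.encode(), hashlib.sha256).digest()

def sign(member_key, message):
    # Member "signs" with an HMAC (toy stand-in for a group signature).
    return hmac.new(member_key, message, hashlib.sha256).digest()

def xor_crypt(enc_key, data):
    # XOR with a hash-derived keystream (toy stand-in for the
    # encryption layer; XOR is its own inverse, so this also decrypts).
    stream = hashlib.sha256(enc_key).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

group_secret = secrets.token_bytes(32)   # held by the group manager
enc_key = secrets.token_bytes(32)        # shared with the designated verifier
alice_key = derive_member_key(group_secret, "alice")

msg = b"transfer 10 credits"
enc_sig = xor_crypt(enc_key, sign(alice_key, msg))  # output of the functionality

# The designated verifier decrypts and checks the signature.
recovered = xor_crypt(enc_key, enc_sig)
assert hmac.compare_digest(recovered, sign(alice_key, msg))
```

In the real scheme the verifier checks the group signature without learning the signer's identity; this toy collapses that property and only illustrates the sign-then-encrypt composition that the obfuscator must protect.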
The relationship between patterns of neural activity and corresponding behavioral expression is difficult to establish in unrestrained animals. Traditional non-invasive methods require at least partially restrained research subjects, and they only allow identification of large numbers of simultaneously activated neurons. On the other hand, small ensembles of neurons or individual neurons can only be measured using single-cell recordings obtained from largely reduced preparations. Since the expression of natural behavior is limited in restrained and dissected animals, the underlying neural mechanisms that control such behavior are difficult to identify. Here, I present a non-invasive physiological technique that allows the measurement of neural circuit activation in freely behaving animals. Using a pair of wire electrodes inside a water-filled chamber, the bath electrodes record neural and muscular field potentials generated by juvenile crayfish during natural or experimentally evoked escape responses. The primary escape responses of crayfish are mediated by three different types of tail-flips, which move the animals away from the point of stimulation. Each type of tail-flip is controlled by its own neural circuit; the two fastest and most powerful escape responses require activation of different sets of large "command" neurons. In combination with behavioral observations, the bath electrode recordings allow unambiguous identification of these neurons and the associated neural circuits. Thus, the activity of neural circuitry underlying naturally occurring behavior can be measured in unrestrained animals and in different behavioral contexts.
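Field potentials recorded with bath electrodes are typically screened for large, brief deflections. A minimal threshold detector with a refractory window, sketched below in pure Python on synthetic data, illustrates the idea; the threshold, refractory period, and trace values are illustrative and not taken from the protocol.

```python
def detect_events(trace, threshold, refractory=50):
    """Return sample indices where |trace| first crosses threshold,
    skipping `refractory` samples after each detection so one
    tail-flip field potential is not counted several times."""
    events, i = [], 0
    while i < len(trace):
        if abs(trace[i]) >= threshold:
            events.append(i)
            i += refractory   # skip the rest of this event
        else:
            i += 1
    return events

# Synthetic trace: low-amplitude baseline with two large deflections
trace = [0.01] * 200
trace[50], trace[51] = 1.2, -0.9   # first escape-response potential
trace[160] = 1.5                   # second, later event
print(detect_events(trace, threshold=0.5))  # → [50, 160]
```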
24 Related JoVE Articles!
Inducing Plasticity of Astrocytic Receptors by Manipulation of Neuronal Firing Rates
Authors: Alison X. Xie, Kelli Lauderdale, Thomas Murphy, Timothy L. Myers, Todd A. Fiacco.
Institutions: University of California Riverside.
Close to two decades of research has established that astrocytes in situ and in vivo express numerous G protein-coupled receptors (GPCRs) that can be stimulated by neuronally-released transmitter. However, the ability of astrocytic receptors to exhibit plasticity in response to changes in neuronal activity has received little attention. Here we describe a model system that can be used to globally scale up or down astrocytic group I metabotropic glutamate receptors (mGluRs) in acute brain slices. Included are methods on how to prepare parasagittal hippocampal slices, construct chambers suitable for long-term slice incubation, bidirectionally manipulate neuronal action potential frequency, load astrocytes and astrocyte processes with fluorescent Ca2+ indicator, and measure changes in astrocytic Gq GPCR activity by recording spontaneous and evoked astrocyte Ca2+ events using confocal microscopy. In essence, a “calcium roadmap” is provided for how to measure plasticity of astrocytic Gq GPCRs. Applications of the technique for study of astrocytes are discussed. Having an understanding of how astrocytic receptor signaling is affected by changes in neuronal activity has important implications for both normal synaptic function as well as processes underlying neurological disorders and neurodegenerative disease.
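Spontaneous and evoked Ca2+ events of the kind recorded here are commonly quantified by normalizing the fluorescence trace to its baseline (ΔF/F0) and counting threshold crossings. The sketch below uses illustrative numbers and a deliberately simple event criterion; it is not the authors' analysis pipeline.

```python
def delta_f_over_f(trace, baseline_frames=10):
    """Normalize a fluorescence time series to (F - F0) / F0, with F0
    estimated as the mean of the first `baseline_frames` samples."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

def count_ca_events(dff, threshold=0.5):
    """Count rising-edge threshold crossings as putative Ca2+ events."""
    return sum(1 for prev, cur in zip(dff, dff[1:])
               if prev < threshold <= cur)

# Toy trace: baseline of 100 a.u. with two fluorescence transients
trace = [100] * 10 + [180, 220, 160, 110] + [100] * 5 + [200, 150] + [100] * 4
dff = delta_f_over_f(trace)
print(count_ca_events(dff))  # → 2
```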
Neuroscience, Issue 85, astrocyte, plasticity, mGluRs, neuronal Firing, electrophysiology, Gq GPCRs, Bolus-loading, calcium, microdomains, acute slices, Hippocampus, mouse
In vivo Optogenetic Stimulation of the Rodent Central Nervous System
Authors: Michelle M. Sidor, Thomas J. Davidson, Kay M. Tye, Melissa R. Warden, Karl Deisseroth, Colleen A. McClung.
Institutions: University of Pittsburgh Medical Center, Stanford University, Massachusetts Institute of Technology, Cornell University.
The ability to probe defined neural circuits in awake, freely moving animals with cell-type specificity, spatial precision, and high temporal resolution has been a long-sought tool for neuroscientists in the systems-level search for the neural circuitry governing complex behavioral states. Optogenetics is a cutting-edge tool that is revolutionizing the field of neuroscience and represents one of the first systematic approaches enabling causal tests of the relationship between neural signaling events and behavior. By combining optical and genetic approaches, neural signaling can be bidirectionally controlled through expression of light-sensitive ion channels (opsins) in mammalian cells. The current protocol describes delivery of specific wavelengths of light to opsin-expressing cells in deep brain structures of awake, freely moving rodents for neural circuit modulation. Theoretical principles of light transmission are discussed as an experimental consideration when performing in vivo optogenetic stimulation. The protocol details the design and construction of both simple and complex laser configurations and describes tethering strategies that permit simultaneous stimulation of multiple animals for high-throughput behavioral testing.
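The light-transmission principles mentioned above can be sketched numerically: light leaving the fiber tip spreads in a cone set by the numerical aperture and is further lost to scattering and absorption in tissue. The model below is a deliberately simplified combination of conical geometric spread and exponential attenuation; the fiber radius, NA, refractive index, and attenuation coefficient are illustrative placeholders, not measured values from this protocol.

```python
import math

def irradiance_fraction(z_mm, fiber_radius_mm=0.1, NA=0.37,
                        n_tissue=1.36, attenuation_per_mm=2.0):
    """Fraction of fiber-tip irradiance remaining at depth z (mm),
    combining conical spread from the emission half-angle with
    exponential loss from scattering/absorption."""
    theta = math.asin(NA / n_tissue)          # emission cone half-angle
    spread = (fiber_radius_mm /
              (fiber_radius_mm + z_mm * math.tan(theta))) ** 2
    return spread * math.exp(-attenuation_per_mm * z_mm)

# Irradiance falls off steeply with depth below the fiber tip
for z in (0.0, 0.25, 0.5, 1.0):
    print(f"{z} mm: {irradiance_fraction(z):.3f}")
```

This kind of estimate is what motivates placing the fiber tip close to, but not inside, the opsin-expressing target volume.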
Neuroscience, Issue 95, optogenetics, rodent, behavior, opsin, channelrhodopsin, brain, fiber optics, laser, neural circuits
Characterization of Multi-layered Fish Scales (Atractosteus spatula) Using Nanoindentation, X-ray CT, FTIR, and SEM
Authors: Paul G. Allison, Rogie I. Rodriguez, Robert D. Moser, Brett A. Williams, Aimee R. Poda, Jennifer M. Seiter, Brandon J. Lafferty, Alan J. Kennedy, Mei Q. Chandler.
Institutions: U.S. Army Engineer Research and Development Center, University of Alabama.
The hierarchical architecture of protective biological materials such as mineralized fish scales, gastropod shells, ram’s horn, antlers, and turtle shells provides unique design principles with the potential to guide the design of future protective materials and systems. Understanding the structure-property relationships of these material systems at the microscale and nanoscale, where failure initiates, is essential. Currently, experimental techniques such as nanoindentation, X-ray CT, and SEM provide researchers with a way to correlate mechanical behavior with the hierarchical microstructures of these material systems [1-6]. However, a well-defined standard procedure for specimen preparation of mineralized biomaterials is not currently available. In this study, methods for probing spatially correlated chemical, structural, and mechanical properties of the multi-layered scale of A. spatula using nanoindentation, FTIR, SEM with energy-dispersive X-ray (EDX) microanalysis, and X-ray CT are presented.
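Nanoindentation data of the kind collected here are commonly reduced with the Oliver-Pharr analysis: hardness is peak load over projected contact area, and reduced modulus follows from the unloading contact stiffness. The sketch below shows those two relations with illustrative input values; it is not the authors' analysis code, and the numbers are not measurements from the fish scales.

```python
import math

def oliver_pharr(p_max_mN, stiffness_mN_per_nm, area_nm2, beta=1.034):
    """Hardness H = P_max / A and reduced modulus
    Er = (sqrt(pi) / (2*beta)) * S / sqrt(A), converted to GPa.
    Inputs: peak load (mN), unloading stiffness dP/dh (mN/nm),
    projected contact area (nm^2)."""
    hardness_GPa = p_max_mN * 1e6 / area_nm2       # 1 mN/nm^2 = 1e6 GPa
    e_r_GPa = (math.sqrt(math.pi) / (2 * beta)) * \
              stiffness_mN_per_nm / math.sqrt(area_nm2) * 1e6
    return hardness_GPa, e_r_GPa

# Illustrative values only (order of magnitude of a mineralized layer)
H, Er = oliver_pharr(p_max_mN=10.0, stiffness_mN_per_nm=0.05, area_nm2=2.0e7)
print(f"H = {H:.2f} GPa, Er = {Er:.1f} GPa")
```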
Bioengineering, Issue 89, Atractosteus spatula, structure-property relation, nanoindentation, scanning electron microscopy, X-ray computed tomography, Fourier transform infrared (FTIR) spectroscopy
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Authors: Sara Tremblay, Vincent Beaulé, Sébastien Proulx, Louis-Philippe Lafleur, Julien Doyon, Małgorzata Marjańska, Hugo Théoret.
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability and thereby improve clinical symptoms remain poorly understood [33]. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner [41]. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration [34]. This article describes the complete protocol for combining tDCS (NeuroConn MR-compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We describe the impact of a protocol that has shown great promise for the treatment of motor dysfunction after stroke, which consists of bilateral stimulation of the primary motor cortices [27,30,31]. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, such as structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in both the temporal and spatial domains. In addition, because EEG recordings are easy to apply and inexpensive, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3]. In the present paper, we provide an overview of our recent work utilizing head models based on individual structural MRI scans, or age-specific head models, to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
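Minimum-norm estimation, a standard approach for this kind of cortical source analysis, solves s = Lᵀ(LLᵀ + λI)⁻¹y for source amplitudes s given a leadfield L and sensor data y. A one-sensor toy example keeps the inversion scalar; the leadfield values below are illustrative, not from a real head model.

```python
def minimum_norm_estimate(leadfield, measurement, lam=0.1):
    """Tikhonov-regularized minimum-norm inverse for a single sensor:
    s = L^T (L L^T + lam)^-1 y, where L L^T + lam is a scalar here."""
    gram = sum(g * g for g in leadfield) + lam   # L L^T + lam
    scale = measurement / gram
    return [g * scale for g in leadfield]

leadfield = [1.0, 2.0]   # sensor sensitivity to two cortical sources
y = 5.0                  # measured scalp potential (arbitrary units)
estimate = minimum_norm_estimate(leadfield, y)
print(estimate)  # the source the sensor sees more strongly gets the larger amplitude
```

Real packages solve this with many sensors and thousands of sources, and the head model (adult-standard vs. age-specific) enters through the leadfield L, which is exactly why the choice of model matters for children.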
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
A Procedure for Implanting a Spinal Chamber for Longitudinal In Vivo Imaging of the Mouse Spinal Cord
Authors: Matthew J. Farrar, Chris B. Schaffer.
Institutions: Cornell University.
Studies in the mammalian neocortex have enabled unprecedented resolution of cortical structure, activity, and response to neurodegenerative insults by repeated, time-lapse in vivo imaging in live rodents. These studies were made possible by straightforward surgical procedures, which enabled optical access for a prolonged period of time without repeat surgical procedures. In contrast, analogous studies of the spinal cord have been previously limited to only a few imaging sessions, each of which required an invasive surgery. As previously described, we have developed a spinal chamber that enables continuous optical access for upwards of 8 weeks, preserves mechanical stability of the spinal column, is easily stabilized externally during imaging, and requires only a single surgery. Here, the design of the spinal chamber with its associated surgical implements is reviewed and the surgical procedure is demonstrated in detail. Briefly, this video will demonstrate the preparation of the surgical area and mouse for surgery, exposure of the spinal vertebra and appropriate tissue debridement, the delivery of the implant and vertebral clamping, the completion of the chamber, the removal of the delivery system, sealing of the skin, and finally, post-operative care. The procedure for chronic in vivo imaging using nonlinear microscopy will also be demonstrated. Finally, outcomes, limitations, typical variability, and a guide for troubleshooting are discussed.
Neuroscience, Issue 94, spinal cord, in vivo microscopy, multiphoton microscopy, animal surgery, fluorescence microscopy, biomedical optics
Human Pluripotent Stem Cell Based Developmental Toxicity Assays for Chemical Safety Screening and Systems Biology Data Generation
Authors: Vaibhav Shinde, Stefanie Klima, Perumal Srinivasan Sureshkumar, Kesavan Meganathan, Smita Jagtap, Eugen Rempel, Jörg Rahnenführer, Jan Georg Hengstler, Tanja Waldmann, Jürgen Hescheler, Marcel Leist, Agapios Sachinidis.
Institutions: University of Cologne, University of Konstanz, Technical University of Dortmund.
Efficient protocols to differentiate human pluripotent stem cells into various tissues, in combination with -omics technologies, have opened up new horizons for in vitro toxicity testing of potential drugs. To provide a solid scientific basis for such assays, it will be important to gain quantitative information on the time course of development and on the underlying regulatory mechanisms through systems biology approaches. Two assays have therefore been tuned here to these requirements. In the UKK test system, human embryonic stem cells (hESC) (or other pluripotent cells) are left to differentiate spontaneously for 14 days in embryoid bodies, to allow generation of cells of all three germ layers. This system recapitulates key steps of early human embryonic development, and it can predict human-specific early embryonic toxicity/teratogenicity if cells are exposed to chemicals during differentiation. The UKN1 test system is based on hESC differentiating to a population of neuroectodermal progenitor (NEP) cells over 6 days. This system recapitulates early neural development and predicts early developmental neurotoxicity and epigenetic changes triggered by chemicals. Both systems, in combination with transcriptome microarray studies, are suitable for identifying toxicity biomarkers. Moreover, they may be used in combination to generate input data for systems biology analysis. These test systems have advantages over traditional toxicological studies, which require large numbers of animals. The test systems may contribute to reducing the costs of drug development and chemical safety evaluation. Their combination sheds light especially on compounds that may specifically influence neurodevelopment.
Developmental Biology, Issue 100, Human embryonic stem cells, developmental toxicity, neurotoxicity, neuroectodermal progenitor cells, immunoprecipitation, differentiation, cytotoxicity, embryopathy, embryoid body
High-throughput Quantitative Real-time RT-PCR Assay for Determining Expression Profiles of Types I and III Interferon Subtypes
Authors: Lynnsey A. Renn, Terence C. Theisen, Maria B. Navarro, Viraj P. Mane, Lynnsie M. Schramm, Kevin D. Kirschman, Giulia Fabozzi, Philippa Hillyer, Montserrat Puig, Daniela Verthelyi, Ronald L. Rabin.
Institutions: US Food and Drug Administration.
Described in this report is a qRT-PCR assay for the analysis of seventeen human IFN subtypes in a 384-well plate format that incorporates highly specific locked nucleic acid (LNA) and molecular beacon (MB) probes, transcript standards, automated multichannel pipetting, and plate drying. Determining expression among the type I interferons (IFN), especially the twelve IFN-α subtypes, is limited by their shared sequence identity; likewise, the sequences of the type III IFN, especially IFN-λ2 and -λ3, are highly similar. This assay provides a reliable, reproducible, and relatively inexpensive means to analyze the expression of the seventeen interferon subtype transcripts.
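Quantification against transcript standards of the kind described here typically involves fitting a standard curve, Ct = slope · log10(copies) + intercept, from a dilution series, then checking the amplification efficiency E = 10^(−1/slope) − 1 (a slope near −3.32 corresponds to ~100% efficiency). The sketch below uses an invented, perfectly linear dilution series, not the assay's actual data.

```python
def fit_standard_curve(log10_copies, ct_values):
    """Least-squares fit of Ct = slope*log10(copies) + intercept
    from a dilution series of transcript standards."""
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(ct_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def amplification_efficiency(slope):
    """E = 10^(-1/slope) - 1; slope ~= -3.32 means ~100% efficiency."""
    return 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct, slope, intercept):
    """Absolute quantification of an unknown from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Illustrative 10-fold dilution series of an IFN transcript standard
logs = [7, 6, 5, 4, 3]
cts = [14.1, 17.4, 20.7, 24.0, 27.3]
slope, intercept = fit_standard_curve(logs, cts)
print(round(slope, 2), round(amplification_efficiency(slope), 3))  # → -3.3 1.009
```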
Immunology, Issue 97, Interferon, Innate Immunity, qRT-PCR Assay, Probes, Primers, Automated Pipetting
The Double-H Maze: A Robust Behavioral Test for Learning and Memory in Rodents
Authors: Robert D. Kirch, Richard C. Pinnell, Ulrich G. Hofmann, Jean-Christophe Cassel.
Institutions: University Hospital Freiburg, UMR 7364 Université de Strasbourg, CNRS, Neuropôle de Strasbourg.
Spatial cognition research in rodents typically employs maze tasks, whose attributes vary from one maze to the next. These tasks differ in their behavioral flexibility and required memory duration, the number of goals and pathways, and overall task complexity. A confounding feature of many of these tasks is the lack of control over the strategy the rodents employ to reach the goal, e.g., allocentric (declarative-like) or egocentric (procedural) strategies. The double-H maze is a novel water-escape memory task that addresses this issue by allowing the experimenter to direct the type of strategy learned during the training period. The double-H maze is a transparent device consisting of a central alleyway with three arms protruding on each side, along with an escape platform submerged at the extremity of one of these arms. Rats can be trained using an allocentric strategy by alternating the start position in the maze in an unpredictable manner (see protocol 1; §4.7), thus requiring them to learn the location of the platform based on the available allothetic cues. Alternatively, an egocentric learning strategy (protocol 2; §4.8) can be employed by releasing the rats from the same position during each trial, until they learn the procedural pattern required to reach the goal. This task has been shown to allow the formation of stable memory traces. Memory can be probed after the training period in a misleading probe trial, in which the starting position of the rats is changed. Following an egocentric learning paradigm, rats typically resort to an allocentric-based strategy, but only when their initial view of the extra-maze cues differs markedly from that at their original position. This task is ideally suited to exploring the effects of drugs/perturbations on allocentric/egocentric memory performance, as well as the interactions between these two memory systems.
Behavior, Issue 101, Double-H maze, spatial memory, procedural memory, consolidation, allocentric, egocentric, habits, rodents, video tracking system
Assessment of Social Cognition in Non-human Primates Using a Network of Computerized Automated Learning Device (ALDM) Test Systems
Authors: Joël Fagot, Yousri Marzouki, Pascal Huguet, Julie Gullstrand, Nicolas Claidière.
Institutions: Aix-Marseille University.
Fagot & Paleressompoulle [1] and Fagot & Bonte [2] published an automated learning device (ALDM) for studying the cognitive abilities of monkeys maintained in semi-free-ranging conditions. Data accumulated over the last five years have consistently demonstrated the efficiency of this protocol for investigating individual/physical cognition in monkeys, and have further shown that the procedure reduces stress levels during animal testing [3]. This paper demonstrates that networks of ALDM can also be used to investigate different facets of social cognition and behaviors expressed in groups, and describes three illustrative protocols developed for that purpose. The first study demonstrates how ethological assessments of social behavior and computerized assessments of cognitive performance can be integrated to investigate the effects of socially exhibited moods on the cognitive performance of individuals. The second study shows that batteries of ALDM running in parallel can provide unique information on how the presence of others influences task performance. Finally, the last study shows that networks of ALDM test units can also be used to study questions of social transmission and cultural evolution. Taken together, these three studies clearly demonstrate that ALDM testing is a highly promising experimental tool for bridging the gap in the animal literature between research on individual cognition and research on social cognition.
Behavior, Issue 99, Baboon, automated learning device, cultural transmission, emotion, social facilitation, cognition, operant conditioning.
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
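The software-guided setup of experiment combinations can be illustrated with the simplest case: a full-factorial enumeration of factor levels, which DoE software then typically prunes to a fractional or optimal subset. The factor names and levels below are illustrative stand-ins loosely echoing the kinds of variables the study manipulates (expression-construct elements, plant development, incubation conditions), not the study's actual design.

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels as a list of
    {factor: level} dicts (a full-factorial design)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

design = full_factorial({
    "promoter": ["35S", "nos"],      # illustrative construct variants
    "plant_age_d": [35, 42, 49],     # illustrative growth stages
    "incubation_C": [22, 25],        # illustrative incubation temperatures
})
print(len(design))  # → 12 runs (2 * 3 * 2)
for run in design[:2]:
    print(run)
```

Fractional designs cut this run count combinatorially, which is the practical motivation for the software-guided, stepwise design augmentation the abstract describes.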
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Using Flatbed Scanners to Collect High-resolution Time-lapsed Images of the Arabidopsis Root Gravitropic Response
Authors: Halie C Smith, Devon J Niewohner, Grant D Dewey, Autumn M Longo, Tracy L Guy, Bradley R Higgins, Sarah B Daehling, Sarah C. Genrich, Christopher D Wentworth, Tessa L Durham Brooks.
Institutions: Doane College.
Research efforts in biology increasingly require use of methodologies that enable high-volume collection of high-resolution data. A challenge laboratories can face is the development and attainment of these methods. Observation of phenotypes in a process of interest is a typical objective of research labs studying gene function and this is often achieved through image capture. A particular process that is amenable to observation using imaging approaches is the corrective growth of a seedling root that has been displaced from alignment with the gravity vector. Imaging platforms used to measure the root gravitropic response can be expensive, relatively low in throughput, and/or labor intensive. These issues have been addressed by developing a high-throughput image capture method using inexpensive, yet high-resolution, flatbed scanners. Using this method, images can be captured every few minutes at 4,800 dpi. The current setup enables collection of 216 individual responses per day. The image data collected is of ample quality for image analysis applications.
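The numbers above translate into concrete spatial and temporal scales: at 4,800 dpi one pixel spans 25.4 mm / 4,800 ≈ 5.3 µm, fine enough to resolve root-tip curvature. The small sketch below derives the pixel size and an image count per response; the 3-minute interval and 2-hour duration are illustrative choices, not values stated in the abstract.

```python
def pixel_size_um(dpi):
    """Physical size of one scanner pixel in micrometres
    (one inch = 25.4 mm = 25,400 um)."""
    return 25400.0 / dpi

def images_per_response(interval_min, duration_min):
    """Number of scans at a fixed interval, counting the scan at t = 0."""
    return duration_min // interval_min + 1

print(round(pixel_size_um(4800), 2))  # → 5.29 um per pixel at 4,800 dpi
print(images_per_response(3, 120))    # → 41 scans for a 2 hr response at 3 min intervals
```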
Basic Protocol, Issue 83, root gravitropism, Arabidopsis, high-throughput phenotyping, flatbed scanners, image analysis, undergraduate research
3D-Neuronavigation In Vivo Through a Patient's Brain During a Spontaneous Migraine Headache
Authors: Alexandre F. DaSilva, Thiago D. Nascimento, Tiffany Love, Marcos F. DosSantos, Ilkka K. Martikainen, Chelsea M. Cummiford, Misty DeBoer, Sarah R. Lucas, MaryCatherine A. Bender, Robert A. Koeppe, Theodore Hall, Sean Petty, Eric Maslowski, Yolanda R. Smith, Jon-Kar Zubieta.
Institutions: University of Michigan School of Dentistry, University of Michigan.
A growing body of research, generated primarily from MRI-based studies, shows that migraine appears to occur, and possibly endure, due to the alteration of specific neural processes in the central nervous system. However, information is lacking on the molecular impact of these changes, especially on the endogenous opioid system during migraine headaches, and neuronavigation through these changes has never been done. This study aimed to investigate, using a novel 3D immersive and interactive neuronavigation (3D-IIN) approach, endogenous µ-opioid transmission in the brain during a migraine headache attack in vivo. This is arguably one of the most central neuromechanisms associated with pain regulation, affecting multiple elements of the pain experience and analgesia. A 36-year-old female who had suffered from migraine for 10 years was scanned during the typical headache (ictal) and nonheadache (interictal) migraine phases using Positron Emission Tomography (PET) with the selective radiotracer [11C]carfentanil, which allowed us to measure µ-opioid receptor availability in the brain (non-displaceable binding potential, µOR BPND). The short-lived radiotracer was produced by a cyclotron and chemical-synthesis apparatus located on campus in close proximity to the imaging facility. Both PET scans, interictal and ictal, were scheduled during separate mid-late follicular phases of the patient's menstrual cycle. During the ictal PET session, her spontaneous headache attack reached severe intensity levels, progressing to nausea and vomiting at the end of the scan session. There were reductions in µOR BPND in the pain-modulatory regions of the endogenous µ-opioid system during the ictal phase, including the cingulate cortex, nucleus accumbens (NAcc), thalamus (Thal), and periaqueductal gray matter (PAG), indicating that µORs were already occupied by endogenous opioids released in response to the ongoing pain.
To our knowledge, this is the first time that changes in µOR BPND during a migraine headache attack have been neuronavigated using a novel 3D approach. This method allows for interactive research and educational exploration of a migraine attack in an actual patient's neuroimaging dataset.
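The binding-potential measure used above has a simple form: BPND is the ratio of target-region to reference-region tracer concentration at equilibrium, minus one, and an ictal reduction can be expressed as a percent change between scans. The sketch below uses invented concentrations for illustration only; these are not the patient's data.

```python
def bp_nd(c_target, c_reference):
    """Non-displaceable binding potential: target-to-reference
    radiotracer concentration ratio at equilibrium, minus one."""
    return c_target / c_reference - 1.0

def percent_change(interictal, ictal):
    """Percent change in BPND between scans (negative = reduction)."""
    return 100.0 * (ictal - interictal) / interictal

# Illustrative concentrations (arbitrary units) for one pain-modulatory region
bp_inter = bp_nd(30.0, 12.0)   # interictal scan
bp_ictal = bp_nd(26.4, 12.0)   # ictal scan: more receptors occupied by endogenous opioids
print(round(bp_inter, 2), round(bp_ictal, 2))        # → 1.5 1.2
print(round(percent_change(bp_inter, bp_ictal), 1))  # → -20.0, a 20% ictal reduction
```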
Medicine, Issue 88, μ-opioid, opiate, migraine, headache, pain, Positron Emission Tomography, molecular neuroimaging, 3D, neuronavigation
Lensless On-chip Imaging of Cells Provides a New Tool for High-throughput Cell-Biology and Medical Diagnostics
Authors: Onur Mudanyali, Anthony Erlinger, Sungkyu Seo, Ting-Wei Su, Derek Tseng, Aydogan Ozcan.
Institutions: University of California, Los Angeles.
Conventional optical microscopes image cells using objective lenses that work together with other lenses and optical components. While quite effective, this classical approach has certain limitations when miniaturizing the imaging platform to make it compatible with the advanced state of the art in microfluidics. In this report, we introduce experimental details of a lensless on-chip imaging concept termed LUCAS (Lensless Ultra-wide field-of-view Cell monitoring Array platform based on Shadow imaging) that does not require any microscope objectives or other bulky optical components to image a heterogeneous cell solution over an ultra-wide field of view that can span as large as ~18 cm2. Moreover, unlike conventional microscopes, LUCAS can image a heterogeneous cell solution of interest over a depth of field of ~5 mm without the need for refocusing, which corresponds to a sample volume of up to ~9 mL. This imaging platform records the shadows (i.e., lensless digital holograms) of each cell of interest within its field of view, and automated digital processing of these cell shadows can determine the type, count, and relative positions of cells within the solution. Because it does not require any bulky optical components or mechanical scanning stages, it offers a significantly miniaturized platform that at the same time reduces cost, which is especially important for point-of-care diagnostic tools. Furthermore, the imaging throughput of this platform is orders of magnitude better than that of conventional optical microscopes, which could be exceedingly valuable for high-throughput cell-biology experiments.
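The ~9 mL figure quoted above follows directly from the geometry: an ~18 cm² field of view times an ~5 mm depth of field. As a quick check:

```python
def imaged_volume_ml(fov_cm2, depth_of_field_mm):
    """Sample volume imaged in one shot: field of view (cm^2) times
    depth of field (mm, converted to cm); 1 cm^3 = 1 mL."""
    return fov_cm2 * (depth_of_field_mm / 10.0)

print(imaged_volume_ml(18.0, 5.0))  # → 9.0 mL, the volume quoted above
```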
Cellular Biology, Issue 34, LUCAS, lensfree imaging, on-chip imaging, point-of-care diagnostics, global health, cell-biology, telemedicine, wireless health, microscopy, red blood cells
Behavioral Assessment of Manual Dexterity in Non-Human Primates
Authors: Eric Schmidlin, Mélanie Kaeser, Anne-Dominique Gindrat, Julie Savidan, Pauline Chatagny, Simon Badoud, Adjia Hamadjida, Marie-Laure Beaud, Thierry Wannier, Abderraouf Belhaj-Saif, Eric M. Rouiller.
Institutions: University of Fribourg.
The corticospinal (CS) tract is the anatomical support of the exquisite motor ability to skillfully manipulate small objects, a prerogative mainly of primates [1]. In the case of a lesion affecting the CS projection system at its origin (lesion of motor cortical areas) or along its trajectory (cervical cord lesion), there is a dramatic loss of manual dexterity (hand paralysis), as seen in some tetraplegic or hemiplegic patients. Although there is some spontaneous functional recovery after such a lesion, it remains very limited in the adult. Various therapeutic strategies are presently proposed (e.g., cell therapy, neutralization of inhibitory axonal growth molecules, application of growth factors, etc.), which are mostly developed in rodents. However, before clinical application, it is often recommended to test the feasibility, efficacy, and safety of the treatment in non-human primates. This is especially true when the goal is to restore manual dexterity after a lesion of the central nervous system, as the organization of the motor system of rodents differs from that of primates [1,2]. Macaque monkeys are illustrated here as a suitable behavioral model to quantify manual dexterity in primates, to reflect the deficits resulting from a lesion of the motor cortex or cervical cord, to measure the extent of spontaneous functional recovery, and, when a treatment is applied, to evaluate how much it enhances functional recovery. The behavioral assessment of manual dexterity is based on four distinct, complementary reach-and-grasp manual tasks (use of precision grip to grasp pellets), requiring an initial training of adult macaque monkeys. The preparation of the animals is demonstrated, as well as their positioning with respect to the behavioral set-up. The performance of a typical monkey is illustrated for each task.
The collection and analysis of relevant parameters reflecting precise hand manipulation, as well as the control of force, are explained and demonstrated with representative results. These data are then placed in a broader context, showing how the behavioral data can be exploited to investigate the impact of a spinal cord lesion or of a lesion of the motor cortex, and to what extent a treatment may enhance the spontaneous functional recovery, by comparing different groups of monkeys (treated versus sham-treated, for instance). Advantages and limitations of the behavioral tests are discussed. The present behavioral approach is in line with previous reports emphasizing the pertinence of the non-human primate model in the context of nervous system diseases2,3.
Neuroscience, Issue 57, monkey, hand, spinal cord lesion, cerebral cortex lesion, functional recovery
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal1, with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data2, and nearly real-time analyses3. Haptic robots provide precise measurement and control of the position and force of a cursor within a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction, such as reaching or grasping. The basic idea is to attach an 8-foot end effector, supported at its center, to the robot4, allowing the subject to use the robot while shielding it and keeping it out of the most extreme part of the magnetic field of the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force feedback in virtual reality experiments5,6, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield in place, the signal-to-noise ratio (SNR: mean signal/noise standard deviation) of the fMRI drops from a baseline of ~380 to ~330; without the shielding it drops to ~250. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field, so there is no significant effect of the fMRI on the robot.
The effect of the handle on the robot's kinematics is minimal, since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite, well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space; when combined with virtual reality, it allows a new set of experiments to be performed in the fMRI environment, including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification5,6.
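The SNR figure quoted above (mean signal divided by the standard deviation of the noise) can be sketched in a few lines. The synthetic volume, mask shapes, and region choices below are illustrative assumptions, not part of the published setup.

```python
import numpy as np

def fmri_snr(volume, signal_mask, noise_mask):
    """SNR = mean signal in a region of interest / standard deviation
    of the values in a background (noise-only) region."""
    return volume[signal_mask].mean() / volume[noise_mask].std()

# Synthetic example: a 64x64x30 volume of unit-variance noise with a
# block of "tissue" whose mean intensity is 380 noise units.
rng = np.random.default_rng(0)
volume = rng.normal(0.0, 1.0, size=(64, 64, 30))
signal_mask = np.zeros(volume.shape, dtype=bool)
signal_mask[20:40, 20:40, 10:20] = True
volume[signal_mask] += 380.0
snr = fmri_snr(volume, signal_mask, ~signal_mask)
print(f"SNR ~ {snr:.0f}")
```

With real data the noise region would be drawn from outside the head, and the shield's effect would show up as a change in this ratio (e.g. from ~380 to ~330).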
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
GC-based Detection of Aldononitrile Acetate Derivatized Glucosamine and Muramic Acid for Microbial Residue Determination in Soil
Authors: Chao Liang, Harry W. Read, Teri C. Balser.
Institutions: University of Wisconsin, Madison; University of Florida.
Quantitative approaches to characterizing microorganisms are crucial for a broader understanding of microbial status and function within ecosystems. Current strategies for microbial analysis include both traditional laboratory culture-dependent techniques and those based on direct extraction and determination of certain biomarkers1,2. Few of the diverse microbial species inhabiting soil can be cultured, so culture-dependent methods introduce significant biases, a limitation absent from biomarker analysis. The amino sugars glucosamine, mannosamine, galactosamine, and muramic acid have served well as measures of both living and dead microbial mass; of these, glucosamine (the most abundant) and muramic acid (derived uniquely from bacterial cells) are the most important constituents in soil systems3,4. However, limited familiarity with the analysis has restricted its adoption among scientific peers. Among existing analytical methods, derivatization to aldononitrile acetates followed by GC-based analysis has emerged as a good option, optimally balancing precision, sensitivity, simplicity, good chromatographic separation, and stability upon sample storage5. Here, we present a detailed protocol for a reliable and relatively simple analysis of glucosamine and muramic acid from soil after their conversion to aldononitrile acetates. The protocol comprises four main steps: acid digestion, sample purification, derivatization, and GC determination. The step-by-step procedure is modified from earlier publications6,7. In addition, we present a strategy to structurally validate the molecular ion of the derivative and the ion fragments formed upon electron ionization.
We applied GC-EI-MS-SIM, LC-ESI-TOF-MS, and isotopically labeled reagents to determine the molecular weight of aldononitrile acetate derivatized glucosamine and muramic acid; we used the mass shift of isotope-labeled derivatives in the ion spectrum to investigate the ion fragments of each derivative8. In addition to the theoretical elucidation, the validation of the molecular ion of the derivative and its ion fragments will be useful to researchers using δ13C or ion fragments of these biomarkers in biogeochemical studies9,10.
Molecular Biology, Issue 63, Glucosamine, muramic acid, microbial residue, aldononitrile acetate derivatization, isotope incorporation, ion structure, electron ionization, GC, MS
Protease- and Acid-catalyzed Labeling Workflows Employing 18O-enriched Water
Authors: Diana Klingler, Markus Hardt.
Institutions: Boston Biomedical Research Institute.
Stable isotopes are essential tools in biological mass spectrometry. Historically, 18O-stable isotopes have been extensively used to study the catalytic mechanisms of proteolytic enzymes1-3. With the advent of mass spectrometry-based proteomics, the enzymatically catalyzed incorporation of 18O-atoms from stable isotopically enriched water has become a popular method to quantitatively compare protein expression levels (reviewed by Fenselau and Yao4, Miyagi and Rao5, and Ye et al.6). 18O-labeling constitutes a simple and low-cost alternative to chemical (e.g. iTRAQ, ICAT) and metabolic (e.g. SILAC) labeling techniques7. Depending on the protease utilized, 18O-labeling can result in the incorporation of up to two 18O-atoms into the C-terminal carboxyl group of the cleavage product3. The labeling reaction can be subdivided into two independent processes, the peptide bond cleavage and the carboxyl oxygen exchange reaction8. In our PALeO (protease-assisted labeling employing 18O-enriched water) adaptation of enzymatic 18O-labeling, we utilized 50% 18O-enriched water to yield distinctive isotope signatures. In combination with high-resolution matrix-assisted laser desorption ionization time-of-flight tandem mass spectrometry (MALDI-TOF/TOF MS/MS), the characteristic isotope envelopes can be used to identify cleavage products with a high level of specificity. We have previously used the PALeO methodology to detect and characterize endogenous proteases9 and to monitor proteolytic reactions10-11. Since PALeO encodes the very essence of the proteolytic cleavage reaction, the experimental setup is simple and biochemical enrichment steps for cleavage products can be circumvented. The PALeO method can easily be extended to (i) time-course experiments that monitor the dynamics of proteolytic cleavage reactions and (ii) the analysis of proteolysis in complex biological samples that represent physiological conditions.
PALeO-TimeCourse experiments help identify rate-limiting processing steps and reaction intermediates in complex proteolytic pathway reactions. Furthermore, the PALeO reaction allows us to identify proteolytic enzymes, such as the serine protease trypsin, that are capable of rebinding their cleavage products and catalyzing the incorporation of a second 18O-atom. Such "double-labeling" enzymes can be used for postdigestion 18O-labeling, in which peptides are labeled exclusively by the carboxyl oxygen exchange reaction. Our third strategy extends labeling with 18O-enriched water beyond enzymes, using acidic pH conditions to introduce 18O stable isotope signatures into peptides.
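As a back-of-the-envelope illustration of why the isotope envelopes are distinctive: each 16O-to-18O exchange adds about 2.0042 Da, so a cleavage product carrying zero, one, or two 18O atoms appears at characteristic mass offsets, and with 50% enriched water and a double-labeling protease the three species occur at roughly 1:2:1 intensities. The constants and the simple binomial model below are standard chemistry, not values taken from this work.

```python
from math import comb

# Monoisotopic mass difference between 18O and 16O, in daltons.
DELTA_18O = 2.004246

def label_offsets(max_labels=2):
    """Mass offsets (Da) of a cleavage product carrying 0..max_labels 18O atoms."""
    return [round(n * DELTA_18O, 4) for n in range(max_labels + 1)]

def envelope_intensities(enrichment=0.5, sites=2):
    """Relative abundance of each labeled species, assuming each of the
    `sites` exchangeable carboxyl oxygens independently ends up as 18O
    with probability `enrichment` (a simple binomial model)."""
    return [comb(sites, k) * enrichment**k * (1 - enrichment)**(sites - k)
            for k in range(sites + 1)]

print(label_offsets())          # [0.0, 2.0042, 4.0085]
print(envelope_intensities())   # [0.25, 0.5, 0.25]
```

The 1:2:1 pattern is what makes 50% enrichment useful: a fully double-labeled peptide shows a split envelope that an unlabeled contaminant cannot mimic.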
Biochemistry, Issue 72, Molecular Biology, Proteins, Proteomics, Chemistry, Physics, MALDI-TOF mass spectrometry, proteomics, proteolysis, quantification, stable isotope labeling, labeling, catalyst, peptides, 18-O enriched water
Peering into the Dynamics of Social Interactions: Measuring Play Fighting in Rats
Authors: Brett T. Himmler, Vivien C. Pellis, Sergio M. Pellis.
Institutions: University of Lethbridge.
Play fighting in the rat involves attack and defense of the nape of the neck, which, if contacted, is gently nuzzled with the snout. Because the movements of one animal are countered by the actions of its partner, play fighting is a complex, dynamic interaction. This dynamic complexity raises methodological problems about what to score for experimental studies. We present a scoring schema that is sensitive to the correlated nature of the actions performed. The frequency of play fighting can be measured by counting the number of playful nape attacks occurring per unit time. However, playful defense, as it can only occur in response to attack, is necessarily a contingent measure that is best expressed as a percentage (number of attacks defended / total number of attacks × 100%). How a particular attack is defended against can involve one of several tactics, and these are contingent on defense having taken place; consequently, the type of defense is also best expressed contingently, as a percentage of all defenses. Two experiments illustrate how these measurements can be used to detect the effect of brain damage on play fighting even when there is no effect on overall playfulness. That is, the schema presented here is designed to detect and evaluate changes in the content of play following an experimental treatment.
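The contingent scoring described above can be sketched in a few lines; the function name and the example counts are invented for illustration and are not data from the study.

```python
def play_scores(total_attacks, defended, tactic_counts):
    """Contingent measures for play fighting.

    total_attacks: playful nape attacks observed (the frequency measure)
    defended: how many of those attacks were defended against
    tactic_counts: {tactic name: count}, summing to `defended`
    """
    pct_defended = 100.0 * defended / total_attacks
    # Each tactic is expressed as a percentage of defenses, not of attacks,
    # because tactic choice is contingent on a defense having occurred.
    tactic_pct = {t: 100.0 * n / defended for t, n in tactic_counts.items()}
    return pct_defended, tactic_pct

pct, tactics = play_scores(40, 30, {"evasion": 12, "facing defense": 18})
print(pct)      # 75.0
print(tactics)  # {'evasion': 40.0, 'facing defense': 60.0}
```

Keeping the two denominators separate is the point of the schema: a treatment can leave attack frequency unchanged while shifting the distribution of defense tactics.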
Neuroscience, Issue 71, Neurobiology, Behavior, Psychology, Anatomy, Physiology, Medicine, Play behavior, play, fighting, wrestling, grooming, allogrooming, social interaction, rat, behavioral analysis, animal model
Analysis of Targeted Viral Protein Nanoparticles Delivered to HER2+ Tumors
Authors: Jae Youn Hwang, Daniel L. Farkas, Lali K. Medina-Kauwe.
Institutions: University of Southern California, Cedars-Sinai Medical Center, University of California, Los Angeles.
The HER2+ tumor-targeted nanoparticle, HerDox, exhibits tumor-preferential accumulation and tumor-growth ablation in an animal model of HER2+ cancer. HerDox is formed by non-covalent self-assembly of a tumor targeted cell penetration protein with the chemotherapy agent, doxorubicin, via a small nucleic acid linker. A combination of electrophilic, intercalation, and oligomerization interactions facilitate self-assembly into round 10-20 nm particles. HerDox exhibits stability in blood as well as in extended storage at different temperatures. Systemic delivery of HerDox in tumor-bearing mice results in tumor-cell death with no detectable adverse effects to non-tumor tissue, including the heart and liver (which undergo marked damage by untargeted doxorubicin). HER2 elevation facilitates targeting to cells expressing the human epidermal growth factor receptor, hence tumors displaying elevated HER2 levels exhibit greater accumulation of HerDox compared to cells expressing lower levels, both in vitro and in vivo. Fluorescence intensity imaging combined with in situ confocal and spectral analysis has allowed us to verify in vivo tumor targeting and tumor cell penetration of HerDox after systemic delivery. Here we detail our methods for assessing tumor targeting via multimode imaging after systemic delivery.
Biomedical Engineering, Issue 76, Cancer Biology, Medicine, Bioengineering, Molecular Biology, Cellular Biology, Biochemistry, Nanotechnology, Nanomedicine, Drug Delivery Systems, Molecular Imaging, optical imaging devices (design and techniques), HerDox, Nanoparticle, Tumor, Targeting, Self-Assembly, Doxorubicin, Human Epidermal Growth Factor, HER, HER2+, Receptor, mice, animal model, tumors, imaging
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratory.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
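The recasting logic reduces to a simple yield comparison: the acceptance and efficiency tables give the fraction of a theory's events landing in a signal region, and a model point is excluded if its predicted yield exceeds the experiment's upper limit on signal events there. The numbers below are placeholders for illustration, not results from any actual search.

```python
def expected_events(sigma_pb, lumi_ifb, acceptance, efficiency):
    """Predicted signal yield in a signal region:
    cross section x integrated luminosity x acceptance x efficiency.
    (1 pb of cross section gives 1000 events per fb^-1 of luminosity.)"""
    return sigma_pb * 1000.0 * lumi_ifb * acceptance * efficiency

def is_excluded(sigma_pb, lumi_ifb, acceptance, efficiency, n95_limit):
    """Exclude the model point at 95% CL if its predicted yield exceeds
    the observed upper limit on signal events in that region."""
    return expected_events(sigma_pb, lumi_ifb, acceptance, efficiency) > n95_limit

# Placeholder numbers: a 0.05 pb model point, 4.7 fb^-1 of data,
# 30% acceptance, 80% efficiency, and an observed limit of 25 events.
signal_yield = expected_events(0.05, 4.7, 0.30, 0.80)
print(round(signal_yield, 1), is_excluded(0.05, 4.7, 0.30, 0.80, n95_limit=25.0))
```

Conservative versus aggressive limits then amount to which signal regions are combined and whether the lowest or highest applicable acceptance-times-efficiency is assumed for the full theory.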
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for the preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases through the combination of whole brain-based and tract-based DTI analysis.
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Phage Phenomics: Physiological Approaches to Characterize Novel Viral Proteins
Authors: Savannah E. Sanchez, Daniel A. Cuevas, Jason E. Rostron, Tiffany Y. Liang, Cullen G. Pivaroff, Matthew R. Haynes, Jim Nulton, Ben Felts, Barbara A. Bailey, Peter Salamon, Robert A. Edwards, Alex B. Burgin, Anca M. Segall, Forest Rohwer.
Institutions: San Diego State University, Argonne National Laboratory, Broad Institute.
Current investigations into phage-host interactions are dependent on extrapolating knowledge from (meta)genomes. Interestingly, 60-95% of all phage sequences share no homology with currently annotated proteins. As a result, a large proportion of phage genes are annotated as hypothetical. This reality heavily affects the annotation of both structural and auxiliary metabolic genes. Here we present phenomic methods designed to capture the physiological response(s) of a selected host during expression of one of these unknown phage genes. Multi-phenotype Assay Plates (MAPs) are used to monitor the diversity of host substrate utilization and subsequent biomass formation, while metabolomics provides by-product analysis by monitoring metabolite abundance and diversity. Both tools are used simultaneously to provide a phenotypic profile associated with expression of a single putative phage open reading frame (ORF). Representative results for both methods are compared, highlighting the phenotypic profile differences of a host carrying either putative structural or metabolic phage genes. In addition, the visualization techniques and high-throughput computational pipelines that facilitated experimental analysis are presented.
Immunology, Issue 100, phenomics, phage, viral metagenome, Multi-phenotype Assay Plates (MAPs), continuous culture, metabolomics
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.