PubMed Article
Parameter identification of robot manipulators: a heuristic particle swarm search approach.
PLoS ONE
PUBLISHED: 06-04-2015
Parameter identification of robot manipulators is an indispensable step in building accurate dynamic robot models. Because these kinetic models are highly nonlinear, identifying their parameters is not easy. To address this difficulty effectively, we present an intelligent approach, namely a heuristic particle swarm optimization (PSO) algorithm, which we call the elitist learning strategy (ELS) and proportional integral derivative (PID) controller hybridized PSO approach (ELPIDSO). Together with ELS, a dedicated PID controller is designed to improve the particles' local and global best position information. Parameter identification of robot manipulators is conducted to evaluate the performance of the proposed approach. Experimental results indicate that, compared with the standard PSO (SPSO) algorithm, ELPIDSO not only enhances the diversity of the swarm but also features better search effectiveness and efficiency in solving practical optimization problems. Accordingly, ELPIDSO is superior to the least squares (LS) method, the genetic algorithm (GA), and the SPSO algorithm in estimating the parameters of the kinetic models of robot manipulators.
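The baseline against which ELPIDSO is compared is easy to sketch. The snippet below is a minimal standard PSO (SPSO) loop, not the authors' ELPIDSO variant, identifying two hypothetical dynamic parameters (inertia and viscous friction) of a one-joint arm from simulated torque data; the model, gains, and values are illustrative assumptions.

```python
# A minimal sketch (not the authors' ELPIDSO): standard particle swarm optimization
# identifying two hypothetical parameters (inertia I, viscous friction b) of a
# one-joint arm from simulated torque data, tau = I*qdd + b*qd.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements" generated with known ground-truth parameters.
true_I, true_b = 0.9, 0.25
qd = rng.uniform(-2.0, 2.0, 200)           # joint velocities
qdd = rng.uniform(-5.0, 5.0, 200)          # joint accelerations
tau = true_I * qdd + true_b * qd + rng.normal(0, 0.02, 200)  # noisy torques

def cost(params):
    I, b = params
    return np.mean((tau - (I * qdd + b * qd)) ** 2)  # squared prediction error

# Standard PSO with inertia weight and cognitive/social terms.
n_particles, n_iter = 30, 100
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 2.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("identified [I, b]:", gbest)   # should approach [0.9, 0.25]
```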
Authors: Jacopo Tessadori, Michela Chiappalone.
Published: 03-02-2015
ABSTRACT
Information coding in the Central Nervous System (CNS) remains poorly understood. There is mounting evidence that, even at a very low level, the representation of a given stimulus might depend on context and history. If this is actually the case, bi-directional interactions between the brain (or, if need be, a reduced model of it) and the sensory-motor system can shed light on how encoding and decoding of information are performed. Here an experimental system is introduced and described in which the activity of a neuronal element (i.e., a network of neurons extracted from embryonic mammalian hippocampi) is given context and used to control the movement of an artificial agent, while environmental information is fed back to the culture as a sequence of electrical stimuli. This architecture allows quick selection among diverse encoding, decoding, and learning algorithms to test different hypotheses on the computational properties of neuronal networks.
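A minimal sketch of such a closed-loop architecture is given below, with every mapping and number invented for illustration: a decoder turns a placeholder firing rate into a motor command for the agent, and an encoder turns the agent's sensed distance to an obstacle into a stimulation frequency fed back to the culture.

```python
# Hypothetical closed-loop sketch: culture activity -> agent movement -> stimuli.
# All function names, gains, and rates are assumptions for illustration only.
import random

def decode(spike_rate_hz, gain=0.05):
    """Map the culture's firing rate (Hz) to a turning command (rad/s)."""
    return gain * (spike_rate_hz - 10.0)   # 10 Hz taken as a neutral baseline

def encode(obstacle_distance_m, max_stim_hz=5.0):
    """Map sensed obstacle distance to a stimulation frequency (Hz)."""
    return max_stim_hz / (1.0 + obstacle_distance_m)

heading, distance = 0.0, 2.0
for step in range(10):
    spike_rate = 10.0 + random.gauss(0, 3)   # placeholder for recorded activity
    heading += decode(spike_rate)            # decoding: activity -> movement
    distance = max(0.1, distance - 0.1)      # agent advances toward an obstacle
    stim_hz = encode(distance)               # encoding: environment -> stimuli
    print(f"step {step}: heading={heading:+.2f} rad, stimulation={stim_hz:.2f} Hz")
```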
24 Related JoVE Articles!
Semi-automated Optical Heartbeat Analysis of Small Hearts
Authors: Karen Ocorr, Martin Fink, Anthony Cammarato, Sanford I. Bernstein, Rolf Bodmer.
Institutions: The Sanford Burnham Institute for Medical Research, The Sanford Burnham Institute for Medical Research, San Diego State University.
We have developed a method for analyzing high speed optical recordings from Drosophila, zebrafish and embryonic mouse hearts (Fink et al., 2009). Our Semi-automatic Optical Heartbeat Analysis (SOHA) uses a novel movement detection algorithm that is able to detect cardiac movements associated with individual contraction and relaxation events. The program provides a host of physiologically relevant readouts including systolic and diastolic intervals, heart rate, as well as qualitative and quantitative measures of heartbeat arrhythmicity. The program also calculates heart diameter measurements during both diastole and systole, from which fractional shortening and fractional area changes are calculated. Output is provided as a digital file compatible with most spreadsheet programs. Measurements are made for every heartbeat in a record, increasing the statistical power of the output. We demonstrate each of the steps where user input is required and show the application of our methodology to the analysis of heart function in all three genetically tractable heart models.
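The two derived readouts mentioned above follow directly from the diameter and area measurements; the sketch below shows the standard definitions with placeholder values (SOHA itself extracts the diameters from the movie frames).

```python
# Standard definitions of the derived readouts; the diastolic/systolic values
# below are arbitrary placeholders, not measurements from the article.
def fractional_shortening(d_diastole, d_systole):
    """Fractional shortening from end-diastolic and end-systolic diameters."""
    return (d_diastole - d_systole) / d_diastole

def fractional_area_change(a_diastole, a_systole):
    """Fractional area change from end-diastolic and end-systolic areas."""
    return (a_diastole - a_systole) / a_diastole

print(fractional_shortening(80.0, 55.0))       # 0.3125
print(fractional_area_change(5000.0, 3200.0))  # 0.36
```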
Physiology, Issue 31, Drosophila, zebrafish, mouse, heart, myosin, dilated, restricted, cardiomyopathy, KCNQ, movement detection
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
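As a toy illustration of what a designed experiment looks like in code, the snippet below enumerates a plain full-factorial design over three hypothetical factors and levels; the study itself uses software-guided designs (and different factors), so this is only a sketch of the idea.

```python
# Full-factorial design over three hypothetical factors; factor names and levels
# are invented examples, not the article's design, which is software-guided.
from itertools import product

factors = {
    "promoter": ["35S", "nos"],          # regulatory element in the construct
    "plant_age_days": [35, 42, 49],      # growth/development parameter
    "incubation_temp_C": [22, 25],       # incubation condition
}

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(design, 1):
    print(f"run {i:02d}: {run}")
print(f"{len(design)} runs in the full-factorial design")
```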
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Authors: Natalie J. Saez, Hervé Nozach, Marilyne Blemont, Renaud Vincentelli.
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment. Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Scalable High Throughput Selection From Phage-displayed Synthetic Antibody Libraries
Authors: Shane Miersch, Zhijian Li, Rachel Hanna, Megan E. McLaughlin, Michael Hornsby, Tet Matsuguchi, Marcin Paduch, Annika Sääf, Jim Wells, Shohei Koide, Anthony Kossiakoff, Sachdev S. Sidhu.
Institutions: The Recombinant Antibody Network, University of Toronto, University of California, San Francisco at Mission Bay, The University of Chicago.
The demand for antibodies that fulfill the needs of both basic and clinical research applications is high and will dramatically increase in the future. However, it is apparent that traditional monoclonal technologies alone are not up to this task. This has led to the development of alternate methods to satisfy the demand for high quality and renewable affinity reagents to all accessible elements of the proteome. Toward this end, high throughput methods for conducting selections from phage-displayed synthetic antibody libraries have been devised for applications involving diverse antigens and optimized for rapid throughput and success. Herein, a protocol is described in detail that illustrates with video demonstration the parallel selection of Fab-phage clones from high diversity libraries against hundreds of targets using either a manual 96 channel liquid handler or automated robotics system. Using this protocol, a single user can generate hundreds of antigens, select antibodies to them in parallel and validate antibody binding within 6-8 weeks. Highlighted are: i) a viable antigen format, ii) pre-selection antigen characterization, iii) critical steps that influence the selection of specific and high affinity clones, and iv) ways of monitoring selection effectiveness and early stage antibody clone characterization. With this approach, we have obtained synthetic antibody fragments (Fabs) to many target classes including single-pass membrane receptors, secreted protein hormones, and multi-domain intracellular proteins. These fragments are readily converted to full-length antibodies and have been validated to exhibit high affinity and specificity. Further, they have been demonstrated to be functional in a variety of standard immunoassays including Western blotting, ELISA, cellular immunofluorescence, immunoprecipitation and related assays. This methodology will accelerate antibody discovery and ultimately bring us closer to realizing the goal of generating renewable, high quality antibodies to the proteome.
Immunology, Issue 95, Bacteria, Viruses, Amino Acids, Peptides, and Proteins, Nucleic Acids, Nucleotides, and Nucleosides, Life Sciences (General), phage display, synthetic antibodies, high throughput, antibody selection, scalable methodology
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Using Insect Electroantennogram Sensors on Autonomous Robots for Olfactory Searches
Authors: Dominique Martinez, Lotfi Arhidi, Elodie Demondion, Jean-Baptiste Masson, Philippe Lucas.
Institutions: Centre National de la Recherche Scientifique (CNRS), Institut d'Ecologie et des Sciences de l'Environnement de Paris, Institut Pasteur.
Robots designed to track chemical leaks in hazardous industrial facilities1 or explosive traces in landmine fields2 face the same problem as insects foraging for food or searching for mates3: the olfactory search is constrained by the physics of turbulent transport4. The concentration landscape of wind borne odors is discontinuous and consists of sporadically located patches. A pre-requisite to olfactory search is that intermittent odor patches are detected. Because of its high speed and sensitivity5-6, the olfactory organ of insects provides a unique opportunity for detection. Insect antennae have been used in the past to detect not only sex pheromones7 but also chemicals that are relevant to humans, e.g., volatile compounds emanating from cancer cells8 or toxic and illicit substances9-11. We describe here a protocol for using insect antennae on autonomous robots and present a proof of concept for tracking odor plumes to their source. The global response of olfactory neurons is recorded in situ in the form of electroantennograms (EAGs). Our experimental design, based on a whole insect preparation, allows stable recordings within a working day. In comparison, EAGs on excised antennae have a lifetime of 2 hr. A custom hardware/software interface was developed between the EAG electrodes and a robot. The measurement system resolves individual odor patches up to 10 Hz, which exceeds the time scale of artificial chemical sensors12. The efficiency of EAG sensors for olfactory searches is further demonstrated in driving the robot toward a source of pheromone. By using identical olfactory stimuli and sensors as in real animals, our robotic platform provides a direct means for testing biological hypotheses about olfactory coding and search strategies13. It may also prove beneficial for detecting other odorants of interests by combining EAGs from different insect species in a bioelectronic nose configuration14 or using nanostructured gas sensors that mimic insect antennae15.
Neuroscience, Issue 90, robotics, electroantennogram, EAG, gas sensor, electronic nose, olfactory search, surge and casting, moth, insect, olfaction, neuron
Averaging of Viral Envelope Glycoprotein Spikes from Electron Cryotomography Reconstructions using Jsubtomo
Authors: Juha T. Huiskonen, Marie-Laure Parsy, Sai Li, David Bitto, Max Renner, Thomas A. Bowden.
Institutions: University of Oxford.
Enveloped viruses utilize membrane glycoproteins on their surface to mediate entry into host cells. Three-dimensional structural analysis of these glycoprotein ‘spikes’ is often technically challenging but important for understanding viral pathogenesis and for drug design. Here, a protocol is presented for viral spike structure determination through computational averaging of electron cryo-tomography data. Electron cryo-tomography is a technique in electron microscopy used to derive three-dimensional tomographic volume reconstructions, or tomograms, of pleomorphic biological specimens such as membrane viruses in a near-native, frozen-hydrated state. These tomograms reveal structures of interest in three dimensions, albeit at low resolution. Computational averaging of sub-volumes, or sub-tomograms, is necessary to obtain higher resolution detail of repeating structural motifs, such as viral glycoprotein spikes. A detailed computational approach for aligning and averaging sub-tomograms using the Jsubtomo software package is outlined. This approach enables visualization of the structure of viral glycoprotein spikes to a resolution in the range of 20-40 Å and study of higher-order spike-to-spike interactions on the virion membrane. Typical results are presented for Bunyamwera virus, an enveloped virus from the family Bunyaviridae. This family is a structurally diverse group of pathogens posing a threat to human and animal health.
Immunology, Issue 92, electron cryo-microscopy, cryo-electron microscopy, electron cryo-tomography, cryo-electron tomography, glycoprotein spike, enveloped virus, membrane virus, structure, subtomogram, averaging
Preparation, Imaging, and Quantification of Bacterial Surface Motility Assays
Authors: Nydia Morales-Soto, Morgen E. Anyan, Anne E. Mattingly, Chinedu S. Madukoma, Cameron W. Harvey, Mark Alber, Eric Déziel, Daniel B. Kearns, Joshua D. Shrout.
Institutions: University of Notre Dame, University of Notre Dame, University of Notre Dame, INRS-Institut Armand-Frappier, Indiana University, University of Notre Dame.
Bacterial surface motility, such as swarming, is commonly examined in the laboratory using plate assays that necessitate specific concentrations of agar and sometimes inclusion of specific nutrients in the growth medium. The preparation of such explicit media and surface growth conditions serves to provide the favorable conditions that allow not just bacterial growth but coordinated motility of bacteria over these surfaces within thin liquid films. Reproducibility of swarm plate and other surface motility plate assays can be a major challenge. Especially for more “temperate swarmers” that exhibit motility only within agar ranges of 0.4%-0.8% (wt/vol), minor changes in protocol or laboratory environment can greatly influence swarm assay results. “Wettability”, or water content at the liquid-solid-air interface of these plate assays, is often a key variable to be controlled. An additional challenge in assessing swarming is how to quantify observed differences between any two (or more) experiments. Here we detail a versatile two-phase protocol to prepare and image swarm assays. We include guidelines to circumvent the challenges commonly associated with swarm assay media preparation and quantification of data from these assays. We specifically demonstrate our method using bacteria that express fluorescent or bioluminescent genetic reporters like green fluorescent protein (GFP), luciferase (lux operon), or cellular stains to enable time-lapse optical imaging. We further demonstrate the ability of our method to track competing swarming species in the same experiment.
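Quantification of such assays often reduces to measuring how much of the plate is colonized in each frame. The sketch below applies a simple intensity threshold to synthetic image frames; the threshold, image source, and time points are placeholders, not the article's analysis pipeline.

```python
# Toy quantification: fraction of plate colonized per frame by thresholding a
# (synthetic) intensity image, e.g., a GFP channel. Threshold is an assumption.
import numpy as np

def colonized_fraction(image, threshold=0.5):
    """Fraction of pixels whose intensity exceeds the threshold."""
    return float((image > threshold).mean())

rng = np.random.default_rng(1)
frames = [np.clip(rng.normal(0.3 + 0.1 * t, 0.1, (256, 256)), 0, 1) for t in range(5)]
for t, frame in enumerate(frames):
    print(f"t={t} hr: colonized fraction = {colonized_fraction(frame):.2f}")
```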
Microbiology, Issue 98, Surface motility, Swarming, Imaging, Pseudomonas aeruginosa, Salmonella Typhimurium, Bacillus subtilis, Myxococcus xanthus, Flagella
High-throughput Screening for Chemical Modulators of Post-transcriptionally Regulated Genes
Authors: Viktoryia Sidarovich, Valentina Adami, Alessandro Quattrone.
Institutions: University of Trento, University of Trento.
Both transcriptional and post-transcriptional regulation have a profound impact on gene expression. However, commonly adopted cell-based screening assays focus on transcriptional regulation, being essentially aimed at the identification of promoter-targeting molecules. As a result, post-transcriptional mechanisms are largely overlooked in gene expression-targeted drug development. Here we describe a cell-based assay aimed at investigating the role of the 3' untranslated region (3' UTR) in the modulation of the fate of its mRNA, and at identifying compounds able to modify it. The assay is based on the use of a luciferase reporter construct containing the 3' UTR of a gene of interest stably integrated into a disease-relevant cell line. The protocol is divided into two parts, with the initial focus on the primary screening aimed at the identification of molecules affecting luciferase activity after 24 hr of treatment. The second part of the protocol describes the counter-screening necessary to discriminate compounds modulating luciferase activity specifically through the 3' UTR. In addition to the detailed protocol and representative results, we provide important considerations about the assay development and the validation of the hit(s) on the endogenous target. The described cell-based reporter gene assay will allow scientists to identify molecules modulating protein levels via post-transcriptional mechanisms dependent on a 3' UTR.
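A minimal sketch of the primary-screen scoring logic is shown below under assumed conventions: compound wells are normalized to DMSO controls, and a compound counts as a 3' UTR-specific hit only if it shifts the 3' UTR reporter while leaving a control reporter lacking the 3' UTR unchanged (the counter-screen). All numbers and thresholds are invented.

```python
# Hypothetical hit-calling sketch: normalize to DMSO controls and require that
# the effect is absent in a counter-screen construct lacking the 3' UTR.
import statistics

def fold_change(sample, controls):
    return sample / statistics.mean(controls)

dmso_controls = [10200, 9800, 10100, 9950]                     # raw luminescence units
compounds = {"cmpd_A": 4200, "cmpd_B": 9900, "cmpd_C": 21500}  # 3' UTR reporter signal
counter = {"cmpd_A": 0.95, "cmpd_B": 1.02, "cmpd_C": 2.1}      # fold change, no-3'UTR construct

for name, signal in compounds.items():
    fc = fold_change(signal, dmso_controls)
    utr_specific = abs(fc - 1) > 0.5 and abs(counter[name] - 1) < 0.3
    print(f"{name}: 3'UTR reporter FC={fc:.2f}, 3'UTR-specific hit={utr_specific}")
```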
Cellular Biology, Issue 97, Post-transcriptional control, 3' UTR, high-throughput screening, cell-based assay, reporter, luciferase, small molecules
Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System
Authors: Michael K. Conway, Michael J. Gerger, Erin E. Balay, Rachel O'Connell, Seth Hanson, Neil J. Daily, Tetsuro Wakatsuki.
Institutions: InvivoSciences, Inc., Gilson, Inc..
Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceuticals and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large scale cell production. Yet this can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up can become prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols, human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulation designed for 96-well plates. This technology will reduce the labor and technical burden to produce large numbers of identical stem cells for a myriad of applications.
Developmental Biology, Issue 99, iPSC, high-throughput, robotic, liquid-handling, scalable, stem cell, automated stem cell culture, 96-well
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
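Fractional anisotropy, the voxelwise metric named above, has a standard closed-form definition in terms of the diffusion tensor eigenvalues; the snippet below computes it for an arbitrary example tensor (not data from the article).

```python
# Fractional anisotropy from the eigenvalues of a diffusion tensor; the tensor
# below is an arbitrary anisotropic example, not data from the study.
import numpy as np

def fractional_anisotropy(tensor):
    """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    lam = np.linalg.eigvalsh(tensor)
    md = lam.mean()
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])   # mm^2/s, strongly anisotropic voxel
print(f"FA = {fractional_anisotropy(D):.3f}")
```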
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
An Experimental Platform to Study the Closed-loop Performance of Brain-machine Interfaces
Authors: Naveed Ejaz, Kris D. Peterson, Holger G. Krapp.
Institutions: Imperial College London.
The non-stationary nature and variability of neuronal signals is a fundamental problem in brain-machine interfacing. We developed a brain-machine interface to assess the robustness of different control laws applied to a closed-loop image stabilization task. Taking advantage of the well-characterized fly visuomotor pathway, we record the electrical activity from an identified, motion-sensitive neuron, H1, to control the yaw rotation of a two-wheeled robot. The robot is equipped with 2 high-speed video cameras providing visual motion input to a fly placed in front of 2 CRT computer monitors. The activity of the H1 neuron indicates the direction and relative speed of the robot's rotation. The neural activity is filtered and fed back into the steering system of the robot by means of proportional and proportional/adaptive control. Our goal is to test and optimize the performance of various control laws under closed-loop conditions for broader application in other brain-machine interfaces.
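A minimal sketch of the proportional (and crudely adaptive) control idea is shown below; the firing rates, gain values, and update rule are invented placeholders rather than the article's calibrated control laws.

```python
# Hypothetical proportional control law: a direction-selective firing-rate
# difference is scaled into a yaw-rate command; the gain is adapted naively.
def proportional_yaw_command(rate_preferred_hz, rate_null_hz, k_p=0.01):
    """Yaw command (rad/s) proportional to the direction-selective response."""
    return k_p * (rate_preferred_hz - rate_null_hz)

def adaptive_gain(k_p, error, learning_rate=1e-4):
    """Crude adaptive update of the proportional gain from the residual error."""
    return k_p + learning_rate * abs(error)

k_p = 0.01
for rate_pref, rate_null in [(120.0, 40.0), (90.0, 60.0), (70.0, 68.0)]:
    cmd = proportional_yaw_command(rate_pref, rate_null, k_p)
    k_p = adaptive_gain(k_p, rate_pref - rate_null)
    print(f"yaw command = {cmd:+.2f} rad/s, updated k_p = {k_p:.5f}")
```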
Neuroscience, Issue 49, Stabilization reflexes, Sensorimotor control, Adaptive control, Insect vision
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very user-friendly interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
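To make the over-representation idea concrete, the toy below counts occurrences of one candidate motif in a few made-up promoter sequences and scores the count against a simple uniform-base binomial background; this is only an illustration of the concept, not SCOPE's actual scoring.

```python
# Toy over-representation score (not SCOPE's algorithms): how surprising is the
# observed motif count under a uniform-base binomial background model?
from math import comb

def count_motif(seqs, motif):
    return sum(seq.count(motif) for seq in seqs)

def binomial_pvalue(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

promoters = ["ACCGGTTTACCGGTAA", "GGGACCGGTCCA", "TTTTACCGGTGG"]   # made-up sequences
motif = "ACCGGT"
positions = sum(len(s) - len(motif) + 1 for s in promoters)        # possible start sites
background_rate = 0.25 ** len(motif)                               # uniform-base model
hits = count_motif(promoters, motif)
print(f"{hits} occurrences, p = {binomial_pvalue(hits, positions, background_rate):.2e}")
```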
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Haptic/Graphic Rehabilitation: Integrating a Robot into a Virtual Environment Library and Applying it to Stroke Therapy
Authors: Ian Sharp, James Patton, Molly Listenberger, Emily Case.
Institutions: University of Illinois at Chicago and Rehabilitation Institute of Chicago, Rehabilitation Institute of Chicago.
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary developmental overhead time each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that specific robot would not be able to be traded for another robot without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
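The wrapper-class idea can be illustrated in a few lines (here in Python for brevity; the article's implementation is a C++ class within H3DAPI). The device classes and the simple spring force are invented; the point is that the session code depends only on the common interface, so robots can be swapped without recoding.

```python
# Language-agnostic sketch of the wrapper-class pattern; all class names are
# invented, and the vendor SDK calls are stubbed out with constants.
from abc import ABC, abstractmethod

class HapticDevice(ABC):
    """Common interface the virtual-environment code programs against."""
    @abstractmethod
    def read_position(self): ...
    @abstractmethod
    def send_force(self, fx, fy, fz): ...

class RobotA(HapticDevice):
    def read_position(self):
        return (0.0, 0.0, 0.0)           # would call vendor A's SDK here
    def send_force(self, fx, fy, fz):
        print(f"RobotA force: {fx}, {fy}, {fz}")

class RobotB(HapticDevice):
    def read_position(self):
        return (0.01, 0.0, -0.02)        # would call vendor B's SDK here
    def send_force(self, fx, fy, fz):
        print(f"RobotB force: {fx}, {fy}, {fz}")

def run_therapy_session(device: HapticDevice):
    x, y, z = device.read_position()
    device.send_force(-10.0 * x, -10.0 * y, -10.0 * z)   # simple spring toward origin

run_therapy_session(RobotA())   # swapping robots requires no change to the session code
run_therapy_session(RobotB())
```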
Bioengineering, Issue 54, robotics, haptics, virtual reality, wrapper class, rehabilitation robotics, neural engineering, H3DAPI, C++
Adaptation of a Haptic Robot in a 3T fMRI
Authors: Joseph Snider, Markus Plank, Larry May, Thomas T. Liu, Howard Poizner.
Institutions: University of California, University of California, University of California.
Functional magnetic resonance imaging (fMRI) provides excellent functional brain imaging via the BOLD signal 1 with advantages including non-ionizing radiation, millimeter spatial accuracy of anatomical and functional data 2, and nearly real-time analyses 3. Haptic robots provide precise measurement and control of position and force of a cursor in a reasonably confined space. Here we combine these two technologies to allow precision experiments involving motor control with haptic/tactile environment interaction such as reaching or grasping. The basic idea is to attach an 8 foot end effector supported in the center to the robot 4, allowing the subject to use the robot, but shielding it and keeping it out of the most extreme part of the magnetic field from the fMRI machine (Figure 1). The Phantom Premium 3.0, 6DoF, high-force robot (SensAble Technologies, Inc.) is an excellent choice for providing force-feedback in virtual reality experiments 5, 6, but it is inherently non-MR safe, introduces significant noise to the sensitive fMRI equipment, and its electric motors may be affected by the fMRI's strongly varying magnetic field. We have constructed a table and shielding system that allows the robot to be safely introduced into the fMRI environment and limits both the degradation of the fMRI signal by the electrically noisy motors and the degradation of the electric motor performance by the strongly varying magnetic field of the fMRI. With the shield, the signal to noise ratio (SNR: mean signal/noise standard deviation) of the fMRI goes from a baseline of ~380 to ~330, and to ~250 without the shielding. The remaining noise appears to be uncorrelated and does not add artifacts to the fMRI of a test sphere (Figure 2). The long, stiff handle allows placement of the robot out of range of the most strongly varying parts of the magnetic field so there is no significant effect of the fMRI on the robot. The effect of the handle on the robot's kinematics is minimal since it is lightweight (~2.6 lbs) but extremely stiff 3/4" graphite and well balanced on the 3DoF joint in the middle. The end result is an fMRI-compatible haptic system with about 1 cubic foot of working space, and, when combined with virtual reality, it allows for a new set of experiments to be performed in the fMRI environment including naturalistic reaching, passive displacement of the limb and haptic perception, adaptation learning in varying force fields, or texture identification 5, 6.
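The SNR figure quoted above (mean signal divided by the standard deviation of the background noise) is straightforward to reproduce; the arrays below are synthetic stand-ins chosen only to echo the reported shielded and unshielded values.

```python
# SNR = mean signal / noise standard deviation, computed on synthetic stand-ins
# for fMRI voxel intensities inside and outside a test object.
import numpy as np

def snr(signal_roi, noise_roi):
    return signal_roi.mean() / noise_roi.std()

rng = np.random.default_rng(2)
signal = rng.normal(1000.0, 5.0, 10000)                     # voxels inside the test sphere
noise_shielded = rng.normal(0.0, 1000.0 / 330.0, 10000)     # background, shield in place
noise_unshielded = rng.normal(0.0, 1000.0 / 250.0, 10000)   # background, no shield
print(f"SNR with shield    ~ {snr(signal, noise_shielded):.0f}")
print(f"SNR without shield ~ {snr(signal, noise_unshielded):.0f}")
```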
Bioengineering, Issue 56, neuroscience, haptic robot, fMRI, MRI, pointing
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, first an estimation of the ideal midline is performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process by physicians and have shown promising results in the evaluation. In the second component, additional features related to ICP, such as texture information and blood amount from the CT scans, together with other recorded features, such as age and injury severity score, are extracted and incorporated to estimate the ICP. Machine learning techniques including feature selection and classification, such as Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step for physicians to make decisions, so as to recommend for or against invasive ICP monitoring.
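The midline-shift readout itself is a simple geometric quantity once the ideal and actual midlines have been located; the sketch below converts a pixel offset to millimeters and applies an assumed screening threshold (the values and threshold are placeholders, not the article's model).

```python
# Midline shift in mm from the column offset between the estimated ideal and
# actual midlines; pixel spacing, columns, and the 5 mm flag are assumptions.
def midline_shift_mm(ideal_col, actual_col, pixel_spacing_mm=0.5):
    return abs(actual_col - ideal_col) * pixel_spacing_mm

shift = midline_shift_mm(ideal_col=256, actual_col=271)
print(f"midline shift = {shift:.1f} mm")
print("flag for elevated-ICP review" if shift > 5.0 else "within typical range")
```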
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Use of a Robot for High-throughput Crystallization of Membrane Proteins in Lipidic Mesophases
Authors: Dianfan Li, Coilín Boland, Kilian Walsh, Martin Caffrey.
Institutions: Trinity College Dublin .
Structure-function studies of membrane proteins greatly benefit from having available high-resolution 3-D structures of the type provided through macromolecular X-ray crystallography (MX). An essential ingredient of MX is a steady supply of ideally diffraction-quality crystals. The in meso or lipidic cubic phase (LCP) method for crystallizing membrane proteins is one of several methods available for this purpose. It makes use of a bicontinuous mesophase in which to grow crystals. As a method, it has had some spectacular successes of late and has attracted much attention with many research groups now interested in using it. One of the challenges associated with the method is that the hosting mesophase is extremely viscous and sticky, reminiscent of a thick toothpaste. Thus, dispensing it manually in a reproducible manner in small volumes into crystallization wells requires skill, patience and a steady hand. A protocol for doing just that was developed in the Membrane Structural & Functional Biology (MS&FB) Group1-3. JoVE video articles describing the method are available1,4. The manual approach for setting up in meso trials has distinct advantages with specialty applications, such as crystal optimization and derivatization. It does, however, suffer from being a low-throughput method. Here, we demonstrate a protocol for performing in meso crystallization trials robotically. A robot offers the advantages of speed, accuracy, precision, miniaturization and being able to work continuously for extended periods under what could be regarded as hostile conditions such as in the dark, in a reducing atmosphere or at low or high temperatures. An in meso robot, when used properly, can greatly improve the productivity of membrane protein structure and function research by facilitating crystallization, which is one of the slow steps in the overall structure determination pipeline. In this video article, we demonstrate the use of three commercially available robots that can dispense the viscous and sticky mesophase integral to in meso crystallogenesis. The first robot was developed in the MS&FB Group5,6. The other two have recently become available and are included here for completeness. An overview of the protocol covered in this article is presented in Figure 1. All manipulations were performed at room temperature (~20 °C) under ambient conditions.
Materials Science, Issue 67, automation, crystallization, glass sandwich plates, high-throughput, in meso, lipidic cubic phase, lipidic mesophases, macromolecular X-ray crystallography, membrane protein, receptor, robot
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Authors: Sergey Rabotyagov, Todd Campbell, Adriana Valcu, Philip Gassman, Manoj Jha, Keith Schilling, Calvin Wolter, Catherine Kling.
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.,5,12,20) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of optimization. Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and utilize sets (populations) of candidate solutions iteratively applying stochastic operators of selection, recombination, and mutation to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed, simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226 and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by the watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for a selection of watershed configurations achieving specified water quality improvement goals and a production of maps of optimized placement of conservation practices.
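One ingredient of this approach, extracting the nondominated (Pareto) frontier from candidate watershed configurations scored on cost and pollution, is easy to sketch; the candidate values below are invented, and the full method couples SPEA2 with SWAT model runs rather than scoring a fixed list.

```python
# Toy Pareto-frontier extraction over two objectives (cost, pollution load),
# both minimized; the candidate configurations are invented numbers.
def pareto_front(solutions):
    """Keep solutions not dominated on (cost, pollution); lower is better for both."""
    front = []
    for s in solutions:
        if not any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions):
            front.append(s)
    return sorted(front)

candidates = [(1.2e6, 940.0), (0.8e6, 1200.0), (1.5e6, 900.0),
              (1.1e6, 1100.0), (0.9e6, 1000.0)]   # (cost $, nitrogen load t/yr)
for cost, load in pareto_front(candidates):
    print(f"cost ${cost:,.0f} -> nitrogen load {load:.0f} t/yr")
```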
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
A Novel Bayesian Change-point Algorithm for Genome-wide Analysis of Diverse ChIPseq Data Types
Authors: Haipeng Xing, Willey Liao, Yifan Mo, Michael Q. Zhang.
Institutions: Stony Brook University, Cold Spring Harbor Laboratory, University of Texas at Dallas.
ChIPseq is a widely used technique for investigating protein-DNA interactions. Read density profiles are generated by next-generation sequencing of protein-bound DNA and aligning the short reads to a reference genome. Enriched regions are revealed as peaks, which often differ dramatically in shape, depending on the target protein1. For example, transcription factors often bind in a site- and sequence-specific manner and tend to produce punctate peaks, while histone modifications are more pervasive and are characterized by broad, diffuse islands of enrichment2. Reliably identifying these regions was the focus of our work. Algorithms for analyzing ChIPseq data have employed various methodologies, from heuristics3-5 to more rigorous statistical models, e.g. Hidden Markov Models (HMMs)6-8. We sought a solution that minimized the necessity for difficult-to-define, ad hoc parameters that often compromise resolution and lessen the intuitive usability of the tool. With respect to HMM-based methods, we aimed to curtail parameter estimation procedures and simple, finite state classifications that are often utilized. Additionally, conventional ChIPseq data analysis involves categorization of the expected read density profiles as either punctate or diffuse followed by subsequent application of the appropriate tool. We further aimed to replace the need for these two distinct models with a single, more versatile model, which can capably address the entire spectrum of data types. To meet these objectives, we first constructed a statistical framework that naturally modeled ChIPseq data structures using a cutting edge advance in HMMs9, which utilizes only explicit formulas, an innovation crucial to its performance advantages. More sophisticated than heuristic models, our HMM accommodates infinite hidden states through a Bayesian model. We applied it to identifying reasonable change points in read density, which further define segments of enrichment. Our analysis revealed how our Bayesian Change Point (BCP) algorithm has reduced computational complexity, evidenced by an abridged run time and memory footprint. The BCP algorithm was successfully applied to both punctate peak and diffuse island identification with robust accuracy and limited user-defined parameters. This illustrated both its versatility and ease of use. Consequently, we believe it can be implemented readily across broad ranges of data types and end users in a manner that is easily compared and contrasted, making it a great tool for ChIPseq data analysis that can aid in collaboration and corroboration between research groups. Here, we demonstrate the application of BCP to existing transcription factor10,11 and epigenetic data12 to illustrate its usefulness.
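For intuition only, the toy below finds a single mean-shift change point in a synthetic read-density track by minimizing within-segment squared error; BCP's Bayesian, infinite-state HMM is far more general, so this is just an illustration of the change-point idea.

```python
# Toy single change-point search (not the BCP algorithm): pick the split that
# minimizes the within-segment sum of squared errors on a synthetic track.
import numpy as np

def best_single_changepoint(x):
    """Return the split index minimizing within-segment sum of squared errors."""
    best_i, best_cost = None, np.inf
    for i in range(1, len(x)):
        left, right = x[:i], x[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

rng = np.random.default_rng(3)
density = np.concatenate([rng.poisson(2, 150), rng.poisson(12, 80)]).astype(float)
print("estimated change point at bin", best_single_changepoint(density))  # ~150
```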
Genetics, Issue 70, Bioinformatics, Genomics, Molecular Biology, Cellular Biology, Immunology, Chromatin immunoprecipitation, ChIP-Seq, histone modifications, segmentation, Bayesian, Hidden Markov Models, epigenetics
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
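The first processing step named above, Gabor filtering, can be sketched directly: the snippet builds a real Gabor kernel with numpy and takes its response on a random patch. The kernel parameters are illustrative and not the values used in the study.

```python
# Real (cosine) Gabor kernel built from scratch; wavelength, orientation, and
# envelope width are illustrative choices, not the study's filter bank.
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Real Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

rng = np.random.default_rng(4)
patch = rng.random((21, 21))
response = float((patch * gabor_kernel(theta=np.pi / 4)).sum())  # response at one pixel
print(f"Gabor response: {response:.3f}")
```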
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
An Organotypic High Throughput System for Characterization of Drug Sensitivity of Primary Multiple Myeloma Cells
Authors: Ariosto Silva, Timothy Jacobson, Mark Meads, Allison Distler, Kenneth Shain.
Institutions: H. Lee Moffitt Cancer Center and Research Institute.
In this work we describe a novel approach that combines ex vivo drug sensitivity assays and digital image analysis to estimate chemosensitivity and heterogeneity of patient-derived multiple myeloma (MM) cells. This approach consists in seeding primary MM cells freshly extracted from bone marrow aspirates into microfluidic chambers implemented in multi-well plates, each consisting of a reconstruction of the bone marrow microenvironment, including extracellular matrix (collagen or basement membrane matrix) and stroma (patient-derived mesenchymal stem cells) or human-derived endothelial cells (HUVECs). The chambers are drugged with different agents and concentrations, and are imaged sequentially for 96 hr through bright field microscopy, in a motorized microscope equipped with a digital camera. Digital image analysis software detects live and dead cells from presence or absence of membrane motion, and generates curves of change in viability as a function of drug concentration and exposure time. We use a computational model to determine the parameters of chemosensitivity of the tumor population to each drug, as well as the number of sub-populations present as a measure of tumor heterogeneity. These patient-tailored models can then be used to simulate therapeutic regimens and estimate clinical response.
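A minimal sketch of the dose-response end of such an analysis is shown below: a Hill-type viability curve is fitted to synthetic live-fraction data by a coarse grid search. The real pipeline fits richer models that also account for exposure time and tumor sub-populations, so this is only illustrative.

```python
# Fit viability = 1 / (1 + (dose/EC50)^h) to synthetic live-cell fractions by a
# coarse grid search; doses and viabilities are invented placeholder data.
import numpy as np

doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])            # uM, placeholder
viability = np.array([0.98, 0.95, 0.88, 0.62, 0.30, 0.12, 0.05])    # fraction alive at 96 hr

def hill(dose, ec50, h):
    return 1.0 / (1.0 + (dose / ec50) ** h)

ec50_grid = np.logspace(-2, 1, 200)
h_grid = np.linspace(0.5, 4.0, 100)
best = min(((np.sum((viability - hill(doses, e, h)) ** 2), e, h)
            for e in ec50_grid for h in h_grid), key=lambda t: t[0])
print(f"EC50 ~ {best[1]:.2f} uM, Hill coefficient ~ {best[2]:.2f}")
```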
Medicine, Issue 101, Multiple myeloma, drug sensitivity, evolution of drug resistance, computational modeling, decision support system, personalized medicine
What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.