Pubmed Article
Multi-TGDR: A Regularization Method for Multi-Class Classification in Microarray Experiments.
PLoS ONE
PUBLISHED: 01-01-2013
As microarray technology has matured and become popular, the selection and use of a small number of relevant genes for accurate classification of samples has arisen as a hot topic in biostatistics and bioinformatics. However, most of the developed algorithms lack the ability to handle multiple classes, arguably a common application. Here, we propose an extension to an existing regularization algorithm, called Threshold Gradient Descent Regularization (TGDR), to specifically tackle multi-class classification of microarray data. When there are several microarray experiments addressing the same or similar objectives, one option is to use a meta-analysis version of TGDR (Meta-TGDR), which considers the classification task as a combination of classifiers with the same structure/model while allowing the parameters to vary across studies. However, the original Meta-TGDR extension did not offer a solution for prediction on independent samples. Here, we propose an explicit method to estimate the overall coefficients of the biomarkers selected by Meta-TGDR. This extension permits broader applicability and allows a comparison between the predictive performance of Meta-TGDR and TGDR using an independent testing set.
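Threshold Gradient Descent Regularization, as its name suggests, updates at each iteration only the coefficients whose gradient components pass a threshold relative to the largest one, which is what performs implicit gene selection. A minimal sketch for the binary logistic case (the function name, step size, and threshold value are illustrative choices, not taken from the paper):

```python
import math

def tgdr_fit(X, y, tau=0.8, step=0.05, iters=200):
    """Threshold Gradient Descent Regularization (sketch, binary case).

    At each iteration only coefficients whose gradient magnitude is
    within a fraction `tau` of the largest gradient component are
    updated; tau close to 1 moves few coefficients per step and so
    performs implicit gene selection, tau = 0 is plain gradient descent.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        # gradient of the average negative log-likelihood (logistic model)
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            pr = 1.0 / (1.0 + math.exp(-sum(b * x for b, x in zip(beta, xi))))
            for j in range(p):
                grad[j] += (pr - yi) * xi[j] / n
        gmax = max(abs(g) for g in grad)
        if gmax == 0.0:
            break
        for j in range(p):
            if abs(grad[j]) >= tau * gmax:   # thresholding step
                beta[j] -= step * grad[j]
    return beta
```

On a toy dataset where only the first feature carries class information, the fitted coefficient vector concentrates on that feature while the noise coefficients remain near zero.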
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Published: 11-03-2011
ABSTRACT
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users on the basis of experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
23 Related JoVE Articles!
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion-direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for the preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify the metrics defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
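The voxelwise FA metric compared above is a standard function of the diffusion tensor's three eigenvalues; a minimal sketch of the textbook formula (not code from the authors' pipeline):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the diffusion tensor's eigenvalues (l1, l2, l3):

        FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
                       / sqrt(l1^2 + l2^2 + l3^2)

    0 = perfectly isotropic diffusion, 1 = diffusion along a single
    axis, so coherent white-matter tracts score high.
    """
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    if den == 0.0:
        return 0.0
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    return math.sqrt(0.5) * num / den
```

For example, eigenvalues of (1.7, 0.3, 0.3) x 10^-3 mm^2/s, typical of a coherent WM tract, give FA near 0.8, while equal eigenvalues give FA of 0.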
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside; University of São Paulo - USP; ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models to be learned efficiently while remaining very robust to over-fitting; and that a general classification framework makes it easy to incorporate an arbitrary number of features. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.
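The feature-fusion idea above can be illustrated with a toy Gaussian naive Bayes model that combines an intrinsic feature (wingbeat frequency from the pseudo-acoustic sensor) with an extrinsic, circadian one (hour of capture). The class labels and feature values below are illustrative, not the paper's data:

```python
import math

class GaussianNB:
    """Toy Gaussian naive Bayes: each class gets a per-feature mean and
    variance; prediction picks the class with the highest log-posterior."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors, self.stats = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            feats = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col)
                feats.append((mu, max(var, 1e-9)))  # floor avoids /0
            self.stats[c] = feats
        return self

    def predict(self, x):
        def log_posterior(c):
            lp = math.log(self.priors[c])
            for v, (mu, var) in zip(x, self.stats[c]):
                lp -= 0.5 * math.log(2 * math.pi * var) + (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.classes, key=log_posterior)
```

Because the model is generative, class priors can also encode extra knowledge, such as how common each species is at a given trap location.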
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
Performing Custom MicroRNA Microarray Experiments
Authors: Xiaoxiao Zhang, Yan Zeng.
Institutions: University of Minnesota.
microRNAs (miRNAs) are a large family of ~22-nucleotide (nt) long RNA molecules that are widely expressed in eukaryotes1. Complex genomes encode at least hundreds of miRNAs, which primarily inhibit the expression of a vast number of target genes post-transcriptionally2,3. miRNAs control a broad range of biological processes1. In addition, altered miRNA expression has been associated with human diseases such as cancers, and miRNAs may serve as biomarkers for diseases and prognosis4,5. It is important, therefore, to understand the expression and functions of miRNAs under many different conditions. Three major approaches have been employed to profile miRNA expression: real-time PCR, microarray, and deep sequencing. The miRNA microarray technique has the advantage of being high-throughput and generally less expensive, and most of the experimental and analysis steps can be carried out in a molecular biology laboratory at most universities, medical schools, and associated hospitals. Here, we describe a method for performing custom miRNA microarray experiments. A miRNA probe set is printed on glass slides to produce miRNA microarrays. RNA is isolated using a method or reagent that preserves small RNA species, and then labeled with a fluorescent dye. As a control, reference DNA oligonucleotides corresponding to a subset of miRNAs are also labeled with a different fluorescent dye. The reference DNA serves to demonstrate the quality of the slide and hybridization, and is also used for data normalization. The RNA and DNA are mixed and hybridized to a microarray slide containing probes for most of the miRNAs in the database. After washing, the slide is scanned to obtain images, and the intensities of the individual spots are quantified. These raw signals are further processed and analyzed as the expression data of the corresponding miRNAs.
Microarray slides can be stripped and regenerated to reduce the cost of microarrays and to enhance the consistency of microarray experiments. The same principles and procedures are applicable to other types of custom microarray experiments.
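The reference-DNA channel described above supports a simple per-spot normalization. The sketch below shows one plausible scheme (ratio to the reference channel followed by median scaling); it is an assumption for illustration, not the exact formula prescribed by the protocol:

```python
def normalize_by_reference(sample, reference):
    """Per-spot normalization against the reference-DNA channel.

    Each probe's sample-channel intensity is divided by the matching
    reference-channel intensity, then rescaled so the median ratio is 1,
    making slides comparable. (A plausible scheme, assumed here; exact
    middle-element median, so best suited to an odd number of probes.)
    """
    ratios = {probe: sample[probe] / reference[probe] for probe in sample}
    med = sorted(ratios.values())[len(ratios) // 2]
    return {probe: r / med for probe, r in ratios.items()}
```

Dividing by the reference channel cancels spot-to-spot printing and hybridization differences, and the median rescaling cancels slide-wide intensity differences between arrays.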
Molecular Biology, Issue 56, Genetics, microRNA, custom microarray, oligonucleotide probes, RNA labeling
Demonstrating a Multi-drug Resistant Mycobacterium tuberculosis Amplification Microarray
Authors: Yvonne Linger, Alexander Kukhtin, Julia Golova, Alexander Perov, Peter Qu, Christopher Knickerbocker, Christopher G. Cooney, Darrell P. Chandler.
Institutions: Akonni Biosystems, Inc..
Simplifying microarray workflow is a necessary first step for creating MDR-TB microarray-based diagnostics that can be routinely used in lower-resource environments. An amplification microarray combines asymmetric PCR amplification, target size selection, target labeling, and microarray hybridization within a single solution and in a single microfluidic chamber. A batch processing method is demonstrated with a 9-plex asymmetric master mix and a low-density gel element microarray for genotyping multi-drug resistant Mycobacterium tuberculosis (MDR-TB). The protocol described here can be completed in 6 hr and provides correct genotyping with at least 1,000 cell equivalents of genomic DNA. Incorporating on-chip wash steps is feasible, which will result in an entirely closed amplicon method and system. The extent of multiplexing with an amplification microarray is ultimately constrained by the number of primer pairs that can be combined into a single master mix while still achieving the desired sensitivity and specificity performance metrics, rather than by the number of probes immobilized on the array. Likewise, the total analysis time can be shortened or lengthened depending on the specific intended use, research question, and desired limits of detection. Nevertheless, the general approach significantly streamlines microarray workflow for the end user by reducing the number of manually intensive and time-consuming processing steps, and provides a simplified biochemical and microfluidic path for translating microarray-based diagnostics into routine clinical practice.
Immunology, Issue 86, MDR-TB, gel element microarray, closed amplicon, drug resistance, rifampin, isoniazid, streptomycin, ethambutol
Simultaneous Long-term Recordings at Two Neuronal Processing Stages in Behaving Honeybees
Authors: Martin Fritz Brill, Maren Reuter, Wolfgang Rössler, Martin Fritz Strube-Bloss.
Institutions: University of Würzburg.
In both mammals and insects, neuronal information is processed in different higher- and lower-order brain centers. These centers are coupled via convergent and divergent anatomical connections, including feed-forward and feedback wiring. Furthermore, information of the same origin is partially sent via parallel pathways to different, and sometimes into the same, brain areas. To understand the evolutionary benefits as well as the computational advantages of these wiring strategies, and especially their temporal dependencies on each other, it is necessary to have simultaneous access to single neurons of different tracts or neuropils in the same preparation at high temporal resolution. Here we concentrate on honeybees by demonstrating unique extracellular long-term access to record multi-unit activity at two subsequent neuropils1, the antennal lobe (AL), the first olfactory processing stage, and the mushroom body (MB), a higher-order integration center involved in learning and memory formation, or at two parallel neuronal tracts2 connecting the AL with the MB. The latter was chosen as an example and will be described in full. In the supporting video, the construction and permanent insertion of flexible multi-channel wire electrodes is demonstrated. Pairwise differential amplification of the micro-wire electrode channels drastically reduces the noise and verifies that the source of the signal is closely related to the position of the electrode tip. The mechanical flexibility of the wire electrodes allows stable invasive long-term recordings over many hours up to days, which is a clear advantage compared to conventional extra- and intracellular in vivo recording techniques.
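Pairwise differential amplification can be sketched as simple channel subtraction: noise common to both wires of a pair cancels, while a deflection local to one electrode tip survives. This is illustrative code, not the authors' acquisition software:

```python
def differential_pairs(channels):
    """Subtract neighboring wire-electrode channels pairwise.

    Signals picked up by both wires of a pair (far-field noise) cancel;
    signals local to one electrode tip remain, so surviving deflections
    are closely tied to the tip position.
    """
    return [[a - b for a, b in zip(even, odd)]
            for even, odd in zip(channels[::2], channels[1::2])]
```

In hardware this subtraction is done by the differential amplifier itself; the sketch only shows why the configuration rejects common-mode noise.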
Neuroscience, Issue 89, honeybee brain, olfaction, extracellular long term recordings, double recordings, differential wire electrodes, single unit, multi-unit recordings
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
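The titer determination behind the pour- and spread-plating objectives is a one-line calculation; a sketch (the function name is ours):

```python
def cfu_per_ml(colonies, dilution, volume_ml):
    """Viable count from a countable plate (ideally 30-300 colonies):

        CFU/mL = colonies / (dilution factor x volume plated in mL)
    """
    return colonies / (dilution * volume_ml)
```

For example, 150 colonies on a plate spread with 0.1 mL of a 10^-6 dilution corresponds to 150 / (10^-6 x 0.1) = 1.5 x 10^9 CFU/mL of the original culture.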
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Chemically-blocked Antibody Microarray for Multiplexed High-throughput Profiling of Specific Protein Glycosylation in Complex Samples
Authors: Chen Lu, Joshua L. Wonsidler, Jianwei Li, Yanming Du, Timothy Block, Brian Haab, Songming Chen.
Institutions: Institute for Hepatitis and Virus Research, Thomas Jefferson University, Drexel University College of Medicine, Van Andel Research Institute, Serome Biosciences Inc.
In this study, we describe an effective protocol for use in a multiplexed high-throughput antibody microarray with glycan-binding protein detection that allows for the glycosylation profiling of specific proteins. Glycosylation of proteins is the most prevalent post-translational modification found on proteins, and leads to diverse modifications of their physical, chemical, and biological properties. Because the glycosylation machinery is particularly susceptible to disease progression and malignant transformation, aberrant glycosylation has been recognized as a source of early-detection biomarkers for cancer and other diseases. However, current methods to study protein glycosylation are typically too complicated or expensive for use in most normal laboratory or clinical settings, and a more practical method to study protein glycosylation is needed. The new protocol described in this study makes use of a chemically blocked antibody microarray with glycan-binding protein (GBP) detection and significantly reduces the time, cost, and lab equipment requirements needed to study protein glycosylation. In this method, multiple glycoprotein-specific antibodies are printed directly onto the microarray slides and the N-glycans on the antibodies are blocked. The blocked, immobilized glycoprotein-specific antibodies are able to capture and isolate glycoproteins from a complex sample that is applied directly onto the microarray slides. Glycan detection then can be performed by the application of biotinylated lectins and other GBPs to the microarray slide, and binding levels can be determined using DyLight 549-streptavidin. Through the use of an antibody panel and probing with multiple biotinylated lectins, this method allows an effective glycosylation profile of the different proteins found in a given human or animal sample to be developed.
Introduction
Glycosylation of proteins, the most ubiquitous post-translational modification on proteins, modifies the physical, chemical, and biological properties of a protein and plays a fundamental role in various biological processes1-6. Because the glycosylation machinery is particularly susceptible to disease progression and malignant transformation, aberrant glycosylation has been recognized as a source of early-detection biomarkers for cancer and other diseases7-12. In fact, most current cancer biomarkers, such as the L3 fraction of α-1 fetoprotein (AFP) for hepatocellular carcinoma13-15 and CA19-9 for pancreatic cancer16,17, are aberrant glycan moieties on glycoproteins. However, methods to study protein glycosylation have been complicated and not suitable for routine laboratory and clinical settings. Chen et al. recently invented a chemically blocked antibody microarray with a glycan-binding protein (GBP) detection method for high-throughput, multiplexed glycosylation profiling of native glycoproteins in a complex sample18. In this affinity-based microarray method, multiple immobilized glycoprotein-specific antibodies capture and isolate glycoproteins from the complex mixture directly on the microarray slide, and the glycans on each individual captured protein are measured by GBPs. Because all normal antibodies contain N-glycans that can be recognized by most GBPs, the critical step of this method is to chemically block the glycans on the antibodies from binding to GBPs. In the procedure, the cis-diol groups of the glycans on the antibodies are first oxidized to aldehyde groups using NaIO4 in sodium acetate buffer, protected from light. The aldehyde groups are then conjugated to the hydrazide group of a cross-linker, 4-(4-N-maleimidophenyl)butyric acid hydrazide HCl (MPBH), followed by the conjugation of a dipeptide, Cys-Gly, to the maleimide group of the MPBH.
Thus, the cis-diol groups on the glycans of the antibodies are converted into bulky non-hydroxyl groups, which hinder the binding of lectins and other GBPs to the capture antibodies. This blocking procedure makes the GBPs and lectins bind only to the glycans of the captured proteins. After this chemical blocking, serum samples are incubated with the antibody microarray, followed by glycan detection using different biotinylated lectins and GBPs, visualized with Cy3-streptavidin. The parallel use of an antibody panel and multiple lectin probes provides discrete glycosylation profiles of multiple proteins in a given sample18-20. This method has been used successfully in multiple different labs1,7,13,19-31. However, the limited stability of MPBH and Cys-Gly, together with the complicated and lengthy procedure, affected the reproducibility, effectiveness, and efficiency of the method. In this new protocol, we replaced both MPBH and Cys-Gly with a single, much more stable reagent, glutamic acid hydrazide (Glu-hydrazide), which significantly improved the reproducibility of the method and simplified and shortened the whole procedure so that it can be completed within one working day. We describe the detailed procedure, which can be readily adopted by normal labs for routine protein glycosylation studies, along with the techniques necessary to obtain reproducible and repeatable results.
Molecular Biology, Issue 63, Glycoproteins, glycan-binding protein, specific protein glycosylation, multiplexed high-throughput glycan blocked antibody microarray
A Manual Small Molecule Screen Approaching High-throughput Using Zebrafish Embryos
Authors: Shahram Jevin Poureetezadi, Eric K. Donahue, Rebecca A. Wingert.
Institutions: University of Notre Dame.
Zebrafish have become a widely used model organism to investigate the mechanisms that underlie developmental biology and to study human disease pathology due to their considerable degree of genetic conservation with humans. Chemical genetics entails testing the effect that small molecules have on a biological process and is becoming a popular translational research method to identify therapeutic compounds. Zebrafish are specifically appealing to use for chemical genetics because of their ability to produce large clutches of transparent embryos, which are externally fertilized. Furthermore, zebrafish embryos can be easily drug treated by the simple addition of a compound to the embryo media. Using whole-mount in situ hybridization (WISH), mRNA expression can be clearly visualized within zebrafish embryos. Together, using chemical genetics and WISH, the zebrafish becomes a potent whole organism context in which to determine the cellular and physiological effects of small molecules. Innovative advances have been made in technologies that utilize machine-based screening procedures, however for many labs such options are not accessible or remain cost-prohibitive. The protocol described here explains how to execute a manual high-throughput chemical genetic screen that requires basic resources and can be accomplished by a single individual or small team in an efficient period of time. Thus, this protocol provides a feasible strategy that can be implemented by research groups to perform chemical genetics in zebrafish, which can be useful for gaining fundamental insights into developmental processes, disease mechanisms, and to identify novel compounds and signaling pathways that have medically relevant applications.
Developmental Biology, Issue 93, zebrafish, chemical genetics, chemical screen, in vivo small molecule screen, drug discovery, whole mount in situ hybridization (WISH), high-throughput screening (HTS), high-content screening (HCS)
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
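The minimum-norm estimation listed among the keywords below inverts the leadfield of the head model in the classic Tikhonov-regularized form x = L^T (L L^T + lambda*I)^(-1) y; a numerical sketch with a toy leadfield (not a pediatric head model):

```python
import numpy as np

def minimum_norm_estimate(leadfield, sensors, lam=1e-2):
    """L2 minimum-norm inverse: x = L^T (L L^T + lam*I)^(-1) y.

    `leadfield`: channels x sources gain matrix from the head model;
    `sensors`: one time-slice of EEG channel data; `lam` regularizes
    the ill-posed inversion against measurement noise.
    """
    n_chan = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_chan)
    return leadfield.T @ np.linalg.solve(gram, sensors)
```

With many more sources than channels the problem is underdetermined; this estimator picks, among all source configurations that reproduce the sensor data, the one with the smallest L2 norm, which is why an accurate (age-appropriate) leadfield matters so much for localization.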
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
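The orientation analysis described above rests on a bank of oriented Gabor filters; a sketch of one real-valued kernel (the parameter values are illustrative, not those of the published filter bank):

```python
import numpy as np

def gabor_kernel(size, sigma, wavelength, theta):
    """Real-valued Gabor kernel: a Gaussian-windowed cosine grating
    oriented at angle `theta` (radians). Filtering an image with a bank
    of such kernels at many angles gives the local orientation field
    that the phase-portrait analysis builds on.
    """
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    g = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()   # zero mean: no response to flat regions
```

The kernel whose orientation matches the local tissue pattern gives the strongest response, so taking the argmax over the filter bank at each pixel yields the orientation map fed into the phase-portrait step.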
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. 
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
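As a toy illustration of the binarization step that precedes skeletonization and network quantification, one can threshold the stained image at mean + k·std. This stand-in is ours for illustration; the published workflow relies on dedicated open-access image-processing tools:

```python
import numpy as np

def binarize_tats(img, k=1.0):
    """Classify pixels brighter than mean + k*std as TATS membrane.

    A simple global threshold; `k` trades sensitivity against noise.
    """
    return img > img.mean() + k * img.std()

def tubule_density(mask):
    """Fraction of cell-interior pixels classified as membrane."""
    return float(mask.mean())
```

Metrics such as tubule density computed on the binarized (and later skeletonized) mask are what allow quantitative comparison between atrial and ventricular myocytes, or between healthy and diseased cells.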
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Engineering Platform and Experimental Protocol for Design and Evaluation of a Neurally-controlled Powered Transfemoral Prosthesis
Authors: Fan Zhang, Ming Liu, Stephen Harper, Michael Lee, He Huang.
Institutions: North Carolina State University & University of North Carolina at Chapel Hill, University of North Carolina School of Medicine, Atlantic Prosthetics & Orthotics, LLC.
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion, developed in our previous study, has demonstrated great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control in patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller while performing activities such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform, experimental setup, and protocol could aid the future development and application of neurally-controlled powered artificial legs.
Biomedical Engineering, Issue 89, neural control, powered transfemoral prosthesis, electromyography (EMG), neural-machine interface, experimental setup and protocol
51059
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. FMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions of interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
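The core computation behind informational connectivity can be pictured with a minimal sketch: given a per-timepoint "discriminability" value for each region (e.g., how far each timepoint's pattern sits from a classifier boundary), IC asks which regions' discriminability fluctuates together. The time courses below are invented for illustration; this is not the authors' implementation.

```python
# Sketch of the IC idea: correlate MVP-discriminability time courses
# between a seed region and candidate regions. Values are invented.

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical discriminability time courses (one value per timepoint)
seed = [0.9, 0.2, 0.8, 0.1, 0.7, 0.3]
region_a = [0.8, 0.3, 0.9, 0.2, 0.6, 0.2]   # tracks the seed
region_b = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]   # anti-phase with the seed

# Regions whose discriminability rises and falls with the seed are
# "informationally connected" to it
print(f"seed vs A: r = {pearson(seed, region_a):+.2f}")
print(f"seed vs B: r = {pearson(seed, region_b):+.2f}")
```

Note that FC would correlate the raw signals themselves; IC instead correlates how well each region's pattern discriminates the conditions at each timepoint.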
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
51226
An Analytical Tool-box for Comprehensive Biochemical, Structural and Transcriptome Evaluation of Oral Biofilms Mediated by Mutans Streptococci
Authors: Marlise I. Klein, Jin Xiao, Arne Heydorn, Hyun Koo.
Institutions: University of Rochester Medical Center, Sichuan University, Glostrup Hospital, Glostrup, Denmark.
Biofilms are highly dynamic, organized and structured communities of microbial cells enmeshed in an extracellular matrix of variable density and composition 1, 2. In general, biofilms develop from initial microbial attachment on a surface followed by formation of cell clusters (or microcolonies) and further development and stabilization of the microcolonies, which occur in a complex extracellular matrix. The majority of biofilm matrices harbor exopolysaccharides (EPS), and dental biofilms are no exception, especially those associated with caries disease, which are mostly mediated by mutans streptococci 3. The EPS are synthesized by microorganisms (S. mutans, a key contributor) by means of extracellular enzymes, such as glucosyltransferases, using sucrose primarily as the substrate 3. Studies of biofilms formed on tooth surfaces are particularly challenging owing to their constant exposure to environmental challenges associated with complex diet-host-microbial interactions occurring in the oral cavity. Better understanding of the dynamic changes of the structural organization and composition of the matrix, physiology and transcriptome/proteome profile of biofilm-cells in response to these complex interactions would further advance the current knowledge of how oral biofilms modulate pathogenicity. Therefore, we have developed an analytical tool-box to facilitate biofilm analysis at structural, biochemical and molecular levels by combining commonly available and novel techniques with custom-made software for data analysis. Standard analytical (colorimetric assays, RT-qPCR and microarrays) and novel fluorescence techniques (for simultaneous labeling of bacteria and EPS) were integrated with specific software for data analysis to address the complex nature of oral biofilm research. The tool-box is comprised of 4 distinct but interconnected steps (Figure 1): 1) Bioassays, 2) Raw Data Input, 3) Data Processing, and 4) Data Analysis.
We used our in vitro biofilm model and specific experimental conditions to demonstrate the usefulness and flexibility of the tool-box. The biofilm model is simple, reproducible and multiple replicates of a single experiment can be done simultaneously 4, 5. Moreover, it allows temporal evaluation, inclusion of various microbial species 5 and assessment of the effects of distinct experimental conditions (e.g. treatments 6; comparison of knockout mutants vs. parental strain 5; carbohydrates availability 7). Here, we describe two specific components of the tool-box, including (i) new software for microarray data mining/organization (MDV) and fluorescence imaging analysis (DUOSTAT), and (ii) in situ EPS-labeling. We also provide an experimental case showing how the tool-box can assist with biofilms analysis, data organization, integration and interpretation.
Microbiology, Issue 47, Extracellular matrix, polysaccharides, biofilm, mutans streptococci, glucosyltransferases, confocal fluorescence, microarray
2512
Colorectal Cancer Cell Surface Protein Profiling Using an Antibody Microarray and Fluorescence Multiplexing
Authors: Jerry Zhou, Larissa Belov, Michael J. Solomon, Charles Chan, Stephen J. Clarke, Richard I. Christopherson.
Institutions: University of Sydney, Royal Prince Alfred Hospital, Department of Anatomical Pathology, Concord Repatriation General Hospital.
The current prognosis and classification of CRC rely on staging systems that integrate histopathologic and clinical findings. However, in the majority of CRC cases, cell dysfunction is the result of numerous mutations that modify protein expression and post-translational modification1. A number of cell surface antigens, including cluster of differentiation (CD) antigens, have been identified as potential prognostic or metastatic biomarkers in CRC. These antigens make ideal biomarkers as their expression often changes with tumour progression or interactions with other cell types, such as tumour-infiltrating lymphocytes (TILs) and tumour-associated macrophages (TAMs). The use of immunohistochemistry (IHC) for cancer sub-classification and prognostication is well established for some tumour types2,3. However, no single ‘marker’ has shown prognostic significance greater than clinico-pathological staging or gained wide acceptance for use in routine pathology reporting of all CRC cases. A more recent approach to prognostic stratification of disease phenotypes relies on surface protein profiles using multiple 'markers'. While expression profiling of tumours using proteomic techniques such as iTRAQ is a powerful tool for the discovery of biomarkers4, it is not optimal for routine use in diagnostic laboratories and cannot distinguish different cell types in a mixed population. In addition, large amounts of tumour tissue are required for the profiling of purified plasma membrane glycoproteins by these methods. In this video we describe a simple method for surface proteome profiling of viable cells from disaggregated CRC samples using a DotScan CRC antibody microarray. The 122-antibody microarray consists of a standard 82-antibody region recognizing a range of lineage-specific leukocyte markers, adhesion molecules, receptors and markers of inflammation and immune response5, together with a satellite region for detection of 40 potentially prognostic markers for CRC.
Cells are captured only on antibodies for which they express the corresponding antigen. The cell density per dot, determined by optical scanning, reflects the proportion of cells expressing that antigen, the level of expression of the antigen and the affinity of the antibody6. For CRC tissue or normal intestinal mucosa, optical scans reflect the immunophenotype of mixed populations of cells. Fluorescence multiplexing can then be used to profile selected sub-populations of cells of interest captured on the array. For example, Alexa 647-anti-epithelial cell adhesion molecule (EpCAM; CD326), a pan-epithelial differentiation antigen, was used to detect CRC cells and also epithelial cells of normal intestinal mucosa, while Phycoerythrin-anti-CD3 was used to detect infiltrating T-cells7. The DotScan CRC microarray should be the prototype for a diagnostic alternative to the anatomically-based CRC staging system.
Immunology, Issue 55, colorectal cancer, leukocytes, antibody microarray, multiplexing, fluorescence, CD antigens
3322
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
4375
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
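The combinatorial setup that a DoE approach manages can be pictured with a small sketch that enumerates a full factorial design over a few factors. The factor names and levels below are invented stand-ins, not the factors used in the study, and the "half fraction" is a naive placeholder for what DoE software would select optimally.

```python
# Illustrative full factorial design over hypothetical factors.
from itertools import product

factors = {
    "promoter": ["35S", "nos"],       # regulatory element in the construct
    "leaf_age_d": [28, 35, 42],       # plant growth/development parameter
    "incubation_C": [22, 25],         # incubation condition during expression
}

# Full factorial: every combination of factor levels (2 * 3 * 2 = 12 runs)
names = list(factors)
runs = [dict(zip(names, combo)) for combo in product(*factors.values())]
print(f"{len(runs)} runs in the full factorial design")

# DoE software would pick an optimal subset (and augment it step-wise);
# here we simply take every other run as a stand-in for a half fraction
fraction = runs[::2]
print(f"{len(fraction)} runs in the illustrative half fraction")
```

The point of the sketch is the scaling: each added factor multiplies the number of runs, which is why knowledge-based parameter selection and splitting the problem into smaller modules, as described above, keep the experiment tractable.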
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
51216
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
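A triage of this kind can be pictured as a simple decision rule mapping data-set characteristics to one of the four segmentation approaches named above. The characteristics tested and the thresholds used below are invented for illustration; they are not the authors' actual triage criteria.

```python
# Hypothetical triage rule: map qualitative data-set characteristics
# to one of the four segmentation approaches described above.

def triage(snr, has_characteristic_shape, feature_fraction):
    """snr: 'low' or 'high' signal-to-noise ratio.
    has_characteristic_shape: features have easily identifiable shapes.
    feature_fraction: rough fraction of the volume the features occupy."""
    if snr == "high" and has_characteristic_shape:
        # Clean data with recognizable shapes can drive automation
        return "automated custom algorithm + quantitative analysis"
    if snr == "high":
        return "semi-automated segmentation + surface rendering"
    if feature_fraction < 0.05:
        # Sparse features in noisy data: trace them by hand
        return "manual tracing + surface rendering"
    return "fully manual model building + visualization"

print(triage("high", True, 0.10))
print(triage("low", False, 0.01))
```

Real triage would also weigh crispness, crowdedness, and staining heterogeneity, as the abstract notes, and more than one approach may succeed for a given data set.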
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
51673
A Guided Materials Screening Approach for Developing Quantitative Sol-gel Derived Protein Microarrays
Authors: Blake-Joseph Helka, John D. Brennan.
Institutions: McMaster University .
Microarrays have found use in the development of high-throughput assays for new materials and discovery of small-molecule drug leads. Herein we describe a guided material screening approach to identify sol-gel based materials that are suitable for producing three-dimensional protein microarrays. The approach first identifies materials that can be printed as microarrays, narrows down the number of materials by identifying those that are compatible with a given enzyme assay, and then homes in on optimal materials based on retention of maximum enzyme activity. This approach is applied to develop microarrays suitable for two different enzyme assays, one using acetylcholinesterase and the other using a set of four key kinases involved in cancer. In each case, it was possible to produce microarrays that could be used for quantitative small-molecule screening assays and production of dose-dependent inhibitor response curves. Importantly, the ability to screen many materials produced information on the types of materials that best suited both microarray production and retention of enzyme activity. The materials data provide insight into basic material requirements necessary for tailoring optimal, high-density sol-gel derived microarrays.
Chemistry, Issue 78, Biochemistry, Chemical Engineering, Molecular Biology, Genetics, Bioengineering, Biomedical Engineering, Chemical Biology, Biocompatible Materials, Siloxanes, Enzymes, Immobilized, chemical analysis techniques, chemistry (general), materials (general), spectroscopic analysis (chemistry), polymer matrix composites, testing of materials (composite materials), Sol-gel, microarray, high-throughput screening, acetylcholinesterase, kinase, drug discovery, assay
50689
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes, or FASTA sequences.
These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
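The three motif classes named above (non-degenerate, degenerate, and bipartite) can be illustrated with a small sketch that translates IUPAC-style motif strings into regular expressions and scans a sequence for occurrences. This illustrates the motif notation only; it is not SCOPE's own matching or scoring code, and the test sequence is invented.

```python
import re

# IUPAC nucleotide ambiguity codes used to write degenerate motifs;
# lowercase 'n' marks the unconstrained spacer of a bipartite motif
IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "[AG]", "Y": "[CT]", "S": "[GC]", "W": "[AT]",
    "K": "[GT]", "M": "[AC]", "N": "[ACGT]", "n": "[ACGT]",
}

def motif_to_regex(motif):
    """Translate an IUPAC motif string into a compiled regex."""
    return re.compile("".join(IUPAC[c] for c in motif))

def find_occurrences(motif, sequence):
    """Return 0-based start positions of each motif match."""
    return [m.start() for m in motif_to_regex(motif).finditer(sequence)]

seq = "TTACCGGTAAACCGATAGCGATGGTACCTTTTTTTTGGTAA"
print(find_occurrences("ACCGGT", seq))          # non-degenerate (BEAM-style)
print(find_occurrences("ASCGWT", seq))          # degenerate (PRISM-style)
print(find_occurrences("ACCnnnnnnnnGGT", seq))  # bipartite (SPACER-style)
```

Note how the degenerate motif matches sites the exact string would miss, and how the bipartite motif constrains only its two flanks, which is why each motif class calls for a differently optimized search algorithm.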
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
2703
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA
3307
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Authors: Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian.
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identifying the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. The evaluation of the prediction shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
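The two-stage structure of such a system can be pictured with a toy sketch: a geometric midline-shift estimate feeding, together with other features, a linear decision rule. All feature names, weights, and thresholds below are invented stand-ins for the trained SVM and are not the authors' model.

```python
# Illustrative sketch of the two-stage pipeline: midline shift from
# ideal vs. actual midline, then a linear pre-screening rule.

def estimate_midline_shift(ideal_midline_x, ventricle_xs, pixel_mm=0.5):
    """Midline shift (mm): distance between the ideal midline inferred
    from skull symmetry and the actual midline traced through the
    segmented ventricles (here, simply their mean x-coordinate)."""
    actual_midline_x = sum(ventricle_xs) / len(ventricle_xs)
    return abs(actual_midline_x - ideal_midline_x) * pixel_mm

def prescreen_icp(features, weights, threshold=0.0):
    """Linear decision rule standing in for the trained SVM:
    a positive score flags the case for elevated-ICP review."""
    score = sum(w * f for w, f in zip(weights, features))
    return "elevated" if score > threshold else "normal"

# Toy case: ideal midline at x = 256 px, ventricle landmarks shifted right
shift_mm = estimate_midline_shift(256.0, [262.0, 264.0, 266.0])
print(f"midline shift: {shift_mm:.1f} mm")

# Hypothetical feature vector: [shift_mm, blood_volume_ml, age_decades]
decision = prescreen_icp([shift_mm, 12.0, 4.5], weights=[0.8, 0.3, -0.1])
print("ICP pre-screen:", decision)
```

A real SVM learns the weights and decision boundary from labeled cases, and the actual system also uses texture features and shape matching, but the flow from image-derived measurements to a screening decision is the same.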
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
3871
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.