The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to the voxel-by-voxel covariance data of steady-state multimodality images, the algorithm reduces an entire group image set to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree, represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e., pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e., composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16.
We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease.
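As a schematic illustration only (not the authors' in-house software), the core SSM preprocessing and decomposition steps described above (log transformation, removal of subject and group means, PCA, and per-subject pattern scores) can be sketched with NumPy on synthetic data. All array sizes and the random data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical group image data: 20 subjects x 500 voxels of positive values
data = rng.lognormal(mean=1.0, sigma=0.3, size=(20, 500))

# SSM preprocessing: log transform, then remove each subject's mean
# (global scaling effect) and each voxel's group mean
logged = np.log(data)
centered = logged - logged.mean(axis=1, keepdims=True)   # subject means
srp = centered - centered.mean(axis=0, keepdims=True)    # voxel (group) means

# PCA via SVD: rows of vt are candidate covariance patterns (GIS),
# and u * s gives each subject's score on each pattern
u, s, vt = np.linalg.svd(srp, full_matrices=False)
scores = u * s
explained = s**2 / np.sum(s**2)   # variance accounted for per component

print(scores.shape)   # (20, 20): one score per subject per component
```

In the full method, a few leading components would then be combined by logistic regression on these subject scores to form a single disease-related pattern.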
Generation of Comprehensive Thoracic Oncology Database - Tool for Translational Research
Institutions: University of Chicago, Northshore University Health Systems.
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
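The relational linking and querying described above can be sketched with Python's built-in sqlite3 module in place of Access; all table names, columns, and values here are hypothetical illustrations, not the project's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical clinical and proteomic tables linked by patient_id,
# mirroring the "relationships" idea described in the text
cur.execute("CREATE TABLE clinical (patient_id INTEGER PRIMARY KEY, "
            "diagnosis TEXT, stage TEXT)")
cur.execute("CREATE TABLE proteomic (sample_id INTEGER PRIMARY KEY, "
            "patient_id INTEGER REFERENCES clinical(patient_id), "
            "marker TEXT, level REAL)")

cur.executemany("INSERT INTO clinical VALUES (?, ?, ?)",
                [(1, "adenocarcinoma", "II"), (2, "squamous", "III")])
cur.executemany("INSERT INTO proteomic VALUES (?, ?, ?, ?)",
                [(10, 1, "EGFR", 2.4), (11, 2, "EGFR", 0.9)])

# A query joining clinical and laboratory data, ready for export
rows = cur.execute("""
    SELECT c.patient_id, c.diagnosis, p.marker, p.level
    FROM clinical c JOIN proteomic p ON p.patient_id = c.patient_id
    ORDER BY c.patient_id
""").fetchall()
print(rows)   # [(1, 'adenocarcinoma', 'EGFR', 2.4), (2, 'squamous', 'EGFR', 0.9)]
```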
Medicine, Issue 47, Database, Thoracic oncology, Bioinformatics, Biorepository, Microsoft Access, Proteomics, Genomics
EEG Mu Rhythm in Typical and Atypical Development
Institutions: University of Washington.
Electroencephalography (EEG) is an effective, efficient, and noninvasive method of assessing and recording brain activity. Given its excellent temporal resolution, EEG can be used to examine the neural response related to specific behaviors, states, or external stimuli. An example of this utility is the assessment of the mirror neuron system (MNS) in humans through examination of the EEG mu rhythm. The EEG mu rhythm, oscillatory activity in the 8-12 Hz frequency range recorded from centrally located electrodes, is suppressed when an individual executes, or simply observes, goal-directed actions. As such, it has been proposed to reflect activity of the MNS. It has been theorized that dysfunction in the MNS plays a contributing role in the social deficits of autism spectrum disorder (ASD). The MNS can thus be noninvasively examined in clinical populations by using EEG mu rhythm attenuation as an index of its activity. The described protocol provides an avenue to examine social cognitive functions theoretically linked to the MNS in individuals with typical and atypical development, such as ASD.
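A conventional way to quantify mu attenuation is the log ratio of 8-12 Hz power during action observation to power at rest. The sketch below illustrates this on synthetic single-channel data with NumPy; the sampling rate, durations, and signal model are hypothetical, and real analyses would use artifact-cleaned, multi-trial recordings.

```python
import numpy as np

fs = 250  # Hz, hypothetical sampling rate
rng = np.random.default_rng(1)
t = np.arange(fs * 10) / fs  # 10 s per condition

# Synthetic central-electrode EEG: a 10 Hz mu oscillation over noise,
# attenuated during action observation relative to rest
rest = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
observe = 0.3 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

def mu_power(x):
    """Mean spectral power in the 8-12 Hz mu band (simple periodogram)."""
    f = np.fft.rfftfreq(x.size, 1 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

# Suppression index: log ratio of condition to baseline mu power;
# negative values indicate mu suppression
index = np.log(mu_power(observe) / mu_power(rest))
print(index < 0)   # True for this synthetic example
```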
Medicine, Issue 86, Electroencephalography (EEG), mu rhythm, imitation, autism spectrum disorder, social cognition, mirror neuron system
Hydrogel Nanoparticle Harvesting of Plasma or Urine for Detecting Low Abundance Proteins
Institutions: George Mason University, Ceres Nanosciences.
Novel biomarker discovery plays a crucial role in providing more sensitive and specific disease detection. Unfortunately, many low-abundance biomarkers that exist in biological fluids cannot be easily detected with mass spectrometry or immunoassays because they are present in very low concentrations, are labile, and are often masked by high-abundance proteins such as albumin or immunoglobulin. Bait-containing poly(N-isopropylacrylamide) (NIPAm)-based nanoparticles are able to overcome these physiological barriers. In one step, they capture, concentrate, and preserve biomarkers from body fluids. Low-molecular-weight analytes enter the core of the nanoparticle and are captured by different organic chemical dyes, which act as high-affinity protein baits. The nanoparticles are able to concentrate the proteins of interest by several orders of magnitude. This concentration factor is sufficient to raise protein levels to within the detection limits of current mass spectrometers, western blotting, and immunoassays. Nanoparticles can be incubated with a plethora of biological fluids, greatly enriching the concentration of low-molecular-weight proteins and peptides while excluding albumin and other high-molecular-weight proteins. Our data show that a 10,000-fold amplification in the concentration of a particular analyte can be achieved, enabling mass spectrometry and immunoassays to detect previously undetectable biomarkers.
Bioengineering, Issue 90, biomarker, hydrogel, low abundance, mass spectrometry, nanoparticle, plasma, protein, urine
A Sensitive and Specific Quantitation Method for Determination of Serum Cardiac Myosin Binding Protein-C by Electrochemiluminescence Immunoassay
Institutions: Loyola University Chicago.
Biomarkers are becoming increasingly more important in clinical decision-making, as well as basic science. Diagnosing myocardial infarction (MI) is largely driven by detecting cardiac-specific proteins in patients' serum or plasma as an indicator of myocardial injury. Having recently shown that cardiac myosin binding protein-C (cMyBP-C) is detectable in the serum after MI, we have proposed it as a potential biomarker for MI. Biomarkers are typically detected by traditional sandwich enzyme-linked immunosorbent assays. However, this technique requires a large sample volume, has a small dynamic range, and can measure only one protein at a time.
Here we show a multiplex immunoassay in which three cardiac proteins can be measured simultaneously with high sensitivity. Measuring cMyBP-C in uniplex or together with creatine kinase MB and cardiac troponin I showed comparable sensitivity. This technique uses the Meso Scale Discovery (MSD) method of multiplexing in a 96-well plate combined with electrochemiluminescence for detection. While only small sample volumes are required, high sensitivity and a large dynamic range are achieved. Using this technique, we measured cMyBP-C, creatine kinase MB, and cardiac troponin I levels in serum samples from 16 subjects with MI and compared the results with 16 control subjects. We were able to detect all three markers in these samples and found all three biomarkers to be increased after MI. This technique is, therefore, suitable for the sensitive detection of cardiac biomarkers in serum samples.
Molecular Biology, Issue 78, Cellular Biology, Biochemistry, Genetics, Biomedical Engineering, Medicine, Cardiology, Heart Diseases, Myocardial Ischemia, Myocardial Infarction, Cardiovascular Diseases, cardiovascular disease, immunoassay, cardiac myosin binding protein-C, cardiac troponin I, creatine kinase MB, electrochemiluminescence, multiplex biomarkers, ELISA, assay
Extracellularly Identifying Motor Neurons for a Muscle Motor Pool in Aplysia californica
Institutions: Case Western Reserve University.
In animals with large identified neurons (e.g. mollusks), analysis of motor pools is done using intracellular techniques1,2,3,4. Recently, we developed a technique to extracellularly stimulate and record individual neurons in Aplysia californica5. We now describe a protocol for using this technique to uniquely identify and characterize motor neurons within a motor pool.
This extracellular technique has advantages. First, extracellular electrodes can stimulate and record neurons through the sheath5, so it does not need to be removed. Thus, neurons will be healthier in extracellular experiments than in intracellular ones. Second, if ganglia are rotated by appropriate pinning of the sheath, extracellular electrodes can access neurons on both sides of the ganglion, which makes it easier and more efficient to identify multiple neurons in the same preparation. Third, extracellular electrodes do not need to penetrate cells, and thus can be easily moved back and forth among neurons, causing less damage to them. This is especially useful when one tries to record multiple neurons during repeating motor patterns that may only persist for minutes. Fourth, extracellular electrodes are more flexible than intracellular ones during muscle movements. Intracellular electrodes may pull out and damage neurons during muscle contractions. In contrast, since extracellular electrodes are gently pressed onto the sheath above neurons, they usually stay above the same neuron during muscle contractions, and thus can be used in more intact preparations.
To uniquely identify motor neurons for a motor pool (in particular, the I1/I3 muscle in Aplysia) using extracellular electrodes, one can use features that do not require intracellular measurements as criteria: soma size and location, axonal projection, and muscle innervation4,6,7. For the particular motor pool used to illustrate the technique, we recorded from buccal nerves 2 and 3 to measure axonal projections, and measured the contraction forces of the I1/I3 muscle to determine the pattern of muscle innervation for the individual motor neurons.
We demonstrate the complete process of first identifying motor neurons using muscle innervation, then characterizing their timing during motor patterns, and finally creating a simplified diagnostic method for rapid identification. The simplified and more rapid diagnostic method is superior for more intact preparations, e.g. in the suspended buccal mass preparation8 or in vivo9. This process can also be applied in other motor pools10,11,12 or in other animal systems2,3,13,14.
Neuroscience, Issue 73, Physiology, Biomedical Engineering, Anatomy, Behavior, Neurobiology, Animal, Neurosciences, Neurophysiology, Electrophysiology, Aplysia, Aplysia californica, California sea slug, invertebrate, feeding, buccal mass, ganglia, motor neurons, neurons, extracellular stimulation and recordings, extracellular electrodes, animal model
Dried Blood Spot Collection of Health Biomarkers to Maximize Participation in Population Studies
Institutions: Harvard School of Public Health, Brigham and Women's Hospital, Harvard Medical School, Pennsylvania State University.
Biomarkers are directly measured biological indicators of disease, health, exposures, or other biological information. In population and social sciences, biomarkers need to be easy to obtain, transport, and analyze. Dried Blood Spots meet this need and can be collected in the field with high response rates. These elements are particularly important in longitudinal study designs, including interventions, where attrition is critical to avoid and high response rates improve the interpretation of results. Dried Blood Spot sample collection is simple, quick, relatively painless, less invasive than venipuncture, and has minimal field storage requirements (i.e. samples do not need to be immediately frozen and can be stored for a long period of time in a stable freezer environment before assay). The samples can be analyzed for a variety of different analytes, including cholesterol, C-reactive protein, glycosylated hemoglobin, and numerous cytokines, as well as provide genetic material. DBS collection is depicted as employed in several recent studies.
Medicine, Issue 83, dried blood spots (DBS), Biomarkers, cardiometabolic risk, Inflammation, standard precautions, blood collection
Growing Neural Stem Cells from Conventional and Nonconventional Regions of the Adult Rodent Brain
Institutions: University of Dresden, Center for Regenerative Therapies Dresden.
Recent work demonstrates that central nervous system (CNS) regeneration and tumorigenesis involve populations of stem cells (SCs) resident within the adult brain. However, the mechanisms these normally quiescent cells employ to ensure proper functioning of neural networks, as well as their role in recovery from injury and mitigation of neurodegenerative processes, are little understood. These cells reside in regions referred to as "niches" that provide a sustaining environment involving modulatory signals from both the vascular and immune systems. The isolation, maintenance, and differentiation of CNS SCs under defined culture conditions that exclude unknown factors make them accessible to treatment by pharmacological or genetic means, thus providing insight into their in vivo behavior. Here we offer detailed information on the methods for generating cultures of CNS SCs from distinct regions of the adult brain and approaches to assess their differentiation potential into neurons, astrocytes, and oligodendrocytes in vitro. This technique yields a homogeneous cell population as a monolayer culture that can be visualized to study individual SCs and their progeny. Furthermore, it can be applied across different animal model systems and clinical samples, having been used previously to predict regenerative responses in the damaged adult nervous system.
Neuroscience, Issue 81, adult neural stem cells, proliferation, differentiation, cell culture, growth factors
A Next-generation Tissue Microarray (ngTMA) Protocol for Biomarker Studies
Institutions: University of Bern.
Biomarker research relies on tissue microarrays (TMAs). TMAs are produced by repeated transfer of small tissue cores from a ‘donor’ block into a ‘recipient’ block and then used for a variety of biomarker applications. The construction of conventional TMAs is labor intensive, imprecise, and time-consuming. Here, a protocol using next-generation Tissue Microarrays (ngTMA) is outlined. ngTMA is based on TMA planning and design, digital pathology, and automated tissue microarraying. The protocol is illustrated using an example of 134 metastatic colorectal cancer patients. Histological, statistical and logistical aspects are considered, such as the tissue type, specific histological regions, and cell types for inclusion in the TMA, the number of tissue spots, sample size, statistical analysis, and number of TMA copies. Histological slides for each patient are scanned and uploaded onto a web-based digital platform. There, they are viewed and annotated (marked) using a 0.6-2.0 mm diameter tool, multiple times using various colors to distinguish tissue areas. Donor blocks and 12 ‘recipient’ blocks are loaded into the instrument. Digital slides are retrieved and matched to donor block images. Repeated arraying of annotated regions is automatically performed, resulting in an ngTMA.
In this example, six ngTMAs are planned containing six different tissue types/histological zones. Two copies of the ngTMAs are desired. Three to four slides for each patient are scanned; 3 scan runs are necessary and performed overnight. All slides are annotated; different colors are used to represent the different tissues/zones, namely tumor center, invasion front, tumor/stroma, lymph node metastases, liver metastases, and normal tissue. 17 annotations/case are made; time for annotation is 2-3 min/case. 12 ngTMAs are produced containing 4,556 spots. Arraying time is 15-20 hr.
Due to its precision, flexibility and speed, ngTMA is a powerful tool to further improve the quality of TMAs used in clinical and translational research.
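The spot count reported in the example follows directly from the stated design parameters. A short arithmetic check (all figures taken from the text above):

```python
patients = 134             # metastatic colorectal cancer cases
annotations_per_case = 17  # annotated regions per case
copies = 2                 # desired copies of each ngTMA

annotated_regions = patients * annotations_per_case   # total annotated regions
total_spots = annotated_regions * copies              # one spot per region per copy
print(total_spots)   # 4556, matching the 4,556 spots reported
```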
Medicine, Issue 91, tissue microarray, biomarkers, prognostic, predictive, digital pathology, slide scanning
Purification and microRNA Profiling of Exosomes Derived from Blood and Culture Media
Institutions: Drexel University College of Medicine.
Stable miRNAs are present in all body fluids and some circulating miRNAs are protected from degradation by sequestration in small vesicles called exosomes. Exosomes can fuse with the plasma membrane resulting in the transfer of RNA and proteins to the target cell. Their biological functions include immune response, antigen presentation, and intracellular communication. Delivery of miRNAs that can regulate gene expression in the recipient cells via blood has opened novel avenues for target intervention. In addition to offering a strategy for delivery of drugs or RNA therapeutic agents, exosomal contents can serve as biomarkers that can aid in diagnosis, determining treatment options and prognosis. Here we will describe the procedure for quantitatively analyzing miRNAs and messenger RNAs (mRNA) from exosomes secreted in blood and cell culture media. Purified exosomes will be characterized using western blot analysis for exosomal markers and PCR for mRNAs of interest. Transmission electron microscopy (TEM) and immunogold labeling will be used to validate exosomal morphology and integrity. Total RNA will be purified from these exosomes to ensure that we can study both mRNA and miRNA from the same sample. After validating RNA integrity by Bioanalyzer, we will perform a medium throughput quantitative real time PCR (qPCR) to identify the exosomal miRNA using Taqman Low Density Array (TLDA) cards and gene expression studies for transcripts of interest.
These protocols can be used to quantify changes in exosomal miRNAs in patients, rodent models and cell culture media before and after pharmacological intervention. Exosomal contents vary due to the source of origin and the physiological conditions of cells that secrete exosomes. These variations can provide insight on how cells and systems cope with stress or physiological perturbations. Our representative data show variations in miRNAs present in exosomes purified from mouse blood, human blood and human cell culture media.
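Quantitative comparison of exosomal miRNA levels from TLDA-style qPCR data is commonly done by relative quantification. As an illustration only (the study does not specify its normalization scheme, and all Ct values and assay names here are hypothetical), the standard 2^-ddCt calculation looks like this:

```python
# Hypothetical Ct values from a qPCR run: a target miRNA and an
# endogenous control, in a treated sample vs. an untreated reference
ct = {
    "treated":   {"miR_x": 24.0, "control": 18.0},
    "reference": {"miR_x": 26.5, "control": 18.5},
}

# Relative quantification by the 2^-ddCt method:
# normalize each sample to its control, then compare samples
dct_treated = ct["treated"]["miR_x"] - ct["treated"]["control"]        # 6.0
dct_reference = ct["reference"]["miR_x"] - ct["reference"]["control"]  # 8.0
ddct = dct_treated - dct_reference                                     # -2.0
fold_change = 2 ** -ddct
print(fold_change)   # 4.0: the miRNA is 4-fold enriched in the treated sample
```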
Genetics, Issue 76, Molecular Biology, Cellular Biology, Medicine, Biochemistry, Genomics, Pharmacology, Exosomes, RNA, MicroRNAs, Biomarkers, Pharmacological, Exosomes, microRNA, qPCR, PCR, blood, biomarker, TLDA, profiling, sequencing, cell culture
Rapid Analysis and Exploration of Fluorescence Microscopy Images
Institutions: UT Southwestern Medical Center, UT Southwestern Medical Center, Princeton University.
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.
Here we present an alternative, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify responses to perturbations, and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first-pass analysis for quality control, but may also be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
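The segmentation-free idea can be sketched in a few lines: tile the image into blocks, summarize each block, and represent the whole image by its distribution of block types. This is a simplified illustration of the general approach, not PhenoRipper's actual algorithm; the image, block size, and quantization scheme are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 2-channel fluorescence image, 128 x 128 pixels, values in [0, 1)
image = rng.random((128, 128, 2))

def block_profile(img, block=16, bins=4):
    """Segmentation-free image profile: tile the image into blocks,
    summarize each block by its per-channel mean, quantize those means,
    and histogram the resulting block types."""
    h, w, c = img.shape
    # (n_block_rows, block, n_block_cols, block, channels) -> per-block means
    blocks = img.reshape(h // block, block, w // block, block, c).mean(axis=(1, 3))
    quantized = np.clip((blocks * bins).astype(int), 0, bins - 1)
    # Encode each block's quantized channel means as a single type id
    types = quantized[..., 0] * bins + quantized[..., 1]
    profile = np.bincount(types.ravel(), minlength=bins * bins)
    return profile / profile.sum()

profile = block_profile(image)
print(profile.shape)   # (16,): one frequency per block type
```

Wells can then be compared by the distance between their profiles, without ever segmenting a single cell.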
Basic Protocol, Issue 85, PhenoRipper, fluorescence microscopy, image analysis, High-content analysis, high-throughput screening, Open-source, Phenotype
Method for Simultaneous fMRI/EEG Data Collection during a Focused Attention Suggestion for Differential Thermal Sensation
Institutions: University of California, Los Angeles, University of California, Los Angeles, Yale School of Medicine, Korean Basic Science Institute.
In the present work, we demonstrate a method for concurrent collection of EEG/fMRI data. In our setup, EEG data are collected using a high-density 256-channel sensor net. The EEG amplifier itself is contained in a field isolation containment system (FICS), and MRI clock signals are synchronized with EEG data collection for subsequent MR artifact characterization and removal. We demonstrate this method first for resting-state data collection. Thereafter, we demonstrate a protocol for EEG/fMRI data recording while subjects listen to a tape asking them to visualize that their left hand is immersed in a cold-water bath, referred to here as the cold glove paradigm. Thermal differentials between the hands are measured throughout EEG/fMRI data collection using an MR-compatible temperature sensor that we developed for this purpose. We collect cold glove EEG/fMRI data, along with simultaneous differential hand temperature measurements, both before and after hypnotic induction. Between the pre and post sessions, single-modality EEG data are collected during the hypnotic induction and depth assessment process. Our representative results demonstrate that significant changes in the EEG power spectrum can be measured during hypnotic induction, and that hand temperature changes during the cold glove paradigm can be detected rapidly using our MR-compatible differential thermometry device.
Behavior, Issue 83, hypnosis, EEG, fMRI, MRI, cold glove, MRI compatible, temperature sensor
Chemically-blocked Antibody Microarray for Multiplexed High-throughput Profiling of Specific Protein Glycosylation in Complex Samples
Institutions: Institute for Hepatitis and Virus Research, Thomas Jefferson University , Drexel University College of Medicine, Van Andel Research Institute, Serome Biosciences Inc..
In this study, we describe an effective protocol for use in a multiplexed high-throughput antibody microarray with glycan-binding protein detection that allows for the glycosylation profiling of specific proteins. Glycosylation is the most prevalent post-translational modification found on proteins, and leads to diverse modifications of the physical, chemical, and biological properties of proteins. Because the glycosylation machinery is particularly susceptible to disease progression and malignant transformation, aberrant glycosylation has been recognized as a source of early-detection biomarkers for cancer and other diseases. However, current methods to study protein glycosylation are typically too complicated or expensive for use in most normal laboratory or clinical settings, and a more practical method is needed. The new protocol described in this study makes use of a chemically blocked antibody microarray with glycan-binding protein (GBP) detection and significantly reduces the time, cost, and lab equipment requirements needed to study protein glycosylation. In this method, multiple immobilized glycoprotein-specific antibodies are printed directly onto the microarray slides and the N-glycans on the antibodies are blocked. The blocked, immobilized glycoprotein-specific antibodies are able to capture and isolate glycoproteins from a complex sample that is applied directly onto the microarray slides. Glycan detection then can be performed by the application of biotinylated lectins and other GBPs to the microarray slide, while binding levels can be determined using Dylight 549-Streptavidin. Through the use of an antibody panel and probing with multiple biotinylated lectins, this method allows an effective glycosylation profile of the different proteins found in a given human or animal sample to be developed.
Glycosylation of proteins, which is the most ubiquitous post-translational modification on proteins, modifies the physical, chemical, and biological properties of a protein, and plays a fundamental role in various biological processes1-6. Because the glycosylation machinery is particularly susceptible to disease progression and malignant transformation, aberrant glycosylation has been recognized as a source of early-detection biomarkers for cancer and other diseases7-12. In fact, most current cancer biomarkers, such as the L3 fraction of α-1 fetoprotein (AFP) for hepatocellular carcinoma13-15 and CA19-9 for pancreatic cancer16,17, are aberrant glycan moieties on glycoproteins. However, methods to study protein glycosylation have been complicated and not suitable for routine laboratory and clinical settings. Chen et al. recently invented a chemically blocked antibody microarray with a glycan-binding protein (GBP) detection method for high-throughput, multiplexed glycosylation profiling of native glycoproteins in a complex sample18. In this affinity-based microarray method, multiple immobilized glycoprotein-specific antibodies capture and isolate glycoproteins from the complex mixture directly on the microarray slide, and the glycans on each individual captured protein are measured by GBPs. Because all normal antibodies contain N-glycans that can be recognized by most GBPs, the critical step of this method is to chemically block the glycans on the antibodies from binding to GBPs.
In the procedure, the cis-diol groups of the glycans on the antibodies are first oxidized to aldehyde groups using NaIO4 in sodium acetate buffer, protected from light. The aldehyde groups are then conjugated to the hydrazide group of a cross-linker, 4-(4-N-maleimidophenyl)butyric acid hydrazide HCl (MPBH), followed by conjugation of a dipeptide, Cys-Gly, to the maleimide group of the MPBH. The cis-diol groups on the antibody glycans are thus converted into bulky non-hydroxyl groups, which hinder binding of lectins and other GBPs to the capture antibodies. This blocking procedure makes the GBPs and lectins bind only to the glycans of the captured proteins. After this chemical blocking, serum samples are incubated with the antibody microarray, followed by glycan detection using different biotinylated lectins and GBPs, visualized with Cy3-streptavidin. The parallel use of an antibody panel and multiple lectin probes provides discrete glycosylation profiles of multiple proteins in a given sample18-20. This method has been used successfully in multiple different labs1,7,13,19-31.
However, the stability of MPBH and Cys-Gly, together with the complicated and extended procedure, affects the reproducibility, effectiveness, and efficiency of the method. In this new protocol, we replaced both MPBH and Cys-Gly with a single, much more stable reagent, glutamic acid hydrazide (Glu-hydrazide), which significantly improves the reproducibility of the method and simplifies and shortens the whole procedure so that it can be completed within one working day. We describe the detailed procedure, which can be readily adopted by normal labs for routine protein glycosylation study, along with the techniques necessary to obtain reproducible and repeatable results.
Molecular Biology, Issue 63, Glycoproteins, glycan-binding protein, specific protein glycosylation, multiplexed high-throughput glycan blocked antibody microarray
Best Current Practice for Obtaining High Quality EEG Data During Simultaneous fMRI
Institutions: University of Nottingham , Brain Products GmbH.
Simultaneous EEG-fMRI allows the excellent temporal resolution of EEG to be combined with the high spatial accuracy of fMRI. The data from these two modalities can be combined in a number of ways, but all rely on the acquisition of high quality EEG and fMRI data. EEG data acquired during simultaneous fMRI are affected by several artifacts, including the gradient artifact (due to the changing magnetic field gradients required for fMRI), the pulse artifact (linked to the cardiac cycle), and movement artifacts (resulting from movements in the strong magnetic field of the scanner, and from muscle activity). Post-processing methods for successfully correcting the gradient and pulse artifacts require a number of criteria to be satisfied during data acquisition. Minimizing head motion during EEG-fMRI is also imperative for limiting the generation of artifacts.
Interactions between the radio frequency (RF) pulses required for MRI and the EEG hardware may occur and can cause heating. This is only a significant risk if safety guidelines are not satisfied. Hardware design and set-up, as well as careful selection of which MR sequences are run with the EEG hardware present must therefore be considered.
The above issues highlight the importance of the choice of the experimental protocol employed when performing a simultaneous EEG-fMRI experiment. Based on previous research, we describe an optimal experimental set-up. This provides high-quality EEG data during simultaneous fMRI when using commercial EEG and fMRI systems, with safety risks to the subject minimized. We demonstrate this set-up in an EEG-fMRI experiment using a simple visual stimulus. However, much more complex stimuli can be used. Here we show the EEG-fMRI set-up using a Brain Products GmbH (Gilching, Germany) MRplus, 32 channel EEG system in conjunction with a Philips Achieva (Best, Netherlands) 3T MR scanner, although many of the techniques are transferable to other systems.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Biophysics, Medicine, Neuroimaging, Functional Neuroimaging, Investigative Techniques, neurosciences, EEG, functional magnetic resonance imaging, fMRI, magnetic resonance imaging, MRI, simultaneous, recording, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
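The first (sequence-selection) stage described above can be caricatured as scoring candidate sequences with an energy function and rank-ordering them, mirroring the rank-ordered lists the workbench reports at each stage. The per-residue energies below are invented purely for illustration; Protein WISDOM's actual stage solves a combinatorial optimization over a pairwise potential on the structural template.

```python
# Hypothetical toy: rank candidate sequences by a simple additive "energy".
# The residue energies are made up for illustration only.
TOY_RESIDUE_ENERGY = {"A": -0.3, "L": -0.5, "K": 0.2, "E": 0.1, "G": 0.0}

def sequence_energy(seq):
    """Additive toy potential: lower is 'more stable'."""
    return sum(TOY_RESIDUE_ENERGY[res] for res in seq)

def rank_sequences(candidates):
    """Return candidates ordered from lowest (best) to highest toy energy."""
    return sorted(candidates, key=sequence_energy)

ranked = rank_sequences(["AKEG", "ALLG", "KKEE"])
```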
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic nonhuman characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology of the protocol, and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL, are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for preserving quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metric information as defined by FT. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
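Fractional anisotropy, the diffusion direction-based metric referred to throughout, is computed from the eigenvalues of the fitted diffusion tensor. A minimal sketch of the standard formula (tensor fitting itself is omitted):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three diffusion-tensor eigenvalues:
    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2)
                   / sqrt(l1^2 + l2^2 + l3^2).
    FA is 0 for isotropic diffusion and approaches 1 when diffusion is
    restricted to a single direction (e.g. along a coherent fiber tract)."""
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * num / den
```

Voxelwise FA maps used in the group comparisons above are simply this quantity evaluated at every voxel.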
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization that have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging systems. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe the use of PAFP and PSFP expression to image two protein species in fixed cells; extension of the technique to living cells is also described.
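The quoted ~10-30 nm precision can be related to detected photon counts with the widely used Thompson-Larson-Webb approximation; the sketch below is that textbook formula with illustrative values, not a calculation from this article.

```python
import math

def localization_precision(s, a, b, N):
    """Approximate 2D single-molecule localization precision
    (Thompson, Larson & Webb, 2002):
    s = PSF standard deviation, a = pixel size (same units as s),
    b = background noise (photons/pixel), N = detected photons.
    Precision improves roughly as 1/sqrt(N)."""
    variance = (s ** 2 + a ** 2 / 12.0) / N \
               + 8 * math.pi * s ** 4 * b ** 2 / (a ** 2 * N ** 2)
    return math.sqrt(variance)
```

With plausible values (s ~ 130 nm, pixel ~ 100 nm, modest background, ~1000 photons), the formula lands in the few-nanometer to tens-of-nanometers range consistent with the text.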
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Microarray-based Identification of Individual HERV Loci Expression: Application to Biomarker Discovery in Prostate Cancer
Institutions: Joint Unit Hospices de Lyon-bioMérieux, BioMérieux, Hospices Civils de Lyon, Lyon 1 University, BioMérieux, Hospices Civils de Lyon, Hospices Civils de Lyon.
The prostate-specific antigen (PSA) is the main diagnostic biomarker for prostate cancer in clinical use, but it lacks specificity and sensitivity, particularly at low dosage values1. 'How to use PSA' remains a current issue, either for diagnosis, as a gray zone corresponding to a serum concentration of 2.5-10 ng/ml does not allow a clear differentiation to be made between cancer and noncancer2, or for patient follow-up, as analysis of post-operative PSA kinetic parameters can pose considerable challenges for practical application3,4. Alternatively, noncoding RNAs (ncRNAs) are emerging as key molecules in human cancer, with the potential to serve as novel markers of disease, e.g. PCA3 in prostate cancer5,6, and to reveal uncharacterized aspects of tumor biology. Moreover, data from the ENCODE project published in 2012 showed that different RNA types cover about 62% of the genome. It also appears that the number of transcriptional regulatory motifs is at least 4.5x higher than that corresponding to protein-coding exons. Thus, long terminal repeats (LTRs) of human endogenous retroviruses (HERVs) constitute a wide range of putative/candidate transcriptional regulatory sequences, as this is their primary function in infectious retroviruses. HERVs, which are spread throughout the human genome, originate from ancestral and independent infections within the germ line, followed by copy-paste propagation processes leading to multicopy families occupying 8% of the human genome (note that exons span 2% of our genome). Some HERV loci still express proteins that have been associated with several pathologies, including cancer7-10. We have designed a high-density microarray, in Affymetrix format, aiming to optimally characterize individual HERV loci expression, in order to better understand whether they can be active, whether they drive ncRNA transcription, or whether they modulate coding gene expression. This tool has been applied in the prostate cancer field (Figure 1).
Medicine, Issue 81, Cancer Biology, Genetics, Molecular Biology, Prostate, Retroviridae, Biomarkers, Pharmacological, Tumor Markers, Biological, Prostatectomy, Microarray Analysis, Gene Expression, Diagnosis, Human Endogenous Retroviruses, HERV, microarray, Transcriptome, prostate cancer, Affymetrix
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD).
Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits its greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g. working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices, and medial temporal regions.
Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention, as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques1,5-9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also yield greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, but with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier to entry for multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination, so that researchers can employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction to multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
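To make the covariance-pattern idea concrete, the sketch below mean-centers a toy subjects-by-regions matrix, extracts the leading principal component by power iteration, and computes each subject's scalar pattern-expression score. Toy data and plain-Python linear algebra for illustration only; real analyses operate on voxelwise images with dedicated numerical libraries.

```python
import math

def mean_center(data):
    """Remove the column (region) means from a subjects x regions matrix."""
    n = len(data)
    means = [sum(row[j] for row in data) / n for j in range(len(data[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in data]

def covariance(data):
    """Region x region covariance matrix of mean-centered data."""
    n, p = len(data), len(data[0])
    return [[sum(data[s][i] * data[s][j] for s in range(n)) / (n - 1)
             for j in range(p)] for i in range(p)]

def leading_pattern(cov, iters=200):
    """Leading eigenvector (the dominant covariance pattern) via power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def subject_scores(data, pattern):
    """Each subject's scalar expression of the pattern (dot product)."""
    return [sum(x * p for x, p in zip(row, pattern)) for row in data]

# Toy example: 4 subjects, 3 regions; regions 1 and 2 covary strongly.
raw = [[2.0, 2.1, 0.5], [1.0, 1.1, 0.4], [3.0, 2.9, 0.6], [0.0, 0.2, 0.5]]
centered = mean_center(raw)
pattern = leading_pattern(covariance(centered))
scores = subject_scores(centered, pattern)
```

The pattern is a unit vector of region weights; the scores are the single scalars per subject that, in the approaches discussed above, are correlated with behavior or diagnosis.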
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Pyrosequencing: A Simple Method for Accurate Genotyping
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing(R) is one of the most thorough yet simple methods to date used to analyze polymorphisms. It can also identify tri-allelic polymorphisms, indels, and short-repeat polymorphisms, and determine allele percentages for methylation or pooled-sample assessment. In addition, there is a standardized control sequence that provides internal quality control. This method has led to rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing are explained.
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif-finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (e.g. ACCGGT); PRISM10, which finds degenerate motifs (e.g. ASCGWT); and SPACER11, which finds longer bipartite motifs (e.g. ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well.
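The three motif classes above (exact, degenerate IUPAC, and bipartite with an 'n' spacer) can all be expressed as regular expressions for simple occurrence counting. The sketch below only illustrates what "finding" each motif type means; it is not the scoring used by BEAM, PRISM, or SPACER.

```python
import re

# IUPAC nucleotide codes -> regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]", "n": "[ACGT]"}

def motif_to_regex(motif):
    """Translate a consensus motif (exact, degenerate, or bipartite) to a regex."""
    return re.compile("".join(IUPAC[c] for c in motif))

def count_occurrences(motif, sequences):
    """Total non-overlapping motif matches across a set of sequences."""
    pattern = motif_to_regex(motif)
    return sum(len(pattern.findall(seq)) for seq in sequences)
```

For example, the degenerate PRISM-style motif ASCGWT matches both ACCGAT and AGCGTT, while the SPACER-style ACCnnnnnnnnGGT matches any 8-base spacer between the two half-sites.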
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor.
Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a user-friendly interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Functional Mapping with Simultaneous MEG and EEG
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate and determine the temporal evolution in brain areas involved in the processing of simple sensory stimuli. We will use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, visual stimuli in four quadrants of the visual field to locate the early visual areas. These type of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically-constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization and for deriving boundaries of tissue boundaries for forward modeling and cortical location and orientation constraints for the minimum-norm estimates.
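The minimum-norm estimate mentioned above resolves the underdetermined inverse problem by choosing, among all source distributions consistent with the measured field, the one with smallest norm: s = Lᵀ(LLᵀ + λI)⁻¹y for leadfield L, measurement y, and regularization λ. A deliberately tiny sketch with two sensors (the 2x2 inverse written out by hand; real implementations use many sensors, noise-covariance whitening, and the cortical orientation constraints described above):

```python
def minimum_norm_estimate(L, y, lam):
    """Minimum-norm source estimate s = L^T (L L^T + lam*I)^-1 y
    for a 2-sensor leadfield L (2 x n_sources), measurements y (length 2),
    and Tikhonov regularization lam. The 2x2 Gram matrix is inverted
    explicitly for clarity."""
    # G = L L^T + lam * I  (2x2, symmetric)
    g00 = sum(a * a for a in L[0]) + lam
    g01 = sum(a * b for a, b in zip(L[0], L[1]))
    g11 = sum(b * b for b in L[1]) + lam
    det = g00 * g11 - g01 * g01
    # w = G^-1 y
    w0 = (g11 * y[0] - g01 * y[1]) / det
    w1 = (g00 * y[1] - g01 * y[0]) / det
    # s = L^T w
    return [L[0][j] * w0 + L[1][j] * w1 for j in range(len(L[0]))]
```

With λ = 0 and an orthonormal leadfield the estimate reproduces the measurements exactly; increasing λ shrinks the solution toward zero, trading fidelity for noise robustness.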
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Institutions: Universite de Montreal, Universite de Montreal, Universite de Montreal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases, where there is strong negative selection against the phenotype (such as lethality in embryonic stages or reduced reproductive fitness), will not be transmitted to multiple family members and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1 and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant, relatively high prevalence in the worldwide population across a wide range of cultures and countries, despite strong negative selection against them2. This is not what one would predict for diseases with reduced reproductive fitness, unless there were a high new-mutation rate. Finally, there are the effects of paternal age: there is a significantly increased risk of the disease with increasing paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia3. The male-to-female ratio of mutation rate is estimated at about 4-6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males4. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why diseases have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining whether the genetic basis of a disease may result in part from de novo mutations, and the molecular approach to establish this link, will be illustrated using autism and schizophrenia as examples.
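The balance described above between new alleles entering the population by de novo mutation and their removal by reduced reproductive fitness is the classical mutation-selection balance; for a dominant deleterious variant the equilibrium prevalence is approximately μ/s. This is a textbook population-genetics result, not a calculation from the article, and the rates below are purely illustrative.

```python
def equilibrium_prevalence(mu, s):
    """Mutation-selection balance for a dominant deleterious variant:
    mu = per-generation rate at which new disease alleles arise,
    s  = selection coefficient (relative loss of reproductive fitness).
    At equilibrium, alleles gained by de novo mutation equal alleles
    lost to selection, so prevalence ~= mu / s."""
    return mu / s

# Illustrative numbers only: even strong selection (s = 0.5) sustains a
# ~1% prevalence if the aggregate de novo mutation rate is 5e-3 per
# generation across the contributing loci.
prevalence = equilibrium_prevalence(5e-3, 0.5)
```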
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing