The BCL-2 (B cell CLL/Lymphoma) family comprises approximately twenty proteins that collaborate to either maintain cell survival or initiate apoptosis1. Following cellular stress (e.g., DNA damage), the pro-apoptotic BCL-2 family effectors BAK (BCL-2 antagonistic killer 1) and/or BAX (BCL-2 associated X protein) become activated and compromise the integrity of the outer mitochondrial membrane (OMM), through the process referred to as mitochondrial outer membrane permeabilization (MOMP)1. After MOMP occurs, pro-apoptotic proteins (e.g., cytochrome c) gain access to the cytoplasm, promote caspase activation, and apoptosis rapidly ensues2.
To induce MOMP, BAK/BAX require transient interactions with members of another pro-apoptotic subset of the BCL-2 family, the BCL-2 homology domain 3 (BH3)-only proteins, such as BID (BH3-interacting domain death agonist)3-6. Anti-apoptotic BCL-2 family proteins (e.g., BCL-2 related gene, long isoform, BCL-xL; myeloid cell leukemia 1, MCL-1) regulate cellular survival by tightly controlling the interactions between BAK/BAX and the BH3-only proteins capable of directly inducing BAK/BAX activation7,8. In addition, anti-apoptotic BCL-2 protein availability is dictated by sensitizer/de-repressor BH3-only proteins, such as BAD (BCL-2 antagonist of cell death) or PUMA (p53 upregulated modulator of apoptosis), which bind and inhibit anti-apoptotic members7,9. As most of the anti-apoptotic BCL-2 repertoire is localized to the OMM, the cellular decision to maintain survival or induce MOMP is dictated by multiple BCL-2 family interactions at this membrane.
Large unilamellar vesicles (LUVs) are a biochemical model for exploring the relationships between BCL-2 family interactions and membrane permeabilization10. LUVs are composed of defined lipids assembled in ratios identified by lipid composition studies of solvent-extracted Xenopus mitochondria (46.5% phosphatidylcholine, 28.5% phosphatidylethanolamine, 9% phosphatidylinositol, 9% phosphatidylserine, and 7% cardiolipin)10. This is a convenient model system for directly exploring BCL-2 family function because the protein and lipid components are completely defined and tractable, which is not always the case with primary mitochondria. While cardiolipin levels are not usually this high throughout the OMM, this composition faithfully mimics the OMM in promoting BCL-2 family function. Furthermore, a more recent modification of the above protocol allows for kinetic analyses of protein interactions and real-time measurements of membrane permeabilization; it is based on LUVs containing a polyanionic dye (ANTS: 8-aminonaphthalene-1,3,6-trisulfonic acid) and a cationic quencher (DPX: p-xylene-bis-pyridinium bromide)11. As the LUVs permeabilize, ANTS and DPX diffuse apart, and a gain in fluorescence is detected. Here, commonly used recombinant BCL-2 family protein combinations and controls using the LUVs containing ANTS/DPX are described.
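The ANTS/DPX dequenching signal is conventionally normalized between a baseline (intact, quenched LUVs) and a detergent-lysis maximum. A minimal sketch of that normalization in Python; the function name and fluorescence values are illustrative, not from the protocol:

```python
def percent_release(f, f0, f100):
    """Convert a raw ANTS fluorescence reading to percent LUV
    permeabilization, normalized between intact and fully lysed LUVs.

    f    : fluorescence at the time point of interest
    f0   : baseline fluorescence of intact (quenched) LUVs
    f100 : maximal fluorescence after complete lysis (e.g., detergent)
    """
    return 100.0 * (f - f0) / (f100 - f0)

# Hypothetical raw readings over time for a BH3-only + BAX reaction
trace = [210.0, 480.0, 690.0]
f0, f100 = 200.0, 1200.0
release = [percent_release(f, f0, f100) for f in trace]
```

A reading halfway between the baseline and the lysis maximum corresponds to 50% release.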
3D-Neuronavigation In Vivo Through a Patient's Brain During a Spontaneous Migraine Headache
Institutions: University of Michigan School of Dentistry, University of Michigan.
A growing body of research, generated primarily from MRI-based studies, shows that migraine appears to occur, and possibly endure, due to the alteration of specific neural processes in the central nervous system. However, information is lacking on the molecular impact of these changes, especially on the endogenous opioid system during migraine headaches, and neuronavigation through these changes has never been done. This study aimed to investigate, using a novel 3D immersive and interactive neuronavigation (3D-IIN) approach, the endogenous µ-opioid transmission in the brain during a migraine headache attack in vivo. This is arguably one of the most central neuromechanisms associated with pain regulation, affecting multiple elements of the pain experience and analgesia. A 36-year-old female, who had been suffering from migraine for 10 years, was scanned in the typical headache (ictal) and nonheadache (interictal) migraine phases using Positron Emission Tomography (PET) with the selective radiotracer [11C]carfentanil, which allowed us to measure µ-opioid receptor availability in the brain (non-displaceable binding potential, µOR BPND). The short-lived radiotracer was produced by a cyclotron and chemical synthesis apparatus on campus, located in close proximity to the imaging facility. Both PET scans, interictal and ictal, were scheduled during separate mid-late follicular phases of the patient's menstrual cycle. During the ictal PET session her spontaneous headache attack reached severe intensity levels, progressing to nausea and vomiting at the end of the scan session. There were reductions in µOR BPND in the pain-modulatory regions of the endogenous µ-opioid system during the ictal phase, including the cingulate cortex, nucleus accumbens (NAcc), thalamus (Thal), and periaqueductal gray matter (PAG), indicating that µORs were already occupied by endogenous opioids released in response to the ongoing pain. To our knowledge, this is the first time that changes in µOR BPND during a migraine headache attack have been neuronavigated using a novel 3D approach. This method allows for interactive research and educational exploration of a migraine attack in an actual patient's neuroimaging dataset.
Medicine, Issue 88, μ-opioid, opiate, migraine, headache, pain, Positron Emission Tomography, molecular neuroimaging, 3D, neuronavigation
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM)1-4 is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data2,5,6. Subjects express each of these patterns to a variable degree, represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors7,8. Using logistic regression analysis of subject scores (i.e., pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e., composite networks with improved discrimination of patients from healthy control subjects5,6. Cross-validation within the derivation set can be performed using bootstrap resampling techniques9. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets10. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation11. These standardized values can in turn be used to assist in differential diagnosis12,13 and to assess disease progression and treatment effects at the network level7,14-16. We present an example of the application of this methodology to FDG PET data of Parkinson's disease patients and normal controls, using our in-house software to derive a characteristic covariance pattern biomarker of disease.
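The core SSM steps named above (logarithmic conversion, removal of the global mean scalar effect, PCA) can be sketched numerically. This toy implementation uses an SVD in place of the authors' in-house software; the array layout and synthetic data are assumptions for illustration:

```python
import numpy as np

def ssm_patterns(images, n_components=2):
    """Toy sketch of the SSM pipeline: log-transform the voxel data,
    remove the subject mean (the global scalar effect) and the group
    voxel mean, then extract principal covariance patterns (GIS-like
    components) and per-subject expression scores via SVD.

    images : (n_subjects, n_voxels) array of strictly positive values
    """
    logged = np.log(images)
    centered = logged - logged.mean(axis=1, keepdims=True)      # drop global scaling
    centered = centered - centered.mean(axis=0, keepdims=True)  # group mean centering
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    patterns = vt[:n_components]                     # spatial covariance patterns
    scores = u[:, :n_components] * s[:n_components]  # subject expression scores
    return patterns, scores

# Synthetic "image" data: 8 subjects, 50 voxels
rng = np.random.default_rng(0)
group_images = rng.uniform(1.0, 10.0, size=(8, 50))
patterns, scores = ssm_patterns(group_images)
```

Because the data are mean-centered across subjects, the subject scores for each pattern average to zero over the group; patient-vs-control separation then rests on how the two groups distribute around that mean.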
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Assessment of Age-related Changes in Cognitive Functions Using EmoCogMeter, a Novel Tablet-computer Based Approach
Institutions: Freie Universität Berlin, Charité Berlin, Psychiatric University Hospital Zurich.
The main goal of this study was to assess the usability of a tablet-computer-based application (EmoCogMeter) in investigating the effects of age on cognitive functions across the lifespan in a sample of 378 healthy subjects (age range 18-89 years). Consistent with previous findings we found an age-related cognitive decline across a wide range of neuropsychological domains (memory, attention, executive functions), thereby proving the usability of our tablet-based application. Regardless of prior computer experience, subjects of all age groups were able to perform the tasks without instruction or feedback from an experimenter. Increased motivation and compliance proved to be beneficial for task performance, thereby potentially increasing the validity of the results. Our promising findings underline the great clinical and practical potential of a tablet-based application for detection and monitoring of cognitive dysfunction.
Behavior, Issue 84, Neuropsychological Testing, cognitive decline, age, tablet-computer, memory, attention, executive functions
A β-glucuronidase (GUS) Based Cell Death Assay
Institutions: Texas A&M University.
We have developed a novel transient plant expression system that simultaneously expresses the reporter gene β-glucuronidase (GUS) with putative positive or negative regulators of cell death. In this system, N. benthamiana leaves are co-infiltrated with a 35S-driven expression cassette containing the gene to be analyzed and the GUS vector pCAMBIA 2301, using Agrobacterium strain LBA4404 as a vehicle. Because live cells are required for GUS expression to occur, loss of GUS activity is expected when this marker gene is co-expressed with positive regulators of cell death. Conversely, increased GUS activity is observed when anti-apoptotic genes are used, compared to the vector control. We have successfully used this system in our lab to analyze both pro- and anti-death players. These include the plant anti-apoptotic Bcl-2 Associated athanoGene (BAG) family, as well as known mammalian inducers of cell death, such as BAX. Additionally, we have used this system to analyze the death function of specific truncations within proteins, which could provide clues on the possible post-translational modification/activation of these proteins. Here, we present a rapid and sensitive plant-based method as an initial step in investigating the death function of specific genes.
Plant Biology, Issue 51, Cell death, GUS, Transient expression, Nicotiana benthamiana.
F1FO ATPase Vesicle Preparation and Technique for Performing Patch Clamp Recordings of Submitochondrial Vesicle Membranes
Institutions: Yale University.
Mitochondria are involved in many important cellular functions, including metabolism, survival1, development, and calcium signaling2. Two of the most important mitochondrial functions are related to the efficient production of ATP, the energy currency of the cell, by oxidative phosphorylation, and the mediation of signals for programmed cell death3. The enzyme primarily responsible for the production of ATP is the F1FO-ATP synthase, also called ATP synthase4-5. In recent years, the role of mitochondria in apoptotic and necrotic cell death has received considerable attention. In apoptotic cell death, BCL-2 family proteins such as Bax enter the mitochondrial outer membrane, oligomerize, and permeabilize the outer membrane, releasing pro-apoptotic factors into the cytosol6. In classic necrotic cell death, such as that produced by ischemia or excitotoxicity in neurons, a large, poorly regulated increase in matrix calcium contributes to the opening of an inner membrane pore, the mitochondrial permeability transition pore (mPTP). This depolarizes the inner membrane and causes osmotic shifts, contributing to outer membrane rupture, release of pro-apoptotic factors, and metabolic dysfunction. Many proteins, including Bcl-xL7, interact with the F1FO ATP synthase and modulate its function. Bcl-xL interacts directly with the beta subunit of the F1FO ATP synthase, and this interaction decreases a leak conductance within the F1FO ATPase complex, increasing the net transport of H+ by F1FO during F1FO ATPase activity8 and thereby increasing mitochondrial efficiency. To study the activity and modulation of the ATP synthase, we isolated submitochondrial vesicles (SMVs) containing the F1FO ATPase from rodent brain. The SMVs retain the structural and functional integrity of the F1FO ATPase, as shown in Alavian et al. Here, we describe a method that we have used successfully for the isolation of SMVs from rat brain, and we delineate the patch clamp technique used to analyze channel activity (ion leak conductance) of the SMVs.
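Converting a measured patch-clamp current amplitude into a conductance uses the standard chord-conductance relation g = I / (V − V_rev). The helper below is an illustrative calculation with hypothetical values, not the authors' analysis pipeline:

```python
def chord_conductance_pS(current_pA, voltage_mV, reversal_mV=0.0):
    """Chord conductance of a channel from a patch-clamp current
    amplitude: g = I / (V - V_rev). pA/mV yields nS, so scale to pS."""
    return 1000.0 * current_pA / (voltage_mV - reversal_mV)

# e.g., a 5 pA opening measured at +50 mV (hypothetical values)
g = chord_conductance_pS(5.0, 50.0)
```

With these numbers the opening corresponds to a 100 pS conductance; comparing such values with and without added Bcl-xL is one way to quantify modulation of the leak.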
Neuroscience, Issue 75, Medicine, Biomedical Engineering, Molecular Biology, Cellular Biology, Biochemistry, Neurobiology, Anatomy, Physiology, F1FO ATPase, mitochondria, patch clamp, electrophysiology, submitochondrial vesicles, Bcl-xL, cells, rat, animal model
An Allele-specific Gene Expression Assay to Test the Functional Basis of Genetic Associations
Institutions: University of Oxford.
The number of significant genetic associations with common complex traits is constantly increasing. However, most of these associations have not been understood at the molecular level. One of the mechanisms mediating the effect of DNA variants on phenotypes is gene expression, which has been shown to be particularly relevant for complex traits1. This method tests, in a cellular context, the effect of specific DNA sequences on gene expression. The principle is to measure the relative abundance of transcripts arising from the two alleles of a gene, analysing cells which carry one copy of the DNA sequences associated with disease (the risk variants)2,3. Therefore, the cells used for this method should meet two fundamental genotypic requirements: they have to be heterozygous both for the DNA risk variants and for DNA markers, typically coding polymorphisms, which can distinguish transcripts based on their chromosomal origin (Figure 1). The DNA risk variants and DNA markers do not need to have the same allele frequency, but the phase (haplotypic) relationship of the genetic markers needs to be understood. It is also important to choose cell types which express the gene of interest. This protocol refers specifically to the procedure adopted to extract nucleic acids from fibroblasts, but the method is equally applicable to other cell types, including primary cells.
DNA and RNA are extracted from the selected cell lines and cDNA is generated. DNA and cDNA are analysed with a primer extension assay designed to target the coding DNA markers4. The primer extension assay is carried out on the MassARRAY (Sequenom)5 platform according to the manufacturer's specifications. Primer extension products are then analysed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Because the selected markers are heterozygous, they will generate two peaks on the MS profiles. The area of each peak is proportional to the transcript abundance and can be measured with a function of the MassARRAY Typer software to generate an allelic ratio (allele 1 : allele 2). The allelic ratio obtained for cDNA is normalized using that measured from genomic DNA, where the allelic ratio is expected to be 1:1, to correct for technical artifacts. Markers with a normalized allelic ratio significantly different from 1 indicate that the amount of transcript generated from the two chromosomes in the same cell is different, suggesting that the DNA variants associated with the phenotype have an effect on gene expression. Experimental controls should be used to confirm the results.
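The normalization step described above reduces to dividing the cDNA peak-area ratio by the genomic DNA ratio. A minimal sketch with hypothetical peak areas (the function name and numbers are illustrative):

```python
def normalized_allelic_ratio(cdna_a1, cdna_a2, gdna_a1, gdna_a2):
    """Allelic ratio from cDNA peak areas, normalized by the genomic
    DNA ratio (expected to be 1:1) to correct for technical artifacts."""
    return (cdna_a1 / cdna_a2) / (gdna_a1 / gdna_a2)

# Hypothetical MALDI-TOF peak areas for alleles 1 and 2
ratio = normalized_allelic_ratio(600.0, 300.0, 500.0, 500.0)
```

Here the normalized ratio is 2.0; a value significantly different from 1 would suggest allele-specific expression driven by the risk haplotype.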
Cellular Biology, Issue 45, Gene expression, regulatory variant, haplotype, association study, primer extension, MALDI-TOF mass spectrometry, single nucleotide polymorphism, allele-specific
How to Measure Cortical Folding from MR Images: a Step-by-Step Tutorial to Compute Local Gyrification Index
Institutions: University of Geneva School of Medicine, École Polytechnique Fédérale de Lausanne, University Hospital Center and University of Lausanne, Massachusetts General Hospital.
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues2, several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length, or indices of inter-hemispheric asymmetry3. These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface4. The curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with an exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index5, a method originally used in comparative neuroanatomy to evaluate the cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion6, our method was specifically designed to identify early defects of cortical development.
In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study1. This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues7, where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
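In terms of arithmetic, the lGI for one region of interest is simply the total (buried plus visible) cortical surface divided by the outer-hull surface. A minimal sketch, assuming both areas have already been measured for each ROI (the values below are hypothetical, not from the validation study):

```python
def local_gyrification_index(cortical_area, outer_hull_area):
    """lGI for one circular ROI: total pial (cortical) surface,
    including cortex buried within sulci, divided by the visible
    outer-hull surface. Values near 1 indicate flat cortex; higher
    values indicate stronger folding."""
    return cortical_area / outer_hull_area

# Hypothetical per-ROI surface areas (mm^2): (cortical, outer hull)
rois = [(30.0, 10.0), (12.0, 10.0)]
lgi_map = [local_gyrification_index(c, o) for c, o in rois]
```

Iterating this over overlapping ROIs covering the outer surface yields the cortical gyrification map used for the statistical comparisons.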
Medicine, Issue 59, neuroimaging, brain, cortical complexity, cortical development
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD).
Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices, and medial temporal regions.
Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Assessment of Selective mRNA Translation in Mammalian Cells by Polysome Profiling
Institutions: University of Ottawa, Montreal Neurological Institute.
Regulation of protein synthesis represents a key control point in the cellular response to stress. In particular, discrete RNA regulatory elements have been shown to allow selective translation of specific mRNAs, which typically encode proteins required for a particular stress response. Identification of these mRNAs, as well as characterization of the regulatory mechanisms responsible for selective translation, has been at the forefront of molecular biology for some time. Polysome profiling is a cornerstone method in these studies. The goal of polysome profiling is to capture mRNA translation by immobilizing actively translating ribosomes on different transcripts and separating the resulting polyribosomes by ultracentrifugation on a sucrose gradient, thus allowing for a distinction between highly translated transcripts and poorly translated ones. These can then be further characterized by traditional biochemical and molecular biology methods. Importantly, combining polysome profiling with high-throughput genomic approaches allows for a large-scale analysis of translational regulation.
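The gradient readout lends itself to a simple summary statistic: the share of a transcript's RT-qPCR signal sedimenting in the polysome-containing fractions. A sketch under the assumption that fractions are ordered from the top of the gradient down and that the first polysome fraction is known; the data are hypothetical:

```python
def polysome_fraction(qpcr_by_fraction, polysome_start):
    """Share of a transcript's RT-qPCR signal found in the polysome
    fractions of a sucrose gradient.

    qpcr_by_fraction : signal per fraction, ordered top to bottom
    polysome_start   : index of the first polysome-containing fraction
    """
    total = sum(qpcr_by_fraction)
    return sum(qpcr_by_fraction[polysome_start:]) / total

# Hypothetical four-fraction gradient; fractions 2-3 contain polysomes
share = polysome_fraction([1.0, 1.0, 2.0, 6.0], polysome_start=2)
```

A transcript with most of its signal in the heavy fractions (here, a share of 0.8) would be classed as efficiently translated; a shift of this share between conditions indicates translational regulation.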
Cellular Biology, Issue 92, cellular stress, translation initiation, internal ribosome entry site, polysome, RT-qPCR, gradient
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). However, studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3-6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing, is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR-compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
Development of automated imaging and analysis for zebrafish chemical screens.
Institutions: University of Pittsburgh Drug Discovery Institute, University of Pittsburgh.
We demonstrate the application of image-based high-content screening (HCS) methodology to identify small molecules that can modulate the FGF/RAS/MAPK pathway in zebrafish embryos. The zebrafish embryo is an ideal system for in vivo high-content chemical screens. The 1-day-old embryo is approximately 1 mm in diameter and can be easily arrayed into 96-well plates, a standard format for high-throughput screening. During the first day of development, embryos are transparent, with most of the major organs present, thus enabling visualization of tissue formation during embryogenesis. The complete automation of zebrafish chemical screens is still a challenge, however, particularly in the development of automated image acquisition and analysis. We previously generated a transgenic reporter line that expresses green fluorescent protein (GFP) under the control of FGF activity and demonstrated its utility in chemical screens1. To establish methodology for high-throughput whole-organism screens, we developed a system for automated imaging and analysis of zebrafish embryos at 24-48 hours post fertilization (hpf) in 96-well plates2. In this video we highlight the procedures for arraying transgenic embryos into multiwell plates at 24 hpf and the addition of a small molecule (BCI) that hyperactivates FGF signaling3. The plates are incubated for 6 hours, followed by the addition of tricaine to anesthetize larvae prior to automated imaging on a Molecular Devices ImageXpress Ultra laser scanning confocal HCS reader. Images are processed by Definiens Developer software using a Cognition Network Technology algorithm that we developed to detect and quantify expression of GFP in the heads of transgenic embryos. In this example we highlight the ability of the algorithm to measure dose-dependent effects of BCI on GFP reporter gene expression in treated embryos.
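Downstream of the image analysis, dose dependence can be summarized as fold change in mean per-embryo GFP signal over the vehicle control. A sketch with a hypothetical data layout and treatment labels (not the Definiens algorithm itself):

```python
def fold_change_by_dose(gfp_by_treatment, control_key="DMSO"):
    """Mean per-embryo GFP signal for each treatment condition,
    expressed as fold change over the vehicle control.

    gfp_by_treatment : dict mapping treatment label -> list of
                       per-embryo GFP intensities (hypothetical layout)
    """
    means = {k: sum(v) / len(v) for k, v in gfp_by_treatment.items()}
    control = means[control_key]
    return {k: m / control for k, m in means.items()}

# Hypothetical per-embryo GFP intensities from the image analysis
fc = fold_change_by_dose({"DMSO": [10.0, 10.0], "BCI_1uM": [20.0, 20.0]})
```

Plotting such fold changes against BCI concentration is one way to visualize the dose-dependent hyperactivation of the FGF reporter.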
Cellular Biology, Issue 40, Zebrafish, Chemical Screens, Cognition Network Technology, Fibroblast Growth Factor, (E)-2-benzylidene-3-(cyclohexylamino)-2,3-dihydro-1H-inden-1-one (BCI),Tg(dusp6:d2EGFP)
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g., leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues, which are commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs.
LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors.
While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
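The channel-to-source step referred to here (minimum-norm estimation) can be sketched in a few lines: given a lead field matrix relating source currents to sensor readings, the regularized minimum-norm inverse maps sensor data back to source space. The lead field, channel/source counts, and regularization value below are illustrative assumptions, not the actual London Baby Lab pipeline.

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=1e-2):
    """Regularized minimum-norm source estimate.

    leadfield -- (n_channels, n_sources) forward model
    data      -- (n_channels,) or (n_channels, n_times) sensor data
    lam       -- Tikhonov regularization parameter
    """
    n_ch = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_ch)
    # s_hat = L^T (L L^T + lam I)^{-1} y  -- the classic minimum-norm inverse
    return leadfield.T @ np.linalg.solve(gram, data)

rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))         # 32 channels, 500 cortical sources
s_true = np.zeros(500)
s_true[42] = 1.0                           # a single active source
y = L @ s_true                             # noiseless sensor data
s_hat = minimum_norm_estimate(L, y, lam=1e-6)
print(s_hat.shape)                         # (500,)
```

The solution is the minimum-norm current distribution consistent with the data; it spreads energy across many sources, which is why the estimate at the true source is attenuated rather than recovered exactly.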
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
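The central metric in these analyses, fractional anisotropy, is a standard function of the three eigenvalues of the fitted diffusion tensor. A minimal sketch (the function name is ours; real pipelines fit the tensor per voxel first):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||; it is 0 for isotropic
    diffusion and approaches 1 for diffusion along a single direction.
    """
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                        # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(1.5 * num / den)

print(fractional_anisotropy((1.0, 1.0, 1.0)))             # 0.0 (isotropic)
print(round(fractional_anisotropy((1.7, 0.3, 0.3)), 3))   # 0.799 (anisotropic)
```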
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Training Synesthetic Letter-color Associations by Reading in Color
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing, or thinking about letters, words, and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly: the reader reads text as he or she normally would, and no explicit computer-directed training is required. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia; we claim only that it is possible for certain individuals to form letter-color associations by reading in color, and that these associations are similar in some respects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Institutions: Universite de Montreal, Universite de Montreal, Universite de Montreal.
There are several lines of evidence supporting the role of de novo mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1 and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant, relatively high prevalence in the worldwide population across a wide range of cultures and countries despite a strong negative selection against them2. This is not what one would predict in diseases with reduced reproductive fitness, unless there was a high new mutation rate. Finally, there are the effects of paternal age: there is a significantly increased risk of the disease with increasing paternal age, which could result from the age-related increase in paternal de novo mutations. This is the case for autism and schizophrenia3. The male-to-female ratio of mutation rate is estimated at about 4-6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo mutations would more frequently come from males, particularly older males4. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why diseases have been identified for a mere 3% of genes in the human genome. Identification of de novo mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining whether the genetic basis of a disease may result in part from de novo mutations, and the molecular approach to establish this link, will be illustrated using autism and schizophrenia as examples.
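The core computational step of the trio-based approach described above is simple: a candidate de novo mutation is a variant observed in the affected child but in neither parent. A minimal sketch (variant keys, genotype strings, and the function name are illustrative assumptions; real pipelines also filter on sequencing quality and confirm candidates by Sanger sequencing):

```python
def candidate_de_novo(child, mother, father):
    """Return variants seen in the child but in neither parent.

    Each argument maps a variant key (chromosome, position, ref, alt) to a
    genotype string. Variants present in either parent are inherited and
    filtered out; the remainder are candidate de novo mutations.
    """
    return {v: gt for v, gt in child.items()
            if v not in mother and v not in father}

child  = {("chr1", 1000, "A", "G"): "0/1", ("chr2", 2000, "C", "T"): "0/1"}
mother = {("chr1", 1000, "A", "G"): "0/1"}
father = {}
print(candidate_de_novo(child, mother, father))
# {('chr2', 2000, 'C', 'T'): '0/1'}
```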
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
Programmed Electrical Stimulation in Mice
Institutions: Baylor College of Medicine (BCM), Baylor College of Medicine (BCM).
Genetically-modified mice have emerged as a preferable animal model to study the molecular mechanisms underlying conduction abnormalities, atrial and ventricular arrhythmias, and sudden cardiac death.1
Intracardiac pacing studies can be performed in mice using a 1.1F octapolar catheter inserted into the jugular vein, and advanced into the right atrium and ventricle. Here, we illustrate the steps involved in performing programmed electrical stimulation in mice. Surface ECG and intracardiac electrograms are recorded simultaneously in the atria, atrioventricular junction, and ventricular myocardium, whereas intracardiac pacing of the atrium is performed using an external stimulator. Thus, programmed electrical stimulation in mice provides unique opportunities to explore molecular mechanisms underlying conduction defects and cardiac arrhythmias.
JoVE Medicine, Issue 39, Arrhythmias, electrophysiology, mouse, programmed electrical stimulation
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages stands the high barrier of entry to the use of multivariate approaches, which has prevented more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Pyrosequencing: A Simple Method for Accurate Genotyping
Institutions: Washington University in St. Louis.
Pharmacogenetic research benefits first-hand from the abundance of information provided by the completion of the Human Genome Project. With such a tremendous amount of data available comes an explosion of genotyping methods. Pyrosequencing® is one of the most thorough yet simple methods to date used to analyze polymorphisms. It also has the ability to identify tri-allelic variants, indels, and short-repeat polymorphisms, along with determining allele percentages for methylation or pooled sample assessment. In addition, there is a standardized control sequence that provides internal quality control. This method has led to rapid and efficient single-nucleotide polymorphism evaluation, including many clinically relevant polymorphisms. The technique and methodology of Pyrosequencing is explained.
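The allele-percentage determination mentioned above reduces, computationally, to comparing peak heights at the polymorphic position of the pyrogram: each allele's signal is proportional to its abundance in the template. A minimal sketch (the function name and peak values are illustrative assumptions, not vendor software):

```python
def allele_percentages(peak_heights):
    """Allele percentages from pyrogram peak heights at a SNP position.

    peak_heights -- dict mapping each allele's nucleotide to its peak
                    height (arbitrary light units); hypothetical values.
    """
    total = sum(peak_heights.values())
    return {base: 100.0 * h / total for base, h in peak_heights.items()}

# A pooled-sample (or methylation) assay where the C peak carries 30% of signal
print(allele_percentages({"C": 30.0, "T": 70.0}))
# {'C': 30.0, 'T': 70.0}
```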
Cellular Biology, Issue 11, Springer Protocols, Pyrosequencing, genotype, polymorphism, SNP, pharmacogenetics, pharmacogenomics, PCR