Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six 3D ultrastructural data sets presented here were obtained by three different imaging approaches: electron tomography of stained, resin-embedded samples, and focused ion beam and serial block-face scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly and heavily stained samples, respectively. Four different segmentation approaches were applied to these data sets: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. For a given combination of data set characteristics, typically one of these four approaches outperforms the others, but depending on the exact weighting of the criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
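The triage idea can be illustrated as a small decision function. The criteria names and thresholds below are illustrative assumptions for the sketch, not values taken from the study:

```python
# Illustrative triage sketch: criteria names and thresholds are
# hypothetical assumptions, not values from the study.

def triage_segmentation(snr, has_characteristic_shape, feature_fraction,
                        needs_quantification):
    """Choose one of the four segmentation approaches described above."""
    if has_characteristic_shape and needs_quantification:
        # Recognizable shapes make automated, custom-designed algorithms viable
        return "automated segmentation + quantitative analysis"
    if snr >= 10 and has_characteristic_shape:
        return "semi-automated segmentation + surface rendering"
    if feature_fraction < 0.01:
        # Sparse features are often fastest to trace manually
        return "manual tracing + surface rendering"
    return "fully manual model building + visualization"
```
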
In Situ Neutron Powder Diffraction Using Custom-made Lithium-ion Batteries
Institutions: University of Sydney, University of Wollongong, Australian Synchrotron, Australian Nuclear Science and Technology Organisation, University of New South Wales.
Li-ion batteries are widely used in portable electronic devices and are considered as promising candidates for higher-energy applications such as electric vehicles.1,2
However, many challenges, such as energy density and battery lifetimes, need to be overcome before this particular battery technology can be widely implemented in such applications.3
We outline a method to address these challenges using in situ neutron powder diffraction (NPD) to probe the crystal structure of electrodes undergoing electrochemical cycling (charge/discharge) in a battery. NPD data help determine the underlying structural mechanism responsible for a range of electrode properties, and this information can direct the development of better electrodes and batteries. We briefly review six types of battery design custom-made for NPD experiments and detail the method to construct the ‘roll-over’ cell that we have successfully used on the high-intensity NPD instrument, WOMBAT, at the Australian Nuclear Science and Technology Organisation (ANSTO). The design considerations and materials used for cell construction are discussed in conjunction with aspects of the actual in situ NPD experiment, and initial directions are presented on how to analyze such complex in situ NPD data.
Physics, Issue 93, In operando, structure-property relationships, electrochemical cycling, electrochemical cells, crystallography, battery performance
Magnetic Tweezers for the Measurement of Twist and Torque
Institutions: Delft University of Technology.
Single-molecule techniques make it possible to investigate the behavior of individual biological molecules in solution in real time. These techniques include so-called force spectroscopy approaches such as atomic force microscopy, optical tweezers, flow stretching, and magnetic tweezers. Amongst these approaches, magnetic tweezers have distinguished themselves by their ability to apply torque while maintaining a constant stretching force. Here, it is illustrated how such a “conventional” magnetic tweezers experimental configuration can, through a straightforward modification of its field configuration to minimize the magnitude of the transverse field, be adapted to measure the degree of twist in a biological molecule. The resulting configuration is termed the freely-orbiting magnetic tweezers. Additionally, it is shown how further modification of the field configuration can yield a transverse field with a magnitude intermediate between that of the “conventional” magnetic tweezers and the freely-orbiting magnetic tweezers, which makes it possible to directly measure the torque stored in a biological molecule. This configuration is termed the magnetic torque tweezers. The accompanying video explains in detail how the conversion of conventional magnetic tweezers into freely-orbiting magnetic tweezers and magnetic torque tweezers can be accomplished, and demonstrates the use of these techniques. These adaptations maintain all the strengths of conventional magnetic tweezers while greatly expanding the versatility of this powerful instrument.
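The principle behind the torque measurement can be illustrated with a minimal sketch (this is not the authors' analysis code): the angular trap stiffness follows from equipartition, k = kBT / var(θ), and the stored torque is k times the mean angular shift of the bead between a reference and a measurement trace. kBT ≈ 4.1 pN·nm at room temperature is assumed:

```python
import math
import statistics

KBT = 4.1  # thermal energy at room temperature, pN*nm

def torque_from_angles(theta_ref, theta_meas):
    """Estimate stored torque from bead angle traces (radians).

    Illustrative sketch: angular stiffness from equipartition,
    k = kBT / var(theta), times the mean angular shift between
    reference and measurement traces. Units: pN*nm.
    """
    k = KBT / statistics.pvariance(theta_ref)          # pN*nm per rad
    shift = statistics.fmean(theta_meas) - statistics.fmean(theta_ref)
    return k * shift
```
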
Bioengineering, Issue 87, magnetic tweezers, magnetic torque tweezers, freely-orbiting magnetic tweezers, twist, torque, DNA, single-molecule techniques
Laboratory Drop Towers for the Experimental Simulation of Dust-aggregate Collisions in the Early Solar System
Institutions: Technische Universität Braunschweig.
For the purpose of investigating the evolution of dust aggregates in the early Solar System, we developed two vacuum drop towers in which fragile dust aggregates with sizes up to ~10 cm and porosities up to 70% can be collided. One of the drop towers is primarily used for very low impact speeds down to below 0.01 m/sec and makes use of a double release mechanism. Collisions are recorded in stereo-view by two high-speed cameras, which fall along the glass vacuum tube in the center-of-mass frame of the two dust aggregates. The other free-fall tower makes use of an electromagnetic accelerator that is capable of gently accelerating dust aggregates to up to 5 m/sec. In combination with the release of another dust aggregate to free fall, collision speeds up to ~10 m/sec can be achieved. Here, two fixed high-speed cameras record the collision events. In both drop towers, the dust aggregates are in free fall during the collision so that they are weightless and match the conditions in the early Solar System.
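The collision speeds quoted above follow from simple free-fall kinematics; a short sketch (g = 9.81 m/s²). Note that with a staggered double release, both aggregates accelerate identically, so the relative (collision) speed stays constant at g·Δt:

```python
G = 9.81  # gravitational acceleration, m/s^2

def relative_speed_double_release(dt):
    """Relative collision speed for two free-falling aggregates released
    dt seconds apart; stays constant at g*dt during the fall."""
    return G * dt

def fall_time(height):
    """Free-fall time from rest over `height` meters: t = sqrt(2h/g)."""
    return (2 * height / G) ** 0.5
```

A release stagger of only 1 ms thus yields a relative speed just below 0.01 m/s, consistent with the lowest collision speeds mentioned above.
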
Physics, Issue 88, astrophysics, planet formation, collisions, granular matter, high-speed imaging, microgravity drop tower
Identification of Protein Interaction Partners in Mammalian Cells Using SILAC-immunoprecipitation Quantitative Proteomics
Institutions: University of Cambridge.
Quantitative proteomics combined with immuno-affinity purification, known as SILAC immunoprecipitation, represents a powerful means for the discovery of novel protein:protein interactions. By allowing accurate relative quantification of protein abundance in both control and test samples, true interactions can easily be distinguished from experimental contaminants. Low-affinity interactions can be preserved through the use of less stringent buffer conditions and remain readily identifiable. This protocol describes the labeling of tissue culture cells with stable isotope labeled amino acids, transfection and immunoprecipitation of an affinity-tagged protein of interest, followed by preparation for submission to a mass spectrometry facility. It then discusses how to analyze and interpret the data returned from the mass spectrometer in order to identify cellular partners interacting with a protein of interest. As an example, this technique is applied to identify proteins binding to the eukaryotic translation initiation factors eIF4AI and eIF4AII.
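The logic for separating true interactors from contaminants rests on the heavy/light (H/L) ratio: specific partners are enriched with the tagged bait, while background binders sit near a ratio of 1. A minimal sketch (the two-fold cutoff and protein names are illustrative assumptions):

```python
import math

def classify_interactors(hl_ratios, fold=2.0):
    """Flag proteins whose heavy/light SILAC ratio exceeds `fold`
    as candidate specific interactors; ratios near 1 indicate
    nonspecific background binders. Cutoff is illustrative."""
    cutoff = math.log2(fold)
    return {protein: ("interactor" if math.log2(ratio) >= cutoff
                      else "background")
            for protein, ratio in hl_ratios.items()}
```
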
Biochemistry, Issue 89, mass spectrometry, tissue culture techniques, isotope labeling, SILAC, Stable Isotope Labeling of Amino Acids in Cell Culture, proteomics, Interactomics, immunoprecipitation, pulldown, eIF4A, GFP, nanotrap, orbitrap
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.

In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
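The minimum-norm estimation listed in the keywords solves an underdetermined inverse problem: given a gain (leadfield) matrix G mapping cortical sources to channel data y, the regularized L2 minimum-norm source estimate is x̂ = Gᵀ(GGᵀ + λI)⁻¹y. A bare-bones NumPy sketch of this formula (λ is an illustrative value; real pipelines also whiten with a noise covariance):

```python
import numpy as np

def minimum_norm_estimate(G, y, lam=1e-2):
    """Regularized L2 minimum-norm source estimate.

    G   : (n_channels, n_sources) gain/leadfield matrix
    y   : (n_channels,) measured EEG at one time point
    lam : Tikhonov regularization parameter (illustrative value)
    """
    n = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n), y)
```
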
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods are currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is being increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, and so suitable instrumentation is available in most institutions; an excellent range of protocols are already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins, and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
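For the simplest case of a single binding event (this is an illustration, not the paper's exact fitting procedure), the melting-temperature shift can be modeled as ΔTm = ΔTm,max·[L]/(Kd + [L]) and fitted by least squares; a grid search keeps the sketch dependency-free:

```python
def tm_shift(conc, kd, dmax):
    """Single-site binding model: delta-Tm = dmax * [L] / (Kd + [L])."""
    return dmax * conc / (kd + conc)

def fit_kd(concs, shifts, kd_grid, dmax_grid):
    """Least-squares grid search over candidate Kd and delta-Tm_max values.

    Illustrative fitting sketch; a real analysis would use a continuous
    optimizer rather than a fixed grid.
    """
    best = None
    for kd in kd_grid:
        for dmax in dmax_grid:
            sse = sum((tm_shift(c, kd, dmax) - s) ** 2
                      for c, s in zip(concs, shifts))
            if best is None or sse < best[0]:
                best = (sse, kd, dmax)
    return best[1], best[2]
```
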
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derivation of complex physiological properties of TATS membrane networks in living myocytes, with high throughput and open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
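The binarization step of the image-processing workflow can be sketched in a few lines; real analyses use dedicated imaging tools, and the fixed threshold here is an illustrative assumption:

```python
def binarize(image, threshold):
    """Threshold a grayscale image (list of pixel rows) into a 0/1 mask.

    Illustrative sketch of the binarization step; real pipelines would
    use adaptive thresholding on confocal/superresolution stacks.
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def tats_density(mask):
    """Fraction of pixels occupied by TATS membrane signal in a binary mask."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total
```
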
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Improved In-gel Reductive β-Elimination for Comprehensive O-linked and Sulfo-glycomics by Mass Spectrometry
Institutions: University of Georgia, Ishikawa Prefectural University.
Separation of proteins by SDS-PAGE followed by in-gel proteolytic digestion of resolved protein bands has produced high-resolution proteomic analysis of biological samples. Similar approaches, that would allow in-depth analysis of the glycans carried by glycoproteins resolved by SDS-PAGE, require special considerations in order to maximize recovery and sensitivity when using mass spectrometry (MS) as the detection method. A major hurdle to be overcome in achieving high-quality data is the removal of gel-derived contaminants that interfere with MS analysis. The sample workflow presented here is robust, efficient, and eliminates the need for in-line HPLC clean-up prior to MS. Gel pieces containing target proteins are washed in acetonitrile, water, and ethyl acetate to remove contaminants, including polymeric acrylamide fragments. O-linked glycans are released from target proteins by in-gel reductive β-elimination and recovered through robust, simple clean-up procedures. An advantage of this workflow is that it improves sensitivity for detecting and characterizing sulfated glycans. These procedures produce an efficient separation of sulfated permethylated glycans from non-sulfated (sialylated and neutral) permethylated glycans by a rapid phase-partition prior to MS analysis, and thereby enhance glycomic and sulfoglycomic analyses of glycoproteins resolved by SDS-PAGE.
Chemistry, Issue 93, glycoprotein, glycosylation, in-gel reductive β-elimination, O-linked glycan, sulfated glycan, mass spectrometry, protein ID, SDS-PAGE, glycomics, sulfoglycomics
Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat
Institutions: University of Wisconsin-Madison.
Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex, a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) of the orbicularis oculi muscle, elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock, relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high-probability (100% cue-contingent shock; certain) threat cues, whereas anxiety is measured via startle potentiation to low-probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research. Startle potentiation during certain and uncertain threat provides an objective measure of negative affect and distinct emotional states (fear, anxiety) for use in research on psychopathology, substance use/abuse, and affective science broadly. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion.
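The potentiation score described above reduces to a difference of means between threat and no-threat trials; a minimal sketch with hypothetical EMG magnitudes:

```python
def startle_potentiation(threat_responses, no_threat_responses):
    """Startle potentiation: mean EMG startle magnitude during threat-cue
    trials minus the mean during matched no-threat-cue trials.
    Computed separately for certain (100%) and uncertain (20%) threat cues."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(threat_responses) - mean(no_threat_responses)
```
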
Behavior, Issue 91, startle, electromyography, shock, addiction, uncertainty, fear, anxiety, humans, psychophysiology, translational
Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults
Institutions: Emory University School of Medicine, Brigham and Women's Hospital and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and in other populations with balance impairments. It is composed of very simple step elements and involves movement initiation and cessation, multi-directional perturbations, and varied speeds and rhythms. Focus on foot placement, whole-body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango’s demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology for disseminating the adapted tango teaching methods to dance instructor trainees and for implementation of adapted tango by the trainees in the community for older adults and individuals with Parkinson’s Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, tandem stance, Berg Balance Scale, gait speed, and 30 sec chair stand), as well as safety and fidelity of the program, is maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls
Automating ChIP-seq Experiments to Generate Epigenetic Profiles on 10,000 HeLa Cells
Institutions: Diagenode S.A., Diagenode Inc..
Chromatin immunoprecipitation followed by next-generation sequencing (ChIP-seq) is a technique of choice for studying protein-DNA interactions. ChIP-seq has been used for mapping protein-DNA interactions and histone modifications. The procedure is tedious and time-consuming, and one of its major limitations is the requirement for large amounts of starting material, usually millions of cells. Automation of chromatin immunoprecipitation assays is possible when the procedure is based on the use of magnetic beads. Automated protocols for chromatin immunoprecipitation and library preparation have been specifically designed on a commercially available robotic liquid handling system dedicated mainly to automating epigenetic assays. We first validated automated ChIP-seq assays using antibodies directed against various histone modifications, and then optimized the automated protocols to perform chromatin immunoprecipitation and library preparation starting from low cell numbers. The goal of these experiments is to provide a valuable tool for future epigenetic analysis of specific cell types, sub-populations, and biopsy samples.
Molecular Biology, Issue 94, Automation, chromatin immunoprecipitation, low DNA amounts, histone antibodies, sequencing, library preparation
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
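The data trail described above rests on time-stamped event records. The paper's analysis code is MATLAB-based, but the basic operation of extracting inter-event intervals from such records can be sketched in Python (the event code "HE" for head entries is a hypothetical label):

```python
def head_entry_intervals(events, code="HE"):
    """Inter-event intervals between successive head-entry events.

    `events` is a list of (timestamp, event_code) tuples; the code "HE"
    is a hypothetical label for IR-beam head-entry detections.
    """
    times = [t for t, c in events if c == code]
    return [b - a for a, b in zip(times, times[1:])]
```
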
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research questions. However, many are new to the topic, which poses inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve as a practical training tool for researchers embarking on phylogenetic studies, and also as an educational resource that could be incorporated into a classroom or teaching lab.
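As a small taste of the distance computations underlying tree building (not a substitute for the maximum-likelihood and Bayesian methods discussed above), the proportion of differing sites between two aligned sequences can be corrected for multiple substitutions with the Jukes-Cantor model, d = -(3/4)·ln(1 - 4p/3):

```python
import math

def p_distance(a, b):
    """Proportion of differing sites between two aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

def jc69_distance(p):
    """Jukes-Cantor (JC69) correction of a p-distance for multiple hits."""
    return -0.75 * math.log(1 - 4 * p / 3)
```
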
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Genomic MRI - a Public Resource for Studying Sequence Patterns within Genomic DNA
Institutions: University of Toledo Health Science Campus.
Non-coding genomic regions in complex eukaryotes, including intergenic areas, introns, and untranslated segments of exons, are profoundly non-random in their nucleotide composition and consist of a complex mosaic of sequence patterns. These patterns include so-called Mid-Range Inhomogeneity (MRI) regions -- sequences 30-10000 nucleotides in length that are enriched by a particular base or combination of bases (e.g. (G+T)-rich, purine-rich, etc.). MRI regions are associated with unusual (non-B-form) DNA structures that are often involved in regulation of gene expression, recombination, and other genetic processes (Fedorova & Fedorov 2010). The existence of a strong fixation bias within MRI regions against mutations that tend to reduce their sequence inhomogeneity additionally supports the functionality and importance of these genomic sequences (Prakash et al.).
Here we demonstrate a freely available Internet resource -- the Genomic MRI program package -- designed for computational analysis of genomic sequences in order to find and characterize various MRI patterns within them (Bechtel et al. 2008). This package also allows generation of randomized sequences with various properties and levels of correspondence to the natural input DNA sequences. The main goal of this resource is to facilitate examination of vast regions of non-coding DNA that are still scarcely investigated and await thorough exploration and recognition.
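The core scan performed by such a tool can be sketched as a sliding-window enrichment test; the window size and threshold below are illustrative assumptions, not Genomic MRI defaults:

```python
def mri_windows(seq, bases="GT", window=50, threshold=0.7):
    """Start positions of windows whose content of `bases` meets `threshold`.

    Sketches the idea of detecting Mid-Range Inhomogeneity regions
    (e.g. (G+T)-rich stretches); parameters are illustrative, not
    the Genomic MRI program's defaults.
    """
    hits = []
    for i in range(len(seq) - window + 1):
        win = seq[i:i + window]
        if sum(c in bases for c in win) / window >= threshold:
            hits.append(i)
    return hits
```
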
Genetics, Issue 51, bioinformatics, computational biology, genomics, non-randomness, signals, gene regulation, DNA conformation
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All predictions are tagged with a confidence score that indicates how accurate they are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users on the basis of experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Flexural Rigidity Measurements of Biopolymers Using Gliding Assays
Institutions: Lawrence University.
Microtubules are cytoskeletal polymers that play a role in cell division, cell mechanics, and intracellular transport. Each of these functions requires microtubules that are stiff and straight enough to span a significant fraction of the cell diameter. As a result, the microtubule persistence length, a measure of stiffness, has been actively studied for the past two decades1. Nonetheless, open questions remain: short microtubules are 10-50 times less stiff than long microtubules2-4, and even long microtubules have measured persistence lengths that vary by an order of magnitude5-9.

Here, we present a method to measure microtubule persistence length. The method is based on a kinesin-driven microtubule gliding assay10. By combining sparse fluorescent labeling of individual microtubules with single-particle tracking of individual fluorophores attached to the microtubule, the gliding trajectories of single microtubules are tracked with nanometer-level precision. The persistence length of the trajectories is the same as the persistence length of the microtubule under the conditions used11. An automated tracking routine is used to create microtubule trajectories from fluorophores attached to individual microtubules, and the persistence length of each trajectory is calculated using routines written in IDL.

This technique is rapidly implementable and capable of measuring the persistence length of 100 microtubules in one day of experimentation. The method can be extended to measure persistence length under a variety of conditions, including persistence length as a function of position along microtubules. Moreover, the analysis routines can be extended to myosin-based actin gliding assays, to measure the persistence length of actin filaments as well.
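The paper's trajectory analysis is written in IDL; the underlying estimate can be sketched in Python. For a 2D trajectory, the tangent-angle correlation decays as ⟨cos θ(s)⟩ = exp(−s/(2Lp)), so for a trajectory sampled at uniform arc-length spacing ds (uniform spacing is an assumption of this sketch), Lp ≈ −ds / (2 ln⟨cos Δθ⟩):

```python
import math

def persistence_length(points, ds):
    """Estimate persistence length Lp from a 2D trajectory.

    Sketch under the 2D convention <cos theta(s)> = exp(-s / (2 Lp)),
    assuming the (x, y) points are sampled at uniform arc-length
    spacing `ds`. Returns Lp in the same units as ds.
    """
    # Tangent angle of each segment along the trajectory
    angles = [math.atan2(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    # Mean cosine of the angle change between successive tangents
    cosd = [math.cos(b - a) for a, b in zip(angles, angles[1:])]
    mean_cos = sum(cosd) / len(cosd)
    return -ds / (2 * math.log(mean_cos))
```
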
Biophysics, Issue 69, Bioengineering, Physics, Molecular Biology, Cellular Biology, microtubule, persistence length, flexural rigidity, gliding assay, mechanics, cytoskeleton, actin
Measuring Cation Transport by Na,K- and H,K-ATPase in Xenopus Oocytes by Atomic Absorption Spectrophotometry: An Alternative to Radioisotope Assays
Institutions: Technical University of Berlin, Oregon Health & Science University.
Whereas cation transport by the electrogenic membrane transporter Na+,K+-ATPase can be measured by electrophysiology, the electroneutrally operating gastric H+,K+-ATPase is more difficult to investigate. Many transport assays utilize radioisotopes to achieve a sufficient signal-to-noise ratio; however, the necessary safety measures impose severe restrictions regarding human exposure or assay design. Furthermore, ion transport across cell membranes is critically influenced by the membrane potential, which is not straightforwardly controlled in cell culture or in proteoliposome preparations. Here, we make use of the outstanding sensitivity of atomic absorption spectrophotometry (AAS) towards trace amounts of chemical elements to measure Rb+ transport by the Na+,K+- or gastric H+,K+-ATPase in single cells. Using Xenopus oocytes as the expression system, we determine the amount of Rb+ transported into the cells by measuring samples of single-oocyte homogenates in an AAS device equipped with a transversely heated graphite atomizer (THGA) furnace, which is loaded from an autosampler. Since the background of unspecific Rb+ uptake into control oocytes, or during application of ATPase-specific inhibitors, is very small, it is possible to implement complex kinetic assay schemes involving a large number of experimental conditions simultaneously, or to compare the transport capacity and kinetics of site-specifically mutated transporters with high precision. Furthermore, since cation uptake is determined on single cells, the flux experiments can be carried out in combination with two-electrode voltage-clamping (TEVC) to achieve accurate control of the membrane potential and current. This allowed, e.g., quantitative determination of the 3Na+ transport stoichiometry of the Na+,K+-ATPase, and enabled for the first time investigation of the voltage dependence of cation transport by the electroneutrally operating gastric H+,K+-ATPase. In principle, the assay is not limited to K+-transporting membrane proteins, but may work equally well to address the activity of heavy or transition metal transporters, or uptake of chemical elements by endocytotic processes.
Biochemistry, Issue 72, Chemistry, Biophysics, Bioengineering, Physiology, Molecular Biology, electrochemical processes, physical chemistry, spectrophotometry (application), spectroscopic chemical analysis (application), life sciences, temperature effects (biological, animal and plant), Life Sciences (General), Na+,K+-ATPase, H+,K+-ATPase, Cation Uptake, P-type ATPases, Atomic Absorption Spectrophotometry (AAS), Two-Electrode Voltage-Clamp, Xenopus Oocytes, Rb+ Flux, Transversely Heated Graphite Atomizer (THGA) Furnace, electrophysiology, animal model
Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data
Institutions: The Feinstein Institute for Medical Research.
The scaled subprofile model (SSM) [1-4] is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data [2,5,6]. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors [7,8]. Using logistic regression analysis of subject scores (i.e., pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e., composite networks with improved discrimination of patients from healthy control subjects [5,6]. Cross-validation within the derivation set can be performed using bootstrap resampling techniques [9]. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets [10]. Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation [11]. These standardized values can in turn be used to assist in differential diagnosis [12,13] and to assess disease progression and treatment effects at the network level [7,14-16]. We present an example of the application of this methodology to FDG PET data of Parkinson's disease patients and normal controls, using our in-house software to derive a characteristic covariance pattern biomarker of disease.
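The SSM preprocessing chain described above (log transform, removal of subject and group means, PCA of the resulting residual profiles) can be sketched with NumPy. This is a minimal illustration under the assumption of an already-masked subjects × voxels intensity matrix; the function name is ours, and the authors' in-house software includes further steps (component selection, logistic combination of components) not shown here.

```python
import numpy as np

def ssm_patterns(images, n_components=2):
    """Sketch of scaled-subprofile-model preprocessing and PCA.

    images: (n_subjects, n_voxels) array of strictly positive intensities.
    Returns (patterns, scores): spatial covariance patterns over voxels
    and each subject's expression score on those patterns.
    """
    logged = np.log(images)  # log transform: global scaling becomes additive
    # remove each subject's global mean, then the group mean image,
    # leaving the subject residual profile (SRP) matrix
    centered = logged - logged.mean(axis=1, keepdims=True)
    srp = centered - centered.mean(axis=0, keepdims=True)
    # PCA of the SRP matrix via singular value decomposition
    u, s, vt = np.linalg.svd(srp, full_matrices=False)
    patterns = vt[:n_components]                     # voxel-space patterns (GIS)
    scores = u[:, :n_components] * s[:n_components]  # subject expression scores
    return patterns, scores
```

Each subject's score is simply the projection of that subject's residual profile onto the corresponding pattern, which is what makes forward application of a fixed pattern to new subjects straightforward.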
Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A Computer-assisted Multi-electrode Patch-clamp System
Institutions: Ecole Polytechnique Federale de Lausanne.
The patch-clamp technique is today the most well-established method for recording electrical activity from individual neurons or their subcellular compartments. Nevertheless, achieving stable recordings, even from individual cells, remains a time-consuming procedure of considerable complexity. Automation of many steps, in conjunction with efficient information display, can greatly assist experimentalists in performing a larger number of recordings with greater reliability and in less time. In order to achieve large-scale recordings, we concluded that the most efficient approach is not to fully automate the process but to simplify the experimental steps and reduce the chances of human error, while efficiently incorporating the experimenter's experience and visual feedback. With these goals in mind, we developed a computer-assisted system that centralizes all the controls necessary for a multi-electrode patch-clamp experiment in a single interface, a commercially available wireless gamepad, while displaying experiment-related information and guidance cues on the computer screen. Here we describe the different components of the system, which allowed us to reduce the time required for achieving the recording configuration and to substantially increase the chances of successfully recording large numbers of neurons simultaneously.
Neuroscience, Issue 80, Patch-clamp, automatic positioning, whole-cell, neuronal recording, in vitro, multi-electrode
Generation of Comprehensive Thoracic Oncology Database - Tool for Translational Research
Institutions: University of Chicago, University of Chicago, Northshore University Health Systems, University of Chicago, University of Chicago, University of Chicago.
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data, available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined, and their descriptions were written in a standard operating manual to ensure consistency of data annotation. Using one protocol for prospective tissue banking and another for retrospective banking, tumor and normal tissue samples were collected from patients who consented to these protocols. Clinical information such as demographics, cancer characterization, and treatment plans for these patients was abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and linked to the clinical information for the patients described within it. The data from each table were linked using the relationships function in Microsoft Access, allowing the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
Medicine, Issue 47, Database, Thoracic oncology, Bioinformatics, Biorepository, Microsoft Access, Proteomics, Genomics
Basics of Multivariate Analysis in Neuroimaging Data
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention, as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques [1,5,6,7,8,9]. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also yield greater statistical power than univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, but with potentially greater statistical power and better reproducibility checks. In contrast to these advantages stands the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination, and that researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction to multivariate techniques for the novice. A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
Measuring the Strength of Mice
Institutions: University of Oxford.
The inverted screen test, devised and published in 1964, is a test of muscle strength using all four limbs. Most normal mice easily achieve the maximum score on this task; it is a quick but insensitive gross screen, and the weights test described in this article provides a finer measure of muscular strength.
There are also several strain gauge-based pieces of apparatus available commercially that will provide more graded data than the inverted screen test, but their cost may put them beyond the reach of many laboratories that do not specialize in strength testing. Hence, in 2000, a cheap and simple apparatus was devised by the author. It consists of a series of chain links of increasing length, attached to a "fur collector": a ball of fine wire mesh sold for preventing limescale build-up in hard-water areas. An accidental observation revealed that mice could grip these very tightly, so they proved ideal as a grip point for a weight-lifting apparatus. A common fault with commercial strength meters is that the bar or other grip feature is not thin enough for mice to exert a maximum grip. As a general rule, the thinner the wire or bar, the better a mouse can grip with its small claws.
This is a pure test of strength, although, as for any test, motivational factors could potentially play a role. The use of these collectors, however, seems to minimize motivational problems, as motivation appears to be very high for most normal young adult mice.
Medicine, Issue 76, Neuroscience, Neurobiology, Anatomy, Physiology, Behavior, Psychology, Mice, strength, motor, inverted screen, weight lifting, animal model
Bringing the Visible Universe into Focus with Robo-AO
Institutions: California Institute of Technology, California Institute of Technology, University of Toronto, Inter-University Centre for Astronomy & Astrophysics, Observatories of the Carnegie Institution for Science, Weizmann Institute of Science.
The angular resolution of ground-based optical telescopes is limited by the degrading effects of the turbulent atmosphere. In the absence of an atmosphere, the angular resolution of a typical telescope is limited only by diffraction, i.e., the wavelength of interest, λ, divided by the size of its primary mirror's aperture, D. For example, the Hubble Space Telescope (HST), with a 2.4-m primary mirror, has an angular resolution at visible wavelengths of ~0.04 arc seconds. The atmosphere is composed of air at slightly different temperatures, and therefore different indices of refraction, constantly mixing. Light waves are bent as they pass through the inhomogeneous atmosphere. When a telescope on the ground focuses these light waves, instantaneous images appear fragmented, changing as a function of time. As a result, long-exposure images acquired using ground-based telescopes - even telescopes with four times the diameter of HST - appear blurry and have an angular resolution of roughly 0.5 to 1.5 arc seconds at best.
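The λ/D diffraction limit quoted above is easy to check numerically. The short helper below is our own illustration (not part of Robo-AO) and assumes a representative visible wavelength of 550 nm for the HST figure.

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution ~ lambda / D, converted
    from radians to arc seconds (1 rad ~ 206,265 arcsec)."""
    return (wavelength_m / aperture_m) * (180.0 / math.pi) * 3600.0

# HST: 2.4-m primary mirror at 550 nm (visible light)
print(round(diffraction_limit_arcsec(550e-9, 2.4), 3))  # -> 0.047
```

This reproduces the ~0.04 arc second resolution cited for HST; the same function gives ~0.08 arc seconds for the 1.5-m Robo-AO telescope at the same wavelength, consistent with the ~0.1 arc second figure quoted later.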
Astronomical adaptive-optics systems compensate for the effects of atmospheric turbulence. First, the shape of the incoming non-planar wave is determined using measurements of a nearby bright star by a wavefront sensor. Next, an element in the optical system, such as a deformable mirror, is commanded to correct the shape of the incoming light wave. Additional corrections are made at a rate sufficient to keep up with the dynamically changing atmosphere through which the telescope looks, ultimately producing diffraction-limited images.
The fidelity of the wavefront sensor measurement is based upon how well the incoming light is spatially and temporally sampled [1]. Finer sampling requires brighter reference objects. While the brightest stars can serve as reference objects for imaging targets from several to tens of arc seconds away in the best conditions, most interesting astronomical targets do not have sufficiently bright stars nearby. One solution is to focus a high-power laser beam in the direction of the astronomical target to create an artificial reference of known shape, also known as a 'laser guide star'. The Robo-AO laser adaptive optics system [2,3] employs a 10-W ultraviolet laser focused at a distance of 10 km to generate a laser guide star. Wavefront sensor measurements of the laser guide star drive the adaptive optics correction, resulting in diffraction-limited images with an angular resolution of ~0.1 arc seconds on a 1.5-m telescope.
Physics, Issue 72, Astronomy, Mechanical Engineering, Astrophysics, Optics, Adaptive optics, lasers, wavefront sensing, robotics, stars, galaxies, imaging, supernova, telescopes
Making MR Imaging Child's Play - Pediatric Neuroimaging Protocol, Guidelines and Procedure
Institutions: Children’s Hospital Boston, University of Zurich, Harvard, Harvard Medical School.
Within the last decade there has been an increase in the use of structural and functional magnetic resonance imaging (fMRI) to investigate the neural basis of human perception, cognition, and behavior [1,2]. Moreover, this non-invasive imaging method has grown into a tool for clinicians and researchers to explore typical and atypical brain development. Although advances in neuroimaging tools and techniques are apparent, (f)MRI in young pediatric populations remains relatively infrequent [2]. Practical as well as technical challenges when imaging children present clinicians and research teams with a unique set of problems [2,3]. To name just a few, child participants are challenged by a need for motivation, alertness, and cooperation. Anxiety may be an additional factor to be addressed. Researchers and clinicians also need to consider time constraints, movement restriction, scanner background noise, and unfamiliarity with the MR scanner environment [2,4-10]. A progressive use of functional and structural neuroimaging in younger age groups, however, could further add to our understanding of brain development. As an example, several research groups are currently working towards early detection of developmental disorders, potentially even before children present the associated behavioral characteristics (e.g., [11]). Various strategies and techniques have been reported as means to ensure the comfort and cooperation of young children during neuroimaging sessions. Play therapy [12], behavioral approaches [13-18], simulation [19], the use of mock scanner areas [20,21], basic relaxation [22], and a combination of these techniques [23] have all been shown to improve participant compliance and thus MRI data quality. Even more importantly, these strategies have been proven to increase the comfort of the families and children involved [12]. One of the main advantages of such techniques for clinical practice is the possibility of avoiding sedation or general anesthesia (GA) as a way to manage children's compliance during MR imaging sessions [19,20]. In the current video report, we present a pediatric neuroimaging protocol with guidelines and procedures that have proven successful to date in young children.
Neuroscience, Issue 29, fMRI, imaging, development, children, pediatric neuroimaging, cognitive development, magnetic resonance imaging, pediatric imaging protocol, patient preparation, mock scanner