Researchers across a wide range of disciplines are applying phylogenetics to their research questions. Many, however, are new to the topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide to similarity search tools, via both online interfaces and local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies, and as an educational resource that could be incorporated into a classroom or teaching lab.
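The step-by-step pipeline described above can be sketched as an ordered set of command lines. The specific tools (BLAST+, MAFFT, ModelTest-NG, RAxML-NG, MrBayes) and flags below are illustrative choices under assumed file names, not the only options and not a prescription from the article:

```python
# A minimal sketch of a phylogenetics command-line pipeline.
# Tool names, flags, and file names are illustrative examples.

def build_pipeline(query_fasta, out_prefix):
    """Return the ordered shell commands for a basic phylogenetics pipeline."""
    hits = f"{out_prefix}_hits.fasta"   # homolog sequences retrieved from the BLAST report
    aln = f"{out_prefix}_aln.fasta"
    steps = [
        # 1. Similarity search to gather homologs (local BLAST+ executable).
        f"blastp -query {query_fasta} -db nr -outfmt 6 -out {out_prefix}_blast.tsv",
        # 2. Multiple sequence alignment of the collected homologs.
        f"mafft --auto {hits} > {aln}",
        # 3. Determine the best-fit model of evolution for the alignment.
        f"modeltest-ng -i {aln} -d aa",
        # 4. Maximum-likelihood tree inference (model as chosen in step 3).
        f"raxml-ng --msa {aln} --model LG+G --prefix {out_prefix}",
        # 5. Bayesian inference as an independent reconstruction criterion.
        f"mb {out_prefix}.nex",
    ]
    return steps

cmds = build_pipeline("query.fasta", "myrun")
```

The point of the sketch is the ordering: search, align, model-select, then infer under both criteria; any equivalent tools can be substituted at each stage.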
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high-throughput, open-access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
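At its core, skeleton-based quantification of a tubule network reduces to bookkeeping on a binarized image. The toy calculation below is not the authors' implementation (real analyses use dedicated image-processing software); it only illustrates, for an already-skeletonized binary image, how network length and branch points can be counted:

```python
# Toy quantification of a binary skeleton image (lists of 0/1 rows).
# Uses 4-connectivity; a branch point is a skeleton pixel with >2 neighbors.

def skeleton_stats(skel):
    """Return (skeleton length in pixels, number of branch points)."""
    rows, cols = len(skel), len(skel[0])
    length = 0
    branch_points = 0
    for r in range(rows):
        for c in range(cols):
            if not skel[r][c]:
                continue
            length += 1
            neighbors = sum(
                skel[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if neighbors > 2:
                branch_points += 1
    return length, branch_points

# A tiny cross-shaped "tubule" skeleton: 9 pixels, 1 branch point.
skeleton = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
length, branches = skeleton_stats(skeleton)
```

Per-component statistics of this kind (total length, branch density) are what allow the direct network comparisons between VMs, AMs, and disease models described above.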
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational wave observatories, whose sensitivity to gravitational wave signals is expected to be limited, in the most sensitive frequency band, by atomic vibrations of their mirror masses. One promising approach to overcoming this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light.
We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with conventional sensing and control techniques that are known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
High-throughput Fluorometric Measurement of Potential Soil Extracellular Enzyme Activities
Institutions: Colorado State University, Oak Ridge National Laboratory, University of Colorado.
Microbes in soils and other environments produce extracellular enzymes to depolymerize and hydrolyze organic macromolecules so that they can be assimilated for energy and nutrients. Measuring soil microbial enzyme activity is crucial to understanding soil ecosystem functional dynamics. The general concept of the fluorescence enzyme assay is that synthetic C-, N-, or P-rich substrates bound with a fluorescent dye are added to soil samples. When intact, the labeled substrates do not fluoresce. Enzyme activity is measured as the increase in fluorescence as the fluorescent dyes are cleaved from their substrates, which allows them to fluoresce. Enzyme measurements can be expressed in units of molarity or activity. To perform this assay, soil slurries are prepared by combining soil with a pH buffer. The pH buffer (typically 50 mM sodium acetate or 50 mM Tris) is chosen for its particular acid dissociation constant (pKa) to best match the soil sample pH. The soil slurries are inoculated with a nonlimiting amount of fluorescently labeled (i.e., C-, N-, or P-rich) substrate. Using soil slurries in the assay serves to minimize limitations on enzyme and substrate diffusion. Therefore, this assay controls for differences in substrate limitation, diffusion rates, and soil pH conditions, thereby detecting potential enzyme activity rates as a function of the difference in enzyme concentrations (per sample).
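Converting the measured fluorescence increase into a potential activity follows the general form used in fluorometric soil enzyme assays: correct for quenching and blanks, convert fluorescence to product amount via a standard curve, then normalize to slurry volume, incubation time, and soil dry mass. The function below is a hedged sketch of that arithmetic; the variable names, coefficients, and the example numbers are illustrative assumptions, not values from this protocol:

```python
# Illustrative activity calculation for a fluorometric enzyme assay.
# All parameter values in the example call are made up for demonstration.

def potential_activity(sample_fluor, blank_fluor, quench_coef,
                       emission_coef, buffer_vol_ml, well_vol_ml,
                       incubation_hr, soil_dry_g):
    """Potential enzyme activity in nmol per g dry soil per hour."""
    net = sample_fluor / quench_coef - blank_fluor   # correct for soil quenching and blanks
    nmol = net / emission_coef                       # fluorescence -> nmol (standard curve slope)
    scale = buffer_vol_ml / well_vol_ml              # scale one well up to the whole slurry
    return nmol * scale / (incubation_hr * soil_dry_g)

act = potential_activity(
    sample_fluor=1200.0, blank_fluor=200.0, quench_coef=1.0,
    emission_coef=100.0,            # fluorescence units per nmol product
    buffer_vol_ml=91.0, well_vol_ml=0.2,
    incubation_hr=3.0, soil_dry_g=1.0,
)
```

With these example inputs, the net fluorescence of 1,000 units corresponds to 10 nmol in the well, scaled by the slurry-to-well volume ratio and divided by time and soil mass.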
Fluorescence enzyme assays are typically more sensitive than spectrophotometric (i.e., colorimetric) assays, but can suffer from interference caused by impurities and from the instability of many fluorescent compounds when exposed to light, so caution is required when handling fluorescent substrates. Likewise, this method only assesses potential enzyme activities under laboratory conditions when substrates are not limiting. Caution should be used when interpreting data representing cross-site comparisons with differing temperatures or soil types, as in situ soil type and temperature can influence enzyme kinetics.
Environmental Sciences, Issue 81, Ecological and Environmental Phenomena, Environment, Biochemistry, Environmental Microbiology, Soil Microbiology, Ecology, Eukaryota, Archaea, Bacteria, Soil extracellular enzyme activities (EEAs), fluorometric enzyme assays, substrate degradation, 4-methylumbelliferone (MUB), 7-amino-4-methylcoumarin (MUC), enzyme temperature kinetics, soil
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
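The automated advance from protocol to protocol described above is, in essence, a small state machine: the data-analysis code checks each subject's progress against a criterion and, when it is met, tells the protocol-control code to move that subject on. The sketch below is a toy illustration of that logic only; the protocol names and criterion are illustrative, and this is not the published MATLAB code:

```python
# Toy sketch of automated protocol advancement for individual subjects.
# Protocol names and the trial-count criterion are illustrative.

PROTOCOLS = ["matching", "autoshaping", "hopper_switching"]

def advance(subject):
    """Move a subject to the next protocol once its criterion is met."""
    if subject["trials"] >= subject["criterion"]:
        i = PROTOCOLS.index(subject["protocol"])
        if i + 1 < len(PROTOCOLS):
            subject["protocol"] = PROTOCOLS[i + 1]
            subject["trials"] = 0   # restart the trial count in the new protocol
    return subject

mouse = {"protocol": "matching", "trials": 120, "criterion": 100}
mouse = advance(mouse)
```

Because the check runs automatically each time the data are harvested, no handling or manual intervention is needed to move a mouse through the battery.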
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Design and Operation of a Continuous 13C and 15N Labeling Chamber for Uniform or Differential, Metabolic and Structural, Plant Isotope Labeling
Institutions: Colorado State University, USDA-ARS, Colorado State University.
Tracing rare stable isotopes from plant material through the ecosystem provides the most sensitive information about ecosystem processes, from CO2 fluxes and soil organic matter formation to small-scale stable-isotope biomarker probing. Coupling multiple stable isotopes, such as 13C with 15N, 18O, or 2H, has the potential to reveal even more information about complex stoichiometric relationships during biogeochemical transformations. Isotope labeled plant material has been used in various studies of litter decomposition and soil organic matter formation1-4. From these and other studies, however, it has become apparent that structural components of plant material behave differently than metabolic components (i.e., leachable low molecular weight compounds) in terms of microbial utilization and long-term carbon storage5-7. The ability to study structural and metabolic components separately provides a powerful new tool for advancing the forefront of ecosystem biogeochemical studies. Here we describe a method for producing 13C and 15N labeled plant material that is either uniformly labeled throughout the plant or differentially labeled in structural and metabolic plant components.
We present the construction and operation of a continuous 13C and 15N labeling chamber that can be modified to meet various research needs. Uniformly labeled plant material is produced by continuous labeling from seedling to harvest, while differential labeling is achieved by removing the growing plants from the chamber weeks prior to harvest. Representative results from growing Andropogon gerardii Kaw demonstrate the system's ability to efficiently label plant material at the targeted levels. Through this method we have produced plant material with a 4.4 atom% 13C and 6.7 atom% 15N uniform plant label, or material that is differentially labeled by up to 1.29 atom% 13C and 0.56 atom% 15N in its metabolic and structural components (hot water extractable and hot water residual components, respectively). Challenges lie in maintaining proper temperature, humidity, CO2 concentration, and light levels in an airtight 13C atmosphere for successful plant production. This chamber description represents a useful research tool to effectively produce uniformly or differentially multi-isotope labeled plant material for use in experiments on ecosystem biogeochemical cycling.
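The atom% figures quoted above are simple ratios: atom% is the percentage of all atoms of an element that are the heavy isotope, and a differential label is the difference in atom% between the metabolic (hot-water-extractable) and structural (residual) fractions. The worked example below uses made-up atom counts chosen only to reproduce the reported magnitudes:

```python
# atom% bookkeeping for isotope-labeled plant material.
# The atom counts below are illustrative, not measurements.

def atom_percent(heavy, light):
    """atom% = heavy atoms / total atoms x 100."""
    return 100.0 * heavy / (heavy + light)

# 44 13C atoms per 1,000 total C atoms gives a 4.4 atom% uniform label:
uniform_13c = atom_percent(44, 956)

# A differential label is the metabolic-minus-structural difference, e.g.
# 2.5 atom% in the extractable fraction vs. 1.21 atom% in the residue:
differential = atom_percent(25.0, 975.0) - atom_percent(12.1, 987.9)
```

The same arithmetic applies to 15N; only the element (and the mass spectrometry measurement behind the atom counts) changes.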
Environmental Sciences, Issue 83, 13C, 15N, plant, stable isotope labeling, Andropogon gerardii, metabolic compounds, structural compounds, hot water extraction
A Strategy for Sensitive, Large Scale Quantitative Metabolomics
Institutions: Cornell University, Cornell University.
Metabolite profiling has been a valuable asset in the study of metabolism in health and disease. However, current platforms have different limiting factors, such as labor intensive sample preparations, low detection limits, slow scan speeds, intensive method optimization for each metabolite, and the inability to measure both positively and negatively charged ions in single experiments. Therefore, a novel metabolomics protocol could advance metabolomics studies. Amide-based hydrophilic chromatography enables polar metabolite analysis without any chemical derivatization. High resolution MS using the Q-Exactive (QE-MS) has improved ion optics, increased scan speeds (256 msec at resolution 70,000), and has the capability of carrying out positive/negative switching. Using a cold methanol extraction strategy, and coupling an amide column with QE-MS enables robust detection of 168 targeted polar metabolites and thousands of additional features simultaneously. Data processing is carried out with commercially available software in a highly efficient way, and unknown features extracted from the mass spectra can be queried in databases.
Chemistry, Issue 87, high-resolution mass spectrometry, metabolomics, positive/negative switching, low mass calibration, Orbitrap
Polysome Fractionation and Analysis of Mammalian Translatomes on a Genome-wide Scale
Institutions: McGill University, Karolinska Institutet, McGill University.
mRNA translation plays a central role in the regulation of gene expression and represents the most energy consuming process in mammalian cells. Accordingly, dysregulation of mRNA translation is considered to play a major role in a variety of pathological states including cancer. Ribosomes also host chaperones, which facilitate folding of nascent polypeptides, thereby modulating function and stability of newly synthesized polypeptides. In addition, emerging data indicate that ribosomes serve as a platform for a repertoire of signaling molecules, which are implicated in a variety of post-translational modifications of newly synthesized polypeptides as they emerge from the ribosome, and/or components of translational machinery. Herein, a well-established method of ribosome fractionation using sucrose density gradient centrifugation is described. In conjunction with the in-house developed “anota” algorithm this method allows direct determination of differential translation of individual mRNAs on a genome-wide scale. Moreover, this versatile protocol can be used for a variety of biochemical studies aiming to dissect the function of ribosome-associated protein complexes, including those that play a central role in folding and degradation of newly synthesized polypeptides.
Biochemistry, Issue 87, Cells, Eukaryota, Nutritional and Metabolic Diseases, Neoplasms, Metabolic Phenomena, Cell Physiological Phenomena, mRNA translation, ribosomes, protein synthesis, genome-wide analysis, translatome, mTOR, eIF4E, 4E-BP1
The Use of Magnetic Resonance Spectroscopy as a Tool for the Measurement of Bi-hemispheric Transcranial Electric Stimulation Effects on Primary Motor Cortex Metabolism
Institutions: University of Montréal, McGill University, University of Minnesota.
Transcranial direct current stimulation (tDCS) is a neuromodulation technique that has been increasingly used over the past decade in the treatment of neurological and psychiatric disorders such as stroke and depression. Yet, the mechanisms underlying its ability to modulate brain excitability to improve clinical symptoms remain poorly understood33. To help improve this understanding, proton magnetic resonance spectroscopy (1H-MRS) can be used, as it allows the in vivo quantification of brain metabolites such as γ-aminobutyric acid (GABA) and glutamate in a region-specific manner41. In fact, a recent study demonstrated that 1H-MRS is indeed a powerful means to better understand the effects of tDCS on neurotransmitter concentration34. This article aims to describe the complete protocol for combining tDCS (NeuroConn MR compatible stimulator) with 1H-MRS at 3 T using a MEGA-PRESS sequence. We will describe the impact of a protocol that has shown great promise for the treatment of motor dysfunctions after stroke, which consists of bilateral stimulation of the primary motor cortices27,30,31. Methodological factors to consider and possible modifications to the protocol are also discussed.
Neuroscience, Issue 93, proton magnetic resonance spectroscopy, transcranial direct current stimulation, primary motor cortex, GABA, glutamate, stroke
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded, stained samples, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Setting Limits on Supersymmetry Using Simplified Models
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
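The recasting arithmetic behind this approach is compact: with published acceptance (A) and efficiency (ε) tables, the expected signal yield in a signal region is N = σ × L × A × ε, and a model point is excluded if N exceeds the experiment's 95% CL upper limit on signal events in that region. The sketch below shows only this core bookkeeping; the numbers in the example are illustrative, not results from any search:

```python
# Core recasting arithmetic for simplified-model limits.
# Example values (cross section, luminosity, A, eps, limit) are illustrative.

def expected_events(sigma_pb, lumi_ifb, acceptance, efficiency):
    """Expected signal events: cross section (pb -> fb) x luminosity x A x eps."""
    return sigma_pb * 1000.0 * lumi_ifb * acceptance * efficiency

def excluded(sigma_pb, lumi_ifb, acceptance, efficiency, n95_limit):
    """True if the predicted yield exceeds the 95% CL limit on signal events."""
    return expected_events(sigma_pb, lumi_ifb, acceptance, efficiency) > n95_limit

n_sig = expected_events(sigma_pb=0.05, lumi_ifb=4.7,
                        acceptance=0.3, efficiency=0.6)
is_excluded = excluded(0.05, 4.7, 0.3, 0.6, n95_limit=25.0)
```

Conservative versus aggressive limits then correspond to choosing, for each model point, the least or most constraining applicable simplified-model A × ε values across signal regions.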
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Semi-automated Optical Heartbeat Analysis of Small Hearts
Institutions: The Sanford Burnham Institute for Medical Research, The Sanford Burnham Institute for Medical Research, San Diego State University.
We have developed a method for analyzing high speed optical recordings from Drosophila, zebrafish, and embryonic mouse hearts (Fink et al., 2009). Our Semi-automatic Optical Heartbeat Analysis (SOHA) uses a novel movement detection algorithm that is able to detect cardiac movements associated with individual contractile and relaxation events. The program provides a host of physiologically relevant readouts including systolic and diastolic intervals and heart rate, as well as qualitative and quantitative measures of heartbeat arrhythmicity. The program also calculates heart diameter measurements during both diastole and systole, from which fractional shortening and fractional area changes are calculated. Output is provided as a digital file compatible with most spreadsheet programs. Measurements are made for every heartbeat in a record, increasing the statistical power of the output. We demonstrate each of the steps where user input is required and show the application of our methodology to the analysis of heart function in all three genetically tractable heart models.
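The fractional shortening and fractional area change readouts mentioned above reduce to simple ratios of the diastolic and systolic measurements. The toy calculation below illustrates the standard definitions; the example diameters and areas are arbitrary and in no particular unit:

```python
# Standard heart-function ratios from diastolic/systolic measurements.
# Example inputs are arbitrary illustrative values.

def fractional_shortening(d_diastole, d_systole):
    """FS = (end-diastolic diameter - end-systolic diameter) / end-diastolic diameter."""
    return (d_diastole - d_systole) / d_diastole

def fractional_area_change(a_diastole, a_systole):
    """FAC = (end-diastolic area - end-systolic area) / end-diastolic area."""
    return (a_diastole - a_systole) / a_diastole

fs = fractional_shortening(100.0, 60.0)
fac = fractional_area_change(100.0, 36.0)
```

Because SOHA measures every beat in a record, these ratios can be computed per beat and then summarized, rather than from a single representative cycle.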
Physiology, Issue 31, Drosophila, zebrafish, mouse, heart, myosin, dilated, restricted, cardiomyopathy, KCNQ, movement detection
Mapping the After-effects of Theta Burst Stimulation on the Human Auditory Cortex with Functional Imaging
Institutions: McGill University .
Auditory cortex pertains to the processing of sound, which is at the basis of speech- or music-related processing1. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial magnetic stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function2.
An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions3. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated4. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity.
Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS5-7. However, this online combination has many technical problems, including the static artifacts resulting from the presence of the TMS coil in the scanner room, or the effects of TMS pulses on the process of MR image formation. More importantly, the loud acoustic noise induced by TMS (increased compared with standard use because of the resonance of the scanner bore) and the increased TMS coil vibrations (caused by the strong mechanical forces due to the static magnetic field of the MR scanner) constitute a crucial problem when studying auditory processing.
This is one reason why fMRI was carried out before and after TMS in the present study. Similar approaches have been used to target the motor cortex8,9, premotor cortex10, primary somatosensory cortex11,12, and language-related areas13, but so far no combined TMS-fMRI study has investigated the auditory cortex. The purpose of this article is to provide details concerning the protocol and considerations necessary to successfully combine these two neuroscientific tools to investigate auditory processing.
Previously we showed that repetitive TMS (rTMS) at high and low frequencies (10 Hz and 1 Hz, respectively) applied over the auditory cortex modulated response time (RT) in a melody discrimination task2. We also showed that RT modulation was correlated with functional connectivity in the auditory network assessed using fMRI: the higher the functional connectivity between left and right auditory cortices during task performance, the higher the facilitatory effect (i.e., decreased RT) observed with rTMS. However, those findings were mainly correlational, as fMRI was performed before rTMS. Here, fMRI was carried out before and immediately after TMS to provide direct measures of the functional organization of the auditory cortex, and more specifically of the plastic reorganization of the auditory neural network occurring after the neural intervention provided by TMS.
Combined fMRI and TMS applied over the auditory cortex should enable a better understanding of brain mechanisms of auditory processing, providing physiological information about functional effects of TMS. This knowledge could be useful for many cognitive neuroscience applications, as well as for optimizing therapeutic applications of TMS, particularly in auditory-related disorders.
Neuroscience, Issue 67, Physiology, Physics, Theta burst stimulation, functional magnetic resonance imaging, MRI, auditory cortex, frameless stereotaxy, sound, transcranial magnetic stimulation
Annotation of Plant Gene Function via Combined Genomics, Metabolomics and Informatics
Given the ever-expanding number of model plant species for which complete genome sequences are available and the abundance of bio-resources such as knockout mutants, wild accessions and advanced breeding populations, there is a growing need for gene functional annotation. In this protocol, annotation of plant gene function using combined co-expression gene analysis, metabolomics and informatics is provided (Figure 1). This approach is based on the theory of using target genes of known function to allow the identification of non-annotated genes likely to be involved in a certain metabolic process, with the identification of target compounds via metabolomics. Strategies are put forward for applying this information to populations generated by both forward and reverse genetics approaches, although none of these are effortless. By corollary, this approach can also be used to characterize unknown peaks representing new or specific secondary metabolites in particular tissues, plant species or stress treatments, which currently remains an important challenge in understanding plant metabolism.
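The co-expression step described above can be illustrated with a minimal sketch (not taken from the protocol; the gene names and expression values below are hypothetical): candidate genes are ranked by the Pearson correlation of their expression profiles against a guide gene of known function.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression profiles across six conditions.
profiles = {
    "GuideGene":  [1.0, 2.1, 3.0, 4.2, 5.1, 6.0],
    "CandidateA": [1.1, 2.0, 3.2, 4.0, 5.0, 6.1],  # tracks the guide gene
    "CandidateB": [6.0, 1.0, 5.0, 2.0, 4.0, 3.0],  # unrelated pattern
}

guide = profiles["GuideGene"]
ranked = sorted(
    ((g, pearson(guide, p)) for g, p in profiles.items() if g != "GuideGene"),
    key=lambda t: -t[1],
)
for gene, r in ranked:
    print(f"{gene}\tr = {r:.3f}")
```

High-ranking candidates would then be cross-referenced against metabolite changes, as the protocol describes.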
Plant Biology, Issue 64, Genetics, Bioinformatics, Metabolomics, Plant metabolism, Transcriptome analysis, Functional annotation, Computational biology, Plant biology, Theoretical biology, Spectroscopy and structural analysis
Electron Cryotomography of Bacterial Cells
Institutions: California Institute of Technology - Caltech, California Institute of Technology - Caltech.
While much is already known about the basic metabolism of bacterial cells, many fundamental questions remain surprisingly unanswered, including how they generate and maintain specific cell shapes, establish polarity, segregate their genomes, and divide. In order to understand these phenomena, imaging technologies are needed that bridge the resolution gap between fluorescence light microscopy and higher-resolution methods such as X-ray crystallography and NMR spectroscopy.
Electron cryotomography (ECT) is an emerging technology that does just this, allowing the ultrastructure of cells to be visualized in a near-native state, in three dimensions (3D), with "macromolecular" resolution (~4 nm)1,2. In ECT, cells are imaged in a vitreous, "frozen-hydrated" state in a cryo transmission electron microscope (cryoTEM) at low temperature (< -180°C). Slender cells (up to ~500 nm in thickness3) are plunge-frozen intact within media across EM grids in cryogens such as ethane or ethane/propane mixtures. Thicker cells and biofilms can also be imaged in a vitreous state by first "high-pressure freezing" and then "cryo-sectioning" them. A series of two-dimensional projection images is then collected through the sample as it is incrementally tilted along one or two axes. A three-dimensional reconstruction, or "tomogram," can then be calculated from the images. While ECT requires expensive instrumentation, in recent years it has been used in a few labs to reveal the structures of various external appendages, the structures of different cell envelopes, the positions and structures of cytoskeletal filaments, and the locations and architectures of large macromolecular assemblies such as flagellar motors, internal compartments and chemoreceptor arrays1,2.
In this video article we illustrate how to image cells with ECT, including the processes of sample preparation, data collection, tomogram reconstruction, and interpretation of the results through segmentation and in some cases correlation with light microscopy.
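The tilt-series-to-tomogram step can be sketched in miniature. The toy below (not the software used in the protocol, which employs filtered, aligned reconstructions) projects a single-point "phantom" over a ±60° single-axis tilt series and recovers its position by simple unfiltered back-projection; grid size, tilt increment, and nearest-neighbour interpolation are all illustrative simplifications.

```python
import math

def project(image, angle):
    """1D projection of a square 2D image along direction `angle` (radians),
    using nearest-neighbour binning about the image centre."""
    n = len(image)
    c = (n - 1) / 2
    proj = [0.0] * n
    for i in range(n):
        for j in range(n):
            t = (j - c) * math.cos(angle) + (i - c) * math.sin(angle)
            k = int(round(t + c))
            if 0 <= k < n:
                proj[k] += image[i][j]
    return proj

def back_project(projections, angles, n):
    """Smear each projection back across the image and sum (unfiltered)."""
    recon = [[0.0] * n for _ in range(n)]
    c = (n - 1) / 2
    for proj, angle in zip(projections, angles):
        for i in range(n):
            for j in range(n):
                t = (j - c) * math.cos(angle) + (i - c) * math.sin(angle)
                k = int(round(t + c))
                if 0 <= k < n:
                    recon[i][j] += proj[k]
    return recon

# Phantom: a single bright "macromolecule" off-centre in a 15x15 field.
n = 15
phantom = [[0.0] * n for _ in range(n)]
phantom[4][9] = 1.0

# Incremental single-axis tilt series, +/-60 degrees in 10-degree steps
# (the untilted +/-90 region is the "missing wedge" of real ECT data).
angles = [math.radians(a) for a in range(-60, 61, 10)]
tilt_series = [project(phantom, a) for a in angles]
recon = back_project(tilt_series, angles, n)

peak = max(max(row) for row in recon)
print("peak value:", peak, "at phantom position:", recon[4][9] == peak)
```

Real reconstruction software additionally aligns the tilt images (often with fiducial gold markers) and applies weighting filters before back-projection.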
Cellular Biology, Issue 39, Electron cryotomography, microbiology, bacteria, electron microscopy
Modeling Biological Membranes with Circuit Boards and Measuring Electrical Signals in Axons: Student Laboratory Exercises
Institutions: University of Kentucky, University of Toronto.
This is a demonstration of how electrical models can be used to characterize biological membranes. This exercise also introduces biophysical terminology used in electrophysiology. The same equipment is used in the membrane model as on live preparations. Some properties of an isolated nerve cord are investigated: nerve action potentials, recruitment of neurons, and responsiveness of the nerve cord to environmental factors.
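The passive membrane model at the heart of such circuit-board exercises is an RC circuit, and its charging behaviour follows V(t) = I·R·(1 − e^(−t/τ)) with τ = R·C. A minimal numerical sketch (component values are illustrative, not those of the laboratory exercise):

```python
import math

# Passive membrane as an RC circuit (illustrative values):
R = 100e6    # membrane resistance in ohms (100 MOhm)
C = 100e-12  # membrane capacitance in farads (100 pF)
I = 0.1e-9   # injected current step in amps (0.1 nA)

tau = R * C    # membrane time constant (seconds)
V_inf = I * R  # steady-state voltage deflection (volts)

def v(t):
    """Voltage deflection t seconds after current-step onset."""
    return V_inf * (1.0 - math.exp(-t / tau))

print(f"tau = {tau*1e3:.0f} ms, steady-state deflection = {V_inf*1e3:.0f} mV")
# At t = tau the response has reached ~63% of its final value.
print(f"V(tau)/V_inf = {v(tau)/V_inf:.3f}")
```

Measuring τ from the 63% point of a recorded charging curve is exactly how the membrane time constant is extracted in such exercises.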
Basic Protocols, Issue 47, Invertebrate, Crayfish, Modeling, Student laboratory, Nerve cord
Genomic MRI - a Public Resource for Studying Sequence Patterns within Genomic DNA
Institutions: University of Toledo Health Science Campus.
Non-coding genomic regions in complex eukaryotes, including intergenic areas, introns, and untranslated segments of exons, are profoundly non-random in their nucleotide composition and consist of a complex mosaic of sequence patterns. These patterns include so-called Mid-Range Inhomogeneity (MRI) regions -- sequences 30-10000 nucleotides in length that are enriched in a particular base or combination of bases (e.g. (G+T)-rich, purine-rich, etc.). MRI regions are associated with unusual (non-B-form) DNA structures that are often involved in regulation of gene expression, recombination, and other genetic processes (Fedorova & Fedorov 2010). The existence of a strong fixation bias within MRI regions against mutations that tend to reduce their sequence inhomogeneity additionally supports the functionality and importance of these genomic sequences (Prakash et al.).
Here we demonstrate a freely available Internet resource -- the Genomic MRI program package -- designed for computational analysis of genomic sequences in order to find and characterize various MRI patterns within them (Bechtel et al. 2008). This package also allows generation of randomized sequences with various properties and levels of correspondence to the natural input DNA sequences. The main goal of this resource is to facilitate examination of the vast regions of non-coding DNA that are still scarcely investigated and await thorough exploration and recognition.
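The core idea of an MRI scan can be sketched with a toy sliding-window search (not the Genomic MRI implementation; window size, threshold, and the sequence below are illustrative) that flags windows enriched in a chosen combination of bases:

```python
def mri_regions(seq, bases="GT", window=30, threshold=0.8):
    """Return (start, end, fraction) for windows whose content of `bases`
    meets `threshold` -- a toy version of a Mid-Range Inhomogeneity scan."""
    hits = []
    for start in range(len(seq) - window + 1):
        win = seq[start:start + window]
        frac = sum(win.count(b) for b in bases) / window
        if frac >= threshold:
            hits.append((start, start + window, frac))
    return hits

# Hypothetical sequence: a (G+T)-rich stretch embedded in mixed-composition DNA.
seq = "ACGATCATCGACGATA" + "GTGTTGGTGTGTTGTGGTTGGTGTGTTGTG" + "CATCAGCATACGATCA"
for start, end, frac in mri_regions(seq):
    print(f"{start}-{end}: {frac:.2f} (G+T)")
```

The real package additionally assesses statistical significance against randomized sequences with matched composition, which this sketch omits.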
Genetics, Issue 51, bioinformatics, computational biology, genomics, non-randomness, signals, gene regulation, DNA conformation
Measuring Blood Pressure in Mice using Volume Pressure Recording, a Tail-cuff Method
Institutions: University of Kentucky.
The CODA 8-Channel High Throughput Non-Invasive Blood Pressure system measures blood pressure in up to 8 mice or rats simultaneously. The CODA tail-cuff system uses Volume Pressure Recording (VPR) to measure blood pressure by determining the tail blood volume. A specially designed differential pressure transducer and an occlusion tail-cuff measure the total blood volume in the tail without the need to obtain the individual pulse signal. Special attention is afforded to the length of the occlusion cuff in order to derive the most accurate blood pressure readings. VPR can easily obtain readings on dark-skinned rodents, such as C57BL/6 mice, and is MRI compatible. The CODA system provides measurements of six different blood pressure parameters: systolic and diastolic blood pressure, heart rate, mean blood pressure, tail blood flow, and tail blood volume. Measurements can be made on either awake or anesthetized mice or rats. The CODA system includes a controller, laptop computer, software, cuffs, animal holders, infrared warming pads, and an infrared thermometer. There are seven different holder sizes, accommodating mice as small as 8 grams and rats as large as 900 grams.
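For readers relating the reported parameters to one another: mean arterial pressure is conventionally estimated from systolic and diastolic pressure with the one-third-pulse-pressure rule. The sketch below uses that textbook approximation (the CODA system reports mean pressure directly; this is not necessarily the device's internal calculation, and the example values are illustrative):

```python
def mean_arterial_pressure(systolic, diastolic):
    """Textbook estimate: MAP = DBP + (SBP - DBP) / 3.
    Note: this is the conventional approximation, not necessarily
    the calculation performed by the CODA system itself."""
    return diastolic + (systolic - diastolic) / 3.0

# Illustrative conscious-mouse values (mmHg).
sbp, dbp = 120.0, 80.0
print(f"MAP ~ {mean_arterial_pressure(sbp, dbp):.1f} mmHg")
```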
Medicine, Issue 27, blood pressure, systolic, diastolic, tail-cuff, mouse, rat
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif-finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT); PRISM10, which finds degenerate motifs (ASCGWT); and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). Each of these algorithms has been optimized to find its corresponding type of motif. Together, they allow SCOPE to perform extremely well.
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor.
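The over-representation idea behind this kind of scoring can be illustrated with a toy example (this is not SCOPE's actual scoring function; the promoter sequences and the planted motif are hypothetical): count a candidate motif in the target set, compare its density to a background set, and report a log ratio.

```python
import math

def count_motif(seqs, motif):
    """Total (overlapping) occurrences of a non-degenerate motif."""
    total = 0
    for s in seqs:
        for i in range(len(s) - len(motif) + 1):
            if s[i:i + len(motif)] == motif:
                total += 1
    return total

def enrichment(target, background, motif):
    """Log2 ratio of per-base motif density, target vs background --
    a toy stand-in for over-representation scoring (with a pseudocount
    to avoid division by zero)."""
    t_len = sum(len(s) for s in target)
    b_len = sum(len(s) for s in background)
    t_rate = count_motif(target, motif) / t_len
    b_rate = (count_motif(background, motif) + 1) / b_len
    return math.log2(t_rate / b_rate) if t_rate > 0 else float("-inf")

# Hypothetical promoter sets: the motif ACCGGT is planted in the targets.
target = ["TTACCGGTAA", "GGACCGGTCC", "ACCGGTTTTT"]
background = ["TTTTTTTTTT", "CCCCCCCCCC", "GAGAGAGAGA"]
print(f"enrichment(ACCGGT) = {enrichment(target, background, 'ACCGGT'):.2f}")
```

SCOPE's actual algorithms additionally model motif position preference and assess statistical significance, which this sketch leaves out.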
Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a user-friendly interface that enables novice users to access the algorithm's full power without having to become experts in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also, optionally, by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Enabling High Grayscale Resolution Displays and Accurate Response Time Measurements on Conventional Computers
Institutions: The Ohio State University, University of Southern California, University of Southern California, University of Southern California, The Ohio State University.
Display systems based on conventional computer graphics cards are capable of generating images with 8-bit gray-level resolution. However, most experiments in vision research require displays with more than 12 bits of luminance resolution. Several solutions are available. Bits++1 and DataPixx2 use the Digital Visual Interface (DVI) output from graphics cards and high-resolution (14- or 16-bit) digital-to-analog converters to drive analog display devices. The VideoSwitcher3 described here combines analog video signals from the red and blue channels of graphics cards with different weights using a passive resistor network4 and an active circuit to deliver identical video signals to the three channels of color monitors. The method provides an inexpensive way to enable high-resolution monochromatic displays using conventional graphics cards and analog monitors. It can also provide trigger signals that can be used to mark stimulus onsets, making it easy to synchronize visual displays with physiological recordings or response time measurements.
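The weighted two-channel combination can be illustrated numerically. In the sketch below, the red channel supplies the coarse luminance steps and the blue channel supplies fine steps at 1/128 of the coarse step size; the weight of 128 is purely illustrative, as the actual ratio is set by the device's resistor network.

```python
import math

def combined_luminance(red, blue, weight=128.0):
    """Combine 8-bit red (coarse) and blue (fine) DAC outputs into a single
    luminance level, mimicking a weighted two-channel mix. The weight used
    here is illustrative, not the VideoSwitcher's actual hardware ratio."""
    return red + blue / weight

# Enumerate every (red, blue) pair to count the distinct output levels.
levels = {combined_luminance(r, b) for r in range(256) for b in range(256)}
print(f"distinct gray levels: {len(levels)} (~{math.log2(len(levels)):.1f} bits)")
```

Note that some (red, blue) pairs produce identical outputs (a full coarse step equals 128 fine steps), so the distinct-level count is less than 256 x 256, but still far beyond the 256 levels of a single 8-bit channel.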
Although computer keyboards and mice are frequently used in measuring response times (RT), the accuracy of these measurements is quite low. The RTbox is a specialized hardware and software solution for accurate RT measurements. Connected to the host computer through a USB connection, the driver of the RTbox is compatible with all conventional operating systems. It uses a microprocessor and high-resolution clock to record the identities and timing of button events, which are buffered until the host computer retrieves them. The recorded button events are not affected by potential timing uncertainties or biases associated with data transmission and processing in the host computer. The asynchronous storage greatly simplifies the design of user programs. Several methods are available to synchronize the clocks of the RTbox and the host computer. The RTbox can also receive external triggers and be used to measure RT with respect to external events.
Both VideoSwitcher and RTbox are available for users to purchase. The relevant information and many demonstration programs can be found at https://lobes.usc.edu/.
Neuroscience, Issue 60, VideoSwitcher, Visual stimulus, Luminance resolution, Contrast, Trigger, RTbox, Response time
Analyzing and Building Nucleic Acid Structures with 3DNA
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamic (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server, located at https://w3dna.rutgers.edu, provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
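As a small worked example of how global helix geometry follows from the kind of rigid-body step parameters 3DNA reports and rebuilds from, the sketch below uses textbook ideal B-DNA values (these are illustrative inputs, not output from 3DNA itself):

```python
# Ideal B-DNA base-pair step parameters (textbook values, not 3DNA output):
rise = 3.4    # angstroms per base-pair step
twist = 36.0  # degrees per base-pair step

# Global helix geometry implied by uniform rise and twist:
bp_per_turn = 360.0 / twist   # base pairs per helical turn
pitch = bp_per_turn * rise    # helix pitch in angstroms

def basepair_frames(n):
    """(z position in angstroms, cumulative twist in degrees) for n base
    pairs of an ideal straight duplex built from uniform rise/twist --
    the simplest case of rebuilding a structure from step parameters."""
    return [(k * rise, (k * twist) % 360.0) for k in range(n)]

print(f"{bp_per_turn:.0f} bp/turn, pitch = {pitch:.1f} A")
print(basepair_frames(3))
```

3DNA's reconstruction (Protocol 3) generalizes this to all six rigid-body parameters per step (shift, slide, rise, tilt, roll, twist), recovering full atomic models rather than idealized frames.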
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software
Designing and Implementing Nervous System Simulations on LEGO Robots
Institutions: Northeastern University, Bremen University of Applied Sciences.
We present a method to use the commercially available LEGO Mindstorms NXT robotics platform to test systems-level neuroscience hypotheses. The first step of the method is to develop a nervous system simulation of specific reflexive behaviors of an appropriate model organism; here we use the American lobster. Exteroceptive reflexes mediated by decussating (crossing) neural connections can explain an animal's taxis towards or away from a stimulus, as described by Braitenberg, and are particularly well suited for investigation using the NXT platform1.
The nervous system simulation is programmed using LabVIEW software on the LEGO Mindstorms platform. Once the nervous system is tuned properly, behavioral experiments are run on the robot and on the animal under identical environmental conditions. By controlling the sensory milieu experienced by the specimens, differences in behavioral outputs can be observed. These differences may point to specific deficiencies in the nervous system model and serve to inform the iteration of the model for the particular behavior under study. This method allows for the experimental manipulation of electronic nervous systems and serves as a way to explore neuroscience hypotheses specifically regarding the neurophysiological basis of simple innate reflexive behaviors. The LEGO Mindstorms NXT kit provides an affordable and efficient platform on which to test preliminary biomimetic robot control schemes. The approach is also well suited for the high school classroom to serve as the foundation for a hands-on inquiry-based biorobotics curriculum.
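The decussating-reflex logic underlying such taxis behaviors can be sketched in a few lines (a Braitenberg-style abstraction, not the LabVIEW nervous system simulation used in the protocol; sensor values and gain are illustrative):

```python
def motor_speeds(left_sensor, right_sensor, crossed=True, gain=1.0):
    """Map two sensor intensities to motor drives. With decussating
    (crossed) excitatory connections each sensor drives the contralateral
    motor, so the robot turns toward the stronger stimulus (approach);
    uncrossed connections produce turning away (avoidance)."""
    if crossed:
        left_motor, right_motor = gain * right_sensor, gain * left_sensor
    else:
        left_motor, right_motor = gain * left_sensor, gain * right_sensor
    return left_motor, right_motor

# Stimulus stronger on the robot's left side.
left_s, right_s = 0.9, 0.2
lm, rm = motor_speeds(left_s, right_s, crossed=True)
print("crossed:  ", "turns left (toward stimulus)" if rm > lm else "turns right (away)")
lm, rm = motor_speeds(left_s, right_s, crossed=False)
print("uncrossed:", "turns right (away)" if lm > rm else "turns left (toward stimulus)")
```

Running the same wiring on robot and animal under identical stimuli is what lets behavioral differences point back to deficiencies in the model, as described above.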
Neuroscience, Issue 75, Neurobiology, Bioengineering, Behavior, Mechanical Engineering, Computer Science, Marine Biology, Biomimetics, Marine Science, Neurosciences, Synthetic Biology, Robotics, robots, Modeling, models, Sensory Fusion, nervous system, Educational Tools, programming, software, lobster, Homarus americanus, animal model
Molecular Evolution of the Tre Recombinase
Institutions: Max Planck Institute for Molecular Cell Biology and Genetics, Dresden.
Here we report the generation of Tre recombinase through directed, molecular evolution. Tre recombinase recognizes a pre-defined target sequence within the LTR sequences of the HIV-1 provirus, resulting in the excision and eradication of the provirus from infected human cells.
We started with Cre, a 38-kDa recombinase that recognizes a 34-bp double-stranded DNA sequence known as loxP. Because Cre can effectively eliminate genomic sequences, we set out to tailor a recombinase that could remove the sequence between the 5'-LTR and 3'-LTR of an integrated HIV-1 provirus. As a first step we identified sequences within the LTR sites that were similar to loxP and tested them for recombination activity. Initially, Cre and mutagenized Cre libraries failed to recombine the chosen loxLTR sites of the HIV-1 provirus. Because the start of any directed molecular evolution process requires at least residual activity, the original asymmetric loxLTR sequences were split into subsets and tested again for recombination activity. These subsets, acting as evolutionary intermediates, did show recombination activity. Next, recombinase libraries were enriched through reiterative evolution cycles. Subsequently, enriched libraries were shuffled and recombined. The combination of different mutations proved synergistic, and recombinases were created that were able to recombine loxLTR1 and loxLTR2. This was evidence that an evolutionary strategy through intermediates can be successful. After a total of 126 evolution cycles, individual recombinases were functionally and structurally analyzed. The most active recombinase -- Tre -- had 19 amino acid changes as compared to Cre. Tre recombinase was able to excise the HIV-1 provirus from the genome of HIV-1-infected HeLa cells (see "HIV-1 Proviral DNA Excision Using an Evolved Recombinase", Hauber J., Heinrich-Pette-Institute for Experimental Virology and Immunology, Hamburg, Germany). While still in its infancy, directed molecular evolution will allow the creation of custom enzymes that will serve as tools of "molecular surgery" and molecular medicine.
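The first step described above — finding loxP-like sequences within the LTRs — amounts to a mismatch-tolerant sequence scan. The sketch below illustrates the idea against a hypothetical sequence with a planted degenerate site (this is not the actual loxLTR sequence from the study; the mismatch threshold is also illustrative):

```python
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # canonical 34-bp loxP site

def hamming(a, b):
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def loxp_like_sites(seq, max_mismatches=10):
    """Scan `seq` for 34-bp windows within `max_mismatches` of loxP --
    the kind of search used to pick candidate target sites."""
    L = len(LOXP)
    return [(i, hamming(seq[i:i + L], LOXP))
            for i in range(len(seq) - L + 1)
            if hamming(seq[i:i + L], LOXP) <= max_mismatches]

# Hypothetical sequence with a loxP-like site (4 spacer changes) planted
# at position 10 -- not the loxLTR sequences used in the actual study.
site = LOXP[:15] + "CATC" + LOXP[19:]
seq = "GGGGGGGGGG" + site + "CCCCCCCCCC"
for pos, mm in loxp_like_sites(seq, max_mismatches=6):
    print(f"candidate site at {pos} with {mm} mismatches")
```

In the actual work, such candidate sites then had to be validated experimentally for residual recombination activity before evolution could begin.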
Cell Biology, Issue 15, HIV-1, Tre recombinase, Site-specific recombination, molecular evolution