Researchers across incredibly diverse foci are applying phylogenetics to their research questions. However, many of these researchers are new to the topic, and this presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. Our vision is that this article could serve both as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that could be incorporated into a classroom or teaching lab.
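As a concrete illustration of the model-of-evolution step in the pipeline above, the sketch below computes a pairwise distance under the Jukes-Cantor model, one of the simplest substitution models that model-selection software evaluates. The sequences are invented for illustration; a real pipeline would read an alignment produced by an MSA program.

```python
import math

def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences (gaps skipped)."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(1 for x, y in pairs if x != y) / len(pairs)

def jukes_cantor(a, b):
    """Jukes-Cantor corrected evolutionary distance between aligned sequences."""
    p = p_distance(a, b)
    return -0.75 * math.log(1 - 4.0 * p / 3.0)

# Toy aligned sequences (hypothetical): 1 difference in 10 sites
d = jukes_cantor("ACGTACGTAC", "ACGTACGTTC")  # ≈ 0.107, slightly above p = 0.1
```

The correction inflates the raw mismatch proportion to account for unobserved multiple substitutions at the same site, which is why richer models (GTR, HKY, etc.) are compared by model-selection tools before tree building.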
19 Related JoVE Articles
The ITS2 Database
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was confined to low-level phylogenetics only. However, combining the ITS2 sequence with its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8.
The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11. Following annotation by profile hidden Markov models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold12 (direct fold) results in a correct four-helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by a direct fold.
The ITS2 Database is not only a repository for the storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structure prediction, motif detection, and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together, these form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure.
In a nutshell, this workbench reduces a first phylogenetic analysis to only a few mouse clicks, while additionally providing tools and data for comprehensive large-scale analyses.
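The Neighbor Joining step at the end of this pipeline can be sketched in a few lines. The implementation below follows the standard Saitou-Nei algorithm on a precomputed distance matrix; it returns an unrooted topology as a Newick-like string and omits branch-length bookkeeping for brevity.

```python
def neighbor_joining(names, D):
    """Neighbor Joining on a symmetric distance matrix.
    names: taxon labels; D: dict-of-dicts of pairwise distances.
    Returns the topology as a Newick-like nested string."""
    nodes = list(names)
    d = {a: dict(D[a]) for a in nodes}
    while len(nodes) > 2:
        n = len(nodes)
        # Net divergence of each node
        r = {a: sum(d[a][b] for b in nodes if b != a) for a in nodes}
        # Pick the pair minimizing the Q criterion
        qmin, pair = None, None
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                q = (n - 2) * d[a][b] - r[a] - r[b]
                if qmin is None or q < qmin:
                    qmin, pair = q, (a, b)
        a, b = pair
        u = f"({a},{b})"                     # join the pair into a new node
        nodes = [x for x in nodes if x not in (a, b)]
        d[u] = {}
        for k in nodes:                      # distances to the new node
            d[u][k] = 0.5 * (d[a][k] + d[b][k] - d[a][b])
            d[k][u] = d[u][k]
        nodes.append(u)
    a, b = nodes
    return f"({a},{b})"

# Demo: an additive matrix whose true tree is ((A,B),(C,D))
D = {
    "A": {"A": 0, "B": 2, "C": 3, "D": 3},
    "B": {"A": 2, "B": 0, "C": 3, "D": 3},
    "C": {"A": 3, "B": 3, "C": 0, "D": 2},
    "D": {"A": 3, "B": 3, "C": 2, "D": 0},
}
tree = neighbor_joining(["A", "B", "C", "D"], D)  # "((A,B),(C,D))"
```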
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which create entry points for infection by one of the Neonectria species of fungi. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex1. Here we present a protocol, developed from the artificial infestation technique of Houston2, which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of BBD management through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
A Hybrid DNA Extraction Method for the Qualitative and Quantitative Assessment of Bacterial Communities from Poultry Production Samples
Institutions: USDA-Agricultural Research Service, USDA-Agricultural Research Service, Oregon State University, University of Georgia, Northern Arizona University.
The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences, and that many environmental samples within these disciplines can be physicochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method combines the two major types of DNA extraction: mechanical and enzymatic. A two-step, intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the “gold standard” enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strictly mechanical and strictly enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples.
Molecular Biology, Issue 94, DNA extraction, poultry, environmental, feces, litter, semi-automated, microbiomics, qPCR
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
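The sequence selection stage described above is, at heart, an energy minimization over sequence space. The sketch below illustrates that idea with simulated annealing over a toy energy function; the energy function is an invented stand-in for illustration, not the potential used by Protein WISDOM.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AILMFVWY")

def toy_energy(seq):
    """Invented stand-in for a physical potential: rewards hydrophobic
    residues at even positions. Lower is better."""
    return -sum(1 for i, aa in enumerate(seq) if i % 2 == 0 and aa in HYDROPHOBIC)

def anneal(seq, energy, steps=5000, t0=1.0, seed=0):
    """Minimize energy over sequence space by point mutation + annealing."""
    rng = random.Random(seed)
    seq = list(seq)
    e = energy(seq)
    best, e_best = list(seq), e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-6  # linear cooling schedule
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)    # propose a point mutation
        e_new = energy(seq)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < e_best:
                best, e_best = list(seq), e
        else:
            seq[i] = old                    # reject the move
    return "".join(best), e_best
```

Stochastic search of this kind scans far more of sequence space than directed evolution can, which is the point made above; production tools replace the toy energy with physics-based potentials and add the fold-specificity and binding-affinity filters.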
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Institutions: University of California Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries, one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of “bulk” silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
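The carving idea behind both procedures can be sketched as follows: sample the parametric helix x = r cos t, y = r sin t, z = (pitch/2π)·t, then keep only the atoms of a bulk model that lie within a chosen radius of that axis. This is a minimal Python sketch of the screening logic, not the published AWK or C++ code.

```python
import math

def helix_points(radius, pitch, turns, step=0.1):
    """Sample the parametric helix x = r*cos(t), y = r*sin(t), z = pitch*t/(2*pi)."""
    pts, t = [], 0.0
    while t <= 2 * math.pi * turns:
        pts.append((radius * math.cos(t), radius * math.sin(t),
                    pitch * t / (2 * math.pi)))
        t += step
    return pts

def carve(atoms, axis_pts, wire_radius):
    """Keep only the atoms lying within wire_radius of the helical axis."""
    return [a for a in atoms
            if any(math.dist(a, p) <= wire_radius for p in axis_pts)]

# A bulk 'atom' sitting on the helix survives; one far away is carved out
axis = helix_points(radius=5.0, pitch=2.0, turns=1.0)
kept = carve([(5.0, 0.0, 0.0), (0.0, 0.0, 0.0)], axis, wire_radius=0.5)
```

Material independence falls out naturally: the screening only looks at coordinates, so any bulk atomistic model can be fed through the same filter.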
Physics, Issue 93, helical atomistic models, open-source coding, graphical user interface, visualization software, molecular dynamics simulations, graphical processing unit accelerated simulations
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve our understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules, which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users on the basis of experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance, to track the transmission of viral strains already resistant to ART, is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource-limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open-access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open-source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against the ViroSeq genotyping method, a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA). Limitations of the method described here include the fact that it is not automated and that it failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Automated Interactive Video Playback for Studies of Animal Communication
Institutions: Texas A&M University (TAMU).
Video playback is a widely used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.
Neuroscience, Issue 48, Computer animation, visual communication, mate choice, Xiphophorus birchmanni, tracking
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposure to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and collaboration from the participants greatly affects the quality of the data4.
Modern technologies such as GPS and mobile communications have made the automatic collection of trajectory data possible. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracy of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical and visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
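Two of the analyses above, segmentation and activity space characterization, reduce to short computations on a list of time-stamped fixes. The sketch below splits a track wherever the inter-fix speed exceeds a threshold (a simplification of the indoor/outdoor segmentation, which uses richer criteria) and measures activity radius as the maximum distance from the track centroid; both definitions are illustrative assumptions.

```python
import math

def segment_by_speed(track, threshold):
    """Split (t, x, y) fixes into segments wherever the speed between
    consecutive fixes exceeds threshold (a crude move/stop split)."""
    segments, current = [], [track[0]]
    for prev, cur in zip(track, track[1:]):
        speed = math.hypot(cur[1] - prev[1], cur[2] - prev[2]) / (cur[0] - prev[0])
        if speed > threshold:
            segments.append(current)
            current = []
        current.append(cur)
    segments.append(current)
    return segments

def activity_radius(track):
    """Maximum distance of any fix from the track's centroid."""
    cx = sum(p[1] for p in track) / len(track)
    cy = sum(p[2] for p in track) / len(track)
    return max(math.hypot(p[1] - cx, p[2] - cy) for p in track)
```

A GIS workflow would run these per person per day, then feed the segments into density surface or volume estimation to locate hot spots of potential contact.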
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Technique for Studying Arthropod and Microbial Communities within Tree Tissues
Institutions: Northern Arizona University, Acoustic Ecology Institute.
Phloem tissues of pine are habitats for many thousands of organisms. Arthropods and microbes use phloem and cambium tissues to seek mates, lay eggs, rear young, feed, or hide from natural enemies or harsh environmental conditions outside of the tree. Organisms that persist within the phloem habitat are difficult to observe given their location under bark. We provide a technique to preserve intact phloem and prepare it for experimentation with invertebrates and microorganisms. The apparatus is called a ‘phloem sandwich’ and allows for the introduction and observation of arthropods, microbes, and other organisms. This technique has resulted in a better understanding of the feeding behaviors, life-history traits, reproduction, development, and interactions of organisms within tree phloem. The strengths of this technique include the use of inexpensive materials, variability in sandwich size, flexibility to re-open the sandwich or introduce multiple organisms through drilled holes, and the preservation and maintenance of phloem integrity. The phloem sandwich is an excellent educational tool for scientific discovery in both K-12 science courses and university research laboratories.
Environmental Sciences, Issue 93, phloem sandwich, pine, bark beetles, mites, acoustics, phloem
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
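The semi-automated and automated approaches typically begin with intensity thresholding followed by connected-component analysis. The sketch below shows that core step on a toy 2D image; real EM segmentation operates on 3D volumes and adds substantial pre- and post-processing.

```python
def threshold(img, level):
    """Binary segmentation: 1 where intensity >= level, else 0."""
    return [[1 if v >= level else 0 for v in row] for row in img]

def label_components(mask):
    """4-connected component labelling of a binary mask by flood fill.
    Returns (label image, number of components)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1              # seed a new component
                stack = [(y, x)]
                while stack:                 # flood fill from the seed
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] \
                            and not labels[cy][cx]:
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, next_label
```

For mildly stained FIB-SEM data a single global threshold rarely suffices, which is why the triage scheme above routes low-contrast volumes toward manual or semi-automated tracing instead.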
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, or optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics15,16. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects9,13. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
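The VP idea, categories whose variability arises from inherited variation rather than experimenter-imposed rules, can be caricatured in a few lines. Below, each 'embryo' is reduced to a flat parameter vector and mutation is Gaussian jitter; the actual method evolves embryogenesis programs and applies selection, so this is only a structural sketch of branching descent with modification.

```python
import random

def mutate(params, rng, sigma=0.1):
    """Gaussian jitter of each shape parameter (stand-in for mutating
    an embryogenesis program)."""
    return [p + rng.gauss(0, sigma) for p in params]

def virtual_phylogenesis(ancestor, generations, n_children, seed=0):
    """Branching descent with modification: every individual in each
    generation leaves n_children mutated copies. Selection is omitted."""
    rng = random.Random(seed)
    tier = [list(ancestor)]
    for _ in range(generations):
        tier = [mutate(p, rng) for p in tier for _ in range(n_children)]
    return tier

# 2 children per embryo for 3 generations -> a 'category' of 8 related shapes
category = virtual_phylogenesis([1.0, 2.0, 3.0], generations=3, n_children=2)
```

Because all leaves descend from one ancestor, within-category similarity emerges from shared history rather than from an externally specified template, which is the property the text emphasizes.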
We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super-resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample, with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. With the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. The data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we describe here the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need to optimize the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We describe here the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
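The localization step at the heart of FPALM can be illustrated simply: each molecule's position is estimated from its detected photons, with precision scaling roughly as the PSF width divided by the square root of the photon count. The numbers below (125 nm PSF sigma, 1,000 photons) are illustrative assumptions, not values from the protocol.

```python
import math
import random

def localize(photons):
    """Estimate a molecule's position as the centroid of its detected photons."""
    n = len(photons)
    return (sum(x for x, _ in photons) / n, sum(y for _, y in photons) / n)

def precision(psf_sigma, n_photons):
    """Leading-order localization precision: PSF width / sqrt(photon count)."""
    return psf_sigma / math.sqrt(n_photons)

# Simulate one molecule at (100, 200) nm imaged through a 125 nm-sigma PSF
rng = random.Random(1)
photons = [(rng.gauss(100, 125), rng.gauss(200, 125)) for _ in range(1000)]
x, y = localize(photons)
# The centroid lands close to (100, 200) even though each photon is
# diffraction-blurred; precision(125, 1000) is ~4 nm in this toy case
```

This is why localization microscopy beats the diffraction limit: the limit constrains the blur of each photon, not how well the center of many photons can be estimated.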
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
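The kind of time-stamped event-record analysis described above can be illustrated with a minimal sketch. The event codes, timestamps, and the `summarize` helper below are all hypothetical, not part of the system's actual MATLAB-based language; the sketch only shows the general idea of tallying events and computing per-event-type intervals from a raw record.

```python
# Minimal sketch of analyzing a time-stamped behavioral event record.
# Event codes and timestamps here are illustrative, not from the real system.
from collections import Counter

# Each entry: (timestamp in seconds, event code)
events = [
    (10.0, "head_entry_hopper1"),
    (12.5, "pellet_delivered"),
    (13.1, "head_entry_hopper2"),
    (90.0, "head_entry_hopper1"),
]

def summarize(events):
    """Count each event type and compute inter-event intervals per type."""
    counts = Counter(code for _, code in events)
    intervals = {}   # event code -> list of gaps between successive occurrences
    last_seen = {}
    for t, code in events:
        if code in last_seen:
            intervals.setdefault(code, []).append(t - last_seen[code])
        last_seen[code] = t
    return counts, intervals

counts, intervals = summarize(events)
print(counts["head_entry_hopper1"])    # 2
print(intervals["head_entry_hopper1"])  # [80.0]
```

Keeping the raw record, the summary counts, and the intervals together in one structure mirrors the "full data trail" idea: every derived statistic can be traced back to the raw events that produced it.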
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Venturi Effect Can Help Cure Our Trees
Institutions: University of Padova.
In woody plants, xylem sap moves upwards through the vessels due to a decreasing gradient of water potential from the groundwater to the foliage. Exploiting this gradient and its dynamics, small amounts of sap-compatible liquids (e.g., pesticides) can be injected into the xylem system, reaching their target from inside. This endotherapic method, called "trunk injection" or "trunk infusion" (depending on whether the user supplies an external pressure or not), confines the applied chemicals within the target tree, making it particularly useful in urban situations. The main factors limiting wider use of the traditional drilling methods are the negative side effects of the holes that must be drilled around the trunk circumference in order to gain access to the xylem vessels beneath the bark.
The University of Padova (Italy) recently developed a manual, drill-free instrument with a small, perforated blade that enters the trunk by separating the woody fibers with minimal friction. Furthermore, the lenticular shape of the blade reduces the vessels' cross section, increasing sap velocity and allowing the natural uptake of an external liquid up to the leaves when the transpiration rate is substantial. The ports partially close soon after removal of the blade due to the natural elasticity and turgidity of the plant tissues, and cambial activity completes the healing process in a few weeks.
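The physical reasoning behind the blade (reducing the vessel cross section increases sap velocity) is just the continuity equation, and the associated pressure drop follows from the Bernoulli principle named in the keywords. The numbers below are purely illustrative, not measured xylem values:

```python
# Sketch of the continuity/Bernoulli (Venturi) reasoning behind the blade.
# All numeric values are illustrative, not measured sap-flow parameters.
def venturi(v1, a1, a2, rho=1000.0):
    """Given upstream velocity v1 (m/s), upstream and constricted
    cross sections a1, a2 (m^2), and fluid density rho (kg/m^3),
    return (v2, dp): velocity in the constriction from continuity
    (a1*v1 = a2*v2) and the Bernoulli pressure drop
    dp = rho/2 * (v2**2 - v1**2)."""
    v2 = v1 * a1 / a2
    dp = 0.5 * rho * (v2 ** 2 - v1 ** 2)
    return v2, dp

# Halving the effective cross section doubles the sap velocity,
# and the pressure in the constriction drops accordingly:
v2, dp = venturi(v1=0.001, a1=2.0e-9, a2=1.0e-9)
print(v2)  # 0.002
```

The lowered pressure in the constricted region is what allows the external liquid to be drawn in naturally, without the user supplying pressure.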
Environmental Sciences, Issue 80, Trunk injection, systemic injection, xylematic injection, endotherapy, sap flow, Bernoulli principle, plant diseases, pesticides, desiccants
The Analysis of Purkinje Cell Dendritic Morphology in Organotypic Slice Cultures
Institutions: University of Basel.
Purkinje cells are an attractive model system for studying dendritic development because they have an impressive dendritic tree which is strictly oriented in the sagittal plane and develops mostly in the postnatal period in small rodents3. Furthermore, several antibodies are available which selectively and intensely label Purkinje cells including all processes, with anti-Calbindin D28K being the most widely used. For viewing of dendrites in living cells, mice expressing EGFP selectively in Purkinje cells11 are available through Jackson Labs. Organotypic cerebellar slice cultures allow easy experimental manipulation of Purkinje cell dendritic development because most of the expansion of the Purkinje cell dendritic tree actually takes place during the culture period4. We present here a short, reliable and easy protocol for viewing and analyzing the dendritic morphology of Purkinje cells grown in organotypic cerebellar slice cultures. For many purposes, a quantitative evaluation of the Purkinje cell dendritic tree is desirable. We focus here on two parameters, dendritic tree size and branch point number, which can be rapidly and easily determined from anti-calbindin-stained cerebellar slice cultures. These two parameters yield a reliable and sensitive measure of changes in the Purkinje cell dendritic tree. Using the example of treatment with the protein kinase C (PKC) activator PMA and stimulation of the metabotropic glutamate receptor 1 (mGluR1), we demonstrate how differences in dendritic development are visualized and quantitatively assessed. The combination of an extensive dendritic tree, selective and intense immunostaining methods, organotypic slice cultures that cover the period of dendritic growth, and a mouse model with Purkinje cell-specific EGFP expression makes Purkinje cells a powerful model system for revealing the mechanisms of dendritic development.
Neuroscience, Issue 61, dendritic development, dendritic branching, cerebellum, Purkinje cells
Efficient Agroinfiltration of Plants for High-level Transient Expression of Recombinant Proteins
Institutions: Arizona State University .
Mammalian cell culture is the major platform for commercial production of human vaccines and therapeutic proteins. However, it cannot meet the increasing worldwide demand for pharmaceuticals due to its limited scalability and high cost. Plants have been shown to be one of the most promising alternative pharmaceutical production platforms: robust, scalable, low-cost and safe. The recent development of virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins in plants. To further optimize the utility of the transient expression system, in this study we demonstrate a simple, efficient and scalable methodology for introducing target-gene-containing Agrobacterium
into plant tissue. Our results indicate that agroinfiltration with both syringe and vacuum methods results in the efficient introduction of Agrobacterium
into leaves and robust production of two fluorescent proteins: GFP and DsRed. Furthermore, we demonstrate the unique advantages offered by each method. Syringe infiltration is simple and does not need expensive equipment. It also allows the flexibility either to infiltrate the entire leaf with one target gene or to introduce genes for multiple targets into one leaf. Thus, it can be used for laboratory-scale expression of recombinant proteins as well as for comparing different proteins or vectors for yield or expression kinetics. The simplicity of syringe infiltration also suggests its utility in high school and college education on the subject of biotechnology. In contrast, vacuum infiltration is more robust and can be scaled up for commercial manufacture of pharmaceutical proteins. It also offers the advantage of being able to agroinfiltrate plant species that are not amenable to syringe infiltration, such as lettuce and Arabidopsis
. Overall, the combination of syringe and vacuum agroinfiltration provides researchers and educators a simple, efficient, and robust methodology for transient protein expression. It will greatly facilitate the development of pharmaceutical proteins and promote science education.
Plant Biology, Issue 77, Genetics, Molecular Biology, Cellular Biology, Virology, Microbiology, Bioengineering, Plant Viruses, Antibodies, Monoclonal, Green Fluorescent Proteins, Plant Proteins, Recombinant Proteins, Vaccines, Synthetic, Virus-Like Particle, Gene Transfer Techniques, Gene Expression, Agroinfiltration, plant infiltration, plant-made pharmaceuticals, syringe agroinfiltration, vacuum agroinfiltration, monoclonal antibody, Agrobacterium tumefaciens, Nicotiana benthamiana, GFP, DsRed, geminiviral vectors, imaging, plant model
Analysis of Gene Expression in Emerald Ash Borer (Agrilus planipennis) Using Quantitative Real Time-PCR
Institutions: The Ohio State University.
Emerald ash borer (EAB, Agrilus planipennis
) is an exotic invasive pest which has killed millions of ash trees (Fraxinus
spp.) in North America.
EAB continues to spread rapidly and attacks ash trees of different ages, from saplings to mature trees. However, to date very little molecular knowledge exists for EAB. We are interested in deciphering the molecular-based physiological processes at the tissue level that aid EAB in successfully colonizing ash trees. In this report we show the effective use of quantitative real-time PCR (qRT-PCR) to ascertain mRNA levels in different larval tissues (including midgut, fat bodies and cuticle) and different developmental stages (including 1st-instars, prepupae and adults) of EAB. As an example, a peritrophin gene (herein named AP-PERI1
) serves as the gene of interest and a ribosomal protein gene (AP-RP1
) as the internal control. Peritrophins are important components of the peritrophic membrane/matrix (PM), the lining of the insect gut. The PM has diverse functions, including aiding digestion and providing mechanical protection to the midgut epithelium.
Cellular Biology, Issue 39, quantitative real time-PCR, peritrophin, emerald ash borer, gene expression
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of such datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side by side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism