Researchers across widely diverse fields are applying phylogenetics to their research questions. Many of them, however, are new to the topic, which presents inherent pitfalls. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools, via both online interfaces and local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies, and also as an educational resource that could be incorporated into a classroom or teaching lab.
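To make the tree-reconstruction idea concrete, here is a minimal, self-contained sketch, not taken from the article: it computes pairwise p-distances from a toy aligned dataset (species names and sequences are invented) and clusters them with a bare-bones UPGMA into a Newick string. Real studies would use the maximum likelihood and Bayesian software discussed in the pipeline; this only illustrates the distance logic underlying tree building.

```python
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(names, seqs):
    """Tiny UPGMA: repeatedly merge the closest pair of clusters."""
    clusters = {n: [n] for n in names}   # leaves contained in each cluster
    labels = {n: n for n in names}       # Newick label of each cluster
    dist = {(a, b): p_distance(seqs[a], seqs[b])
            for i, a in enumerate(names) for b in names[i + 1:]}
    def d(a, b):
        return dist[(a, b)] if (a, b) in dist else dist[(b, a)]
    live = list(names)
    while len(live) > 1:
        a, b = min(((x, y) for i, x in enumerate(live) for y in live[i + 1:]),
                   key=lambda p: d(*p))
        new = a + "+" + b
        for c in live:                   # average-linkage distance update
            if c not in (a, b):
                na, nb = len(clusters[a]), len(clusters[b])
                dist[(new, c)] = (na * d(a, c) + nb * d(b, c)) / (na + nb)
        clusters[new] = clusters[a] + clusters[b]
        labels[new] = "(%s,%s)" % (labels[a], labels[b])
        live = [c for c in live if c not in (a, b)] + [new]
    return labels[live[0]] + ";"

names = ["sp1", "sp2", "sp3"]
seqs = {"sp1": "ACGTACGT", "sp2": "ACGTACGA", "sp3": "TCGAACGA"}
tree = upgma(names, seqs)  # groups sp1 with sp2, the most similar pair
```

The same distance matrix could instead be fed to a Neighbor Joining routine; UPGMA is used here only because it fits in a few lines.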
The ITS2 Database
Institutions: University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. Because ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was long confined to low-level phylogenetics. However, combining the ITS2 sequence with its highly conserved secondary structure improves the phylogenetic resolution [1] and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation [2-8].
The ITS2 Database [9] presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank [11]. Following annotation by profile hidden Markov models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold [12] (direct fold) results in a correct, four-helix conformation. If not, the structure is predicted by homology modeling [13], in which an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by a direct fold.
The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection, and BLAST [14] search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE [15,16] for multiple sequence-structure alignment calculation and Neighbor Joining [18] tree reconstruction. Together these form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure.
In a nutshell, this workbench simplifies a first phylogenetic analysis to a few mouse clicks, while additionally providing tools and data for comprehensive large-scale analyses.
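The homology-modeling step can be illustrated with a small sketch. This is not the database's actual code, and the sequences and structures below are invented: a template's dot-bracket secondary structure is carried over to a target ITS2 sequence through their alignment, and any base pair whose partner falls on a gap is opened so the pairing stays balanced.

```python
def transfer_structure(template_aln, target_aln, template_struct):
    """template_aln/target_aln: equal-length aligned sequences ('-' = gap);
    template_struct: dot-bracket string for the ungapped template."""
    it = iter(template_struct)
    # Spread the template's structure characters over alignment columns
    cols = [next(it) if ch != '-' else None for ch in template_aln]
    # Read the structure off at the target's non-gap columns
    struct = [c if c is not None else '.'
              for c, t in zip(cols, target_aln) if t != '-']
    # Open any bracket whose partner was lost to a gap
    stack, drop = [], set()
    for i, c in enumerate(struct):
        if c == '(':
            stack.append(i)
        elif c == ')':
            if stack:
                stack.pop()
            else:
                drop.add(i)
    drop.update(stack)
    return ''.join('.' if i in drop else c for i, c in enumerate(struct))

# Template pairs transfer cleanly when the target covers both partners:
modeled = transfer_structure("GG-ACGUCC", "GGAAC-UCC", "((....))")
```

A production tool would additionally check that the transferred pairs are chemically plausible in the target sequence; this sketch only shows the alignment-based transfer itself.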
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activity of the invasive beech scale insect, Cryptococcus fagisuga, which creates entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale-insect portion of the disease complex [1]. Here we present a protocol we have developed, based on the artificial infestation technique of Houston [2], which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of BBD management through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
Technique for Studying Arthropod and Microbial Communities within Tree Tissues
Institutions: Northern Arizona University, Acoustic Ecology Institute.
Phloem tissues of pine are habitats for many thousands of organisms. Arthropods and microbes use phloem and cambium tissues to seek mates, lay eggs, rear young, feed, or hide from natural enemies or harsh environmental conditions outside of the tree. Organisms that persist within the phloem habitat are difficult to observe given their location under bark. We provide a technique to preserve intact phloem and prepare it for experimentation with invertebrates and microorganisms. The apparatus is called a ‘phloem sandwich’ and allows for the introduction and observation of arthropods, microbes, and other organisms. This technique has resulted in a better understanding of the feeding behaviors, life-history traits, reproduction, development, and interactions of organisms within tree phloem. The strengths of this technique include the use of inexpensive materials, variability in sandwich size, flexibility to re-open the sandwich or introduce multiple organisms through drilled holes, and the preservation and maintenance of phloem integrity. The phloem sandwich is an excellent educational tool for scientific discovery in both K-12 science courses and university research laboratories.
Environmental Sciences, Issue 93, phloem sandwich, pine, bark beetles, mites, acoustics, phloem
A Venturi Effect Can Help Cure Our Trees
Institutions: University of Padova.
In woody plants, xylem sap moves upward through the vessels along a decreasing gradient of water potential from the groundwater to the foliage. Exploiting these factors and their dynamics, small amounts of sap-compatible liquids (i.e., pesticides) can be injected into the xylem system, reaching their target from inside. This endotherapic method, called "trunk injection" or "trunk infusion" (depending on whether the user supplies an external pressure or not), confines the applied chemicals to the target tree, making it particularly useful in urban settings. The main factors limiting wider use of traditional drilling methods are the negative side effects of the holes that must be drilled around the trunk circumference to gain access to the xylem vessels beneath the bark.
The University of Padova (Italy) recently developed a manual, drill-free instrument with a small, perforated blade that enters the trunk by separating the woody fibers with minimal friction. Furthermore, the lenticular-shaped blade reduces the vessels' cross section, increasing sap velocity and allowing the natural uptake of an external liquid up to the leaves when the transpiration rate is substantial. Ports partially close soon after removal of the blade due to the natural elasticity and turgidity of the plant tissues, and cambial activity completes the healing process in a few weeks.
Environmental Sciences, Issue 80, Trunk injection, systemic injection, xylematic injection, endotherapy, sap flow, Bernoulli principle, plant diseases, pesticides, desiccants
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve our understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules, which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All predictions are tagged with a confidence score, which indicates the expected accuracy of the predictions in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively steer the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. This structural information can be collected by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as among the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission [1-3]. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapping activity spaces. Therefore, daily mobility-activity information can be used as an indicator of exposure to risk factors of infection. A major difficulty, however, and thus a reason for the paucity of studies of infectious disease transmission at the micro scale, is the lack of detailed individual mobility data. In transportation and tourism research, detailed space-time activity data have often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their degree of collaboration greatly affects data quality [4].
Modern technologies such as GPS and mobile communications have made the automatic collection of trajectory data possible. The data collected, however, are not ideal for modeling human space-time activities, being limited by the accuracy of existing devices. There is also no readily available tool for efficiently processing the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, which could be useful in public health studies such as infectious disease transmission modeling.
The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data; we introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation [5] involves the identification of indoor and outdoor parts from pre-processed space-time tracks; again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions; we demonstrate both density surface mapping [6] and density volume rendering [7]. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical and visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
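As a concrete, heavily simplified illustration of two of the steps just described, cleaning noisy fixes and characterizing activity space, here is a sketch on invented (t, x, y) records. The speed threshold is arbitrary, and the actual toolkit operates inside ArcGIS rather than on bare tuples.

```python
import math

def clean(track, max_speed=30.0):
    """Drop (t, x, y) fixes implying an implausible speed (m/s) from the last kept fix."""
    kept = [track[0]]
    for t, x, y in track[1:]:
        pt, px, py = kept[-1]
        dt = t - pt
        if dt > 0 and math.hypot(x - px, y - py) / dt <= max_speed:
            kept.append((t, x, y))
    return kept

def activity_radius(track):
    """Maximum distance of any fix from the track's centroid."""
    xs = [x for _, x, _ in track]
    ys = [y for _, _, y in track]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return max(math.hypot(x - cx, y - cy) for _, x, y in track)

# A GPS glitch at t=2 jumps 500 m in one second and is filtered out
track = [(0, 0, 0), (1, 1, 0), (2, 500, 0), (3, 2, 0), (4, 2, 2)]
cleaned = clean(track)
```

In practice the cleaning stage described above is interactive as well as automatic; a speed filter like this one is only the simplest automatic rule.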
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail, from raw data through all intermediate analyses to the published graphs and statistics, within a single data structure. The data-analysis code harvests the data several times a day and subjects them to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance of individual subjects from protocol to protocol. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
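The kind of harvesting the data-analysis code performs can be illustrated on a toy time-stamped event record. The record format below is invented for illustration; the actual system uses its own MATLAB-based language and a far richer event vocabulary.

```python
def summarize(events):
    """events: list of (timestamp_s, hopper_id) head-entry records.
    Returns per-hopper entry counts and per-hopper inter-entry intervals."""
    counts, last, intervals = {}, {}, []
    for t, hopper in events:
        counts[hopper] = counts.get(hopper, 0) + 1
        if hopper in last:
            intervals.append(t - last[hopper])  # time since last entry at this hopper
        last[hopper] = t
    return counts, intervals

# Three entries at hopper 1 (t = 0, 9, 12 s) and one at hopper 2 (t = 5 s)
counts, intervals = summarize([(0, 1), (5, 2), (9, 1), (12, 1)])
```

Daily summaries of this sort are what let the progress of each mouse be visualized and quantified automatically.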
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Comparative Approach to Characterize the Landscape of Host-Pathogen Protein-Protein Interactions
Institutions: Institut Pasteur , Université Sorbonne Paris Cité, Dana Farber Cancer Institute.
Significant efforts have been devoted to generating large-scale, comprehensive protein-protein interaction network maps. Such maps are instrumental to understanding pathogen-host relationships and have essentially been produced by genetic screening in yeast two-hybrid systems. The recent improvement of protein-protein interaction detection by a Gaussia luciferase-based fragment complementation assay now offers the opportunity to develop integrative, comparative interactomic approaches, which are necessary to rigorously compare the interaction profiles of proteins from different pathogen strain variants against a common set of cellular factors.
This paper focuses on the utility of combining two orthogonal methods to generate protein-protein interaction datasets: yeast two-hybrid (Y2H) and a new assay, the high-throughput Gaussia princeps protein complementation assay (HT-GPCA), performed in mammalian cells.
A large-scale identification of the cellular partners of a pathogen protein is performed by mating-based yeast two-hybrid screening of cDNA libraries using multiple pathogen strain variants. A subset of interacting partners, selected by high-confidence statistical scoring, is then validated in mammalian cells for pairwise interactions with the whole set of pathogen variant proteins using HT-GPCA. This combination of two complementary methods improves the robustness of the interaction dataset and allows a stringent comparative interaction analysis. Such comparative interactomics constitutes a reliable and powerful strategy to decipher pathogen-host interplay.
Immunology, Issue 77, Genetics, Microbiology, Biochemistry, Molecular Biology, Cellular Biology, Biomedical Engineering, Infection, Cancer Biology, Virology, Medicine, Host-Pathogen Interactions, Protein-protein interaction, High-throughput screening, Luminescence, Yeast two-hybrid, HT-GPCA, Network, protein, yeast, cell, culture
Extracting DNA from the Gut Microbes of the Termite (Zootermopsis angusticollis) and Visualizing Gut Microbes
Institutions: California Institute of Technology - Caltech.
Termites are among the few animals known to have the capacity to subsist solely by consuming wood. The termite gut tract contains a dense and species-rich microbial population that assists in the degradation of lignocellulose, predominantly into acetate, the key nutrient fueling termite metabolism (Odelson & Breznak, 1983). Within these microbial populations are bacteria, methanogenic archaea and, in some ("lower") termites, eukaryotic protozoa. Thus, termites are excellent research subjects for studying the interactions among microbial species and the numerous biochemical functions they perform to the benefit of their host. The species composition of microbial populations in termite guts, as well as key genes involved in various biochemical processes, have been explored using molecular techniques (Kudo et al., 1998; Schmit-Wagner et al., 2003; Salmassi & Leadbetter, 2003). These techniques depend on the extraction and purification of high-quality nucleic acids from the termite gut environment. The extraction technique described in this video is a modified compilation of protocols developed for the extraction and purification of nucleic acids from environmental samples (Mor et al., 1994; Berthelet et al., 1996; Purdy et al., 1996; Salmassi & Leadbetter, 2003; Ottesen et al., 2006), and it produces DNA from termite hindgut material suitable for use as a template for polymerase chain reaction (PCR).
Microbiology, issue 4, microbial community, DNA, extraction, gut, termite
Automated Interactive Video Playback for Studies of Animal Communication
Institutions: Texas A&M University (TAMU).
Video playback is a widely used technique for the controlled manipulation and presentation of visual signals in animal communication. In particular, parameter-based computer animation offers the opportunity to independently manipulate any number of behavioral, morphological, or spectral characteristics in the context of realistic, moving images of animals on screen. A major limitation of conventional playback, however, is that the visual stimulus lacks the ability to interact with the live animal. Borrowing from video-game technology, we have created an automated, interactive system for video playback that controls animations in response to real-time signals from a video tracking system. We demonstrated this method by conducting mate-choice trials on female swordtail fish, Xiphophorus birchmanni. Females were given a simultaneous choice between a courting male conspecific and a courting male heterospecific (X. malinche) on opposite sides of an aquarium. The virtual male stimulus was programmed to track the horizontal position of the female, as courting males do in the wild. Mate-choice trials on wild-caught X. birchmanni females were used to validate the prototype's ability to effectively generate a realistic visual stimulus.
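The closed-loop tracking logic can be sketched in a few lines. This is a simplification: the real system drives a full animation from a video-tracking feed, and the smoothing gain below is invented. Each frame, the virtual male moves a fixed fraction of the remaining distance toward the female's tracked horizontal position, which avoids jittery, unnatural jumps.

```python
def follow(female_xs, gain=0.3, start=0.0):
    """Exponential smoothing: per frame, close a fraction `gain` of the gap
    between the stimulus position and the tracked female position."""
    pos, out = start, []
    for x in female_xs:
        pos += gain * (x - pos)
        out.append(round(pos, 3))  # rounded for display/logging
    return out

# Female sits at x = 10; the stimulus converges toward her frame by frame
path = follow([10.0, 10.0, 10.0])
```

Real-time systems typically also clamp the per-frame step to the maximum plausible swimming speed, which this sketch omits.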
Neuroscience, Issue 48, Computer animation, visual communication, mate choice, Xiphophorus birchmanni, tracking
Adjustable Stiffness, External Fixator for the Rat Femur Osteotomy and Segmental Bone Defect Models
Institutions: Queensland University of Technology, RISystem AG.
The mechanical environment around a healing fracture is critically important, as it determines how the bone will heal. Over the past decade there has been great clinical interest in improving bone healing by altering the mechanical environment through the fixation stability around the lesion. One constraint on preclinical animal research in this area is the lack of experimental control over the local mechanical environment within large segmental defects and osteotomies as they heal. In this paper we report on the design and use of an external fixator to study the healing of large segmental bone defects or osteotomies. This device not only allows for controlled axial stiffness across the bone lesion as it heals, but also enables the stiffness to be changed during the healing process in vivo.
The conducted experiments have shown that the fixators were able to maintain a 5 mm femoral defect gap in rats in vivo during unrestricted cage activity for at least 8 weeks. Moreover, we observed no distortion or infections, including pin infections, during the entire healing period. These results demonstrate that our newly developed external fixator achieves reproducible, standardized stabilization and allows alteration of the mechanical environment of large bone defects and various-size osteotomies in vivo in the rat. This confirms that the external fixation device is well suited for preclinical research investigations of bone regeneration and repair using a rat model.
Medicine, Issue 92, external fixator, bone healing, small animal model, large bone defect and osteotomy model, rat model, mechanical environment, mechanobiology.
The Analysis of Purkinje Cell Dendritic Morphology in Organotypic Slice Cultures
Institutions: University of Basel.
Purkinje cells are an attractive model system for studying dendritic development, because they have an impressive dendritic tree that is strictly oriented in the sagittal plane and, in small rodents, develops mostly in the postnatal period [3]. Furthermore, several antibodies are available that selectively and intensely label Purkinje cells, including all processes, with anti-calbindin D28K being the most widely used. For viewing dendrites in living cells, mice expressing EGFP selectively in Purkinje cells [11] are available through the Jackson Laboratory. Organotypic cerebellar slice cultures allow easy experimental manipulation of Purkinje cell dendritic development, because most of the expansion of the Purkinje cell dendritic tree actually takes place during the culture period [4]. We present here a short, reliable, and easy protocol for viewing and analyzing the dendritic morphology of Purkinje cells grown in organotypic cerebellar slice cultures. For many purposes, a quantitative evaluation of the Purkinje cell dendritic tree is desirable. We focus here on two parameters, dendritic tree size and branch point number, which can be rapidly and easily determined from anti-calbindin-stained cerebellar slice cultures. These two parameters yield a reliable and sensitive measure of changes in the Purkinje cell dendritic tree. Using treatments with the protein kinase C (PKC) activator PMA and with metabotropic glutamate receptor 1 (mGluR1) as examples, we demonstrate how differences in dendritic development are visualized and quantitatively assessed. The combination of an extensive dendritic tree, selective and intense immunostaining methods, organotypic slice cultures covering the period of dendritic growth, and a mouse model with Purkinje cell-specific EGFP expression makes Purkinje cells a powerful model system for revealing the mechanisms of dendritic development.
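Once a dendrite has been traced, the two measures described above are simple to compute. Here is a sketch on an invented toy tree (node-to-parent pointers with 2D coordinates), standing in for a traced, calbindin-stained cell; real tracings come from image-analysis software rather than hand-written dictionaries.

```python
import math

def tree_stats(parent, coords):
    """parent: {child_node: parent_node}; coords: {node: (x, y)}.
    Returns (total dendritic length, number of branch points)."""
    # Total length: sum of segment lengths from each node to its parent
    length = sum(math.dist(coords[n], coords[p]) for n, p in parent.items())
    # Branch points: nodes with two or more children
    children = {}
    for n, p in parent.items():
        children[p] = children.get(p, 0) + 1
    branch_points = sum(1 for c in children.values() if c >= 2)
    return length, branch_points

# Toy tree: root 0 branches to 1 and 4; node 1 branches to 2 and 3
parent = {1: 0, 2: 1, 3: 1, 4: 0}
coords = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (-1, 1), 4: (1, 0)}
stats = tree_stats(parent, coords)
```

Whether the soma itself should count as a branch point is a convention the analyst must fix; this toy counts every node with two or more children.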
Neuroscience, Issue 61, dendritic development, dendritic branching, cerebellum, Purkinje cells
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design, but they are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design, including the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of these methods.
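The sequence-selection stage can be caricatured with a toy stochastic search. This is not Protein WISDOM's actual algorithm (which uses rigorous optimization over a physical potential); the surrogate "energy" and the target sequence below are invented purely to show the keep-improving-mutations loop.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def energy(seq, target="MKTAYIAKQR"):
    """Stand-in potential: number of positions deviating from an invented optimum."""
    return sum(a != b for a, b in zip(seq, target))

def select_sequence(length=10, steps=2000, seed=1):
    """Random point mutations, keeping only neutral or improving ones."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    e = energy(seq)
    for _ in range(steps):
        pos = rng.randrange(length)
        old = seq[pos]
        seq[pos] = rng.choice(AMINO_ACIDS)
        e_new = energy(seq)
        if e_new <= e:       # keep neutral or improving mutations
            e = e_new
        else:                # revert worsening mutations
            seq[pos] = old
    return "".join(seq), e

best_seq, best_e = select_sequence()
```

Real sequence selection searches a vastly larger space under a physics-based energy and with deterministic optimization guarantees; the greedy loop here only conveys the shape of the problem.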
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow their visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All of these characteristics need to be considered when deciding which segmentation approach to take.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: electron tomography of resin-embedded, stained samples, and focused ion beam and serial block face scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
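To make approach (4) tangible, here is a bare-bones automated segmentation on a toy 2D "image" with invented intensity values: global thresholding followed by 4-connected component labeling. Production EM pipelines work on noisy 3D volumes with far more sophisticated criteria, but the extract-then-quantify structure is the same.

```python
def label_components(img, threshold):
    """Label 4-connected foreground regions (pixels > threshold).
    Returns (label grid, number of components found)."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and labels[y][x] == 0:
                current += 1
                stack = [(y, x)]          # flood-fill one component
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and img[cy][cx] > threshold and labels[cy][cx] == 0):
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, current

# Three bright regions: a vertical bar and two isolated corner pixels
img = [[0, 9, 0],
       [0, 9, 0],
       [9, 0, 9]]
labels, n = label_components(img, threshold=5)
```

Once features are labeled, per-component statistics (volume, surface area, contacts) follow directly, which is what enables the quantitative analysis step of approach (4).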
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Institutions: Georgia Health Sciences University, Palo Alto Research Center, University of Minnesota.
In order to quantitatively study object perception, whether by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies of perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2].
Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (see also refs. 7, 8). Generally speaking, however, the existing methods have three broad types of shortcomings.
First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore differ from the variability in natural categories and be optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of externally imposed constraints.
Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases.
Third, it is generally hard to quantitatively measure the information available in stimuli created by conventional methods. It would be desirable to create objects and object categories for which the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms.
Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can, in principle, also be applied to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphics toolkit, with the desired manipulations of surface texture, illumination, size, viewpoint, and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper.
We also describe some implementations of these computational algorithms to help illustrate their potential utility. It is important to distinguish the algorithms from their implementations: the implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. In general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have.
Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
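The core idea behind virtual phylogenesis, descent with modification, can be conveyed with a toy sketch. The snippet below is not the published VP algorithm (refs. 9,12,13): it treats a "shape" as a flat parameter vector, copies parents with small random mutations each generation, and uses a random-survival placeholder where the real method applies a selection rule. All function names, the parameter count, the mutation scale, and the population size are invented for illustration.

```python
import random

def mutate(shape, scale=0.05, rng=random):
    """Return a child shape: the parent with Gaussian noise on each parameter."""
    return [p + rng.gauss(0.0, scale) for p in shape]

def virtual_phylogenesis(ancestor, generations=5, offspring=4, rng=random):
    """Grow a category of related shapes by repeated mutation and survival,
    mimicking descent with modification from a common ancestor."""
    population = [ancestor]
    for _ in range(generations):
        children = [mutate(parent, rng=rng) for parent in population
                    for _ in range(offspring)]
        # 'Natural selection' placeholder: keep a random subset of children.
        population = rng.sample(children, k=min(len(children), 8))
    return population

rng = random.Random(0)
ancestor = [0.0] * 6                    # a 6-parameter 'digital embryo'
category = virtual_phylogenesis(ancestor, rng=rng)
print(len(category), len(category[0]))  # prints: 8 6
```

Because every member of the final population descends from the same ancestor, the shapes share a family resemblance, which is what makes the resulting set behave like a natural object category.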
Neuroscience, Issue 69, machine learning, brain, classification, category learning, cross-modal perception, 3-D prototyping, inference
Rapid and Low-cost Prototyping of Medical Devices Using 3D Printed Molds for Liquid Injection Molding
Institutions: University of California, San Francisco, University of California, San Francisco, University of Southern California.
Biologically inert elastomers such as silicone are favorable materials for medical device fabrication, but forming and curing these elastomers using traditional liquid injection molding processes can be an expensive process due to tooling and equipment costs. As a result, it has traditionally been impractical to use liquid injection molding for low-cost, rapid prototyping applications. We have devised a method for rapid and low-cost production of liquid elastomer injection molded devices that utilizes fused deposition modeling 3D printers for mold design and a modified desiccator as an injection system. Low costs and rapid turnaround time in this technique lower the barrier to iteratively designing and prototyping complex elastomer devices. Furthermore, CAD models developed in this process can be later adapted for metal mold tooling design, enabling an easy transition to a traditional injection molding process. We have used this technique to manufacture intravaginal probes involving complex geometries, as well as overmolding over metal parts, using tools commonly available within an academic research laboratory. However, this technique can be easily adapted to create liquid injection molded devices for many other applications.
Bioengineering, Issue 88, liquid injection molding, reaction injection molding, molds, 3D printing, fused deposition modeling, rapid prototyping, medical devices, low cost, low volume, rapid turnaround time.
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Visualizing Neuroblast Cytokinesis During C. elegans Embryogenesis
Institutions: Concordia University.
This protocol describes the use of fluorescence microscopy to image dividing cells within developing Caenorhabditis elegans embryos. In particular, this protocol focuses on how to image dividing neuroblasts, which are found underneath the epidermal cells and may be important for epidermal morphogenesis. Tissue formation is crucial for metazoan development and relies on external cues from neighboring tissues. C. elegans is an excellent model organism to study tissue morphogenesis in vivo due to its transparency and simple organization, making its tissues easy to study via microscopy. Ventral enclosure is the process where the ventral surface of the embryo is covered by a single layer of epithelial cells. This event is thought to be facilitated by the underlying neuroblasts, which provide chemical guidance cues to mediate migration of the overlying epithelial cells. However, the neuroblasts are highly proliferative and also may act as a mechanical substrate for the ventral epidermal cells. Studies using this experimental protocol could uncover the importance of intercellular communication during tissue formation, and could be used to reveal the roles of genes involved in cell division within developing tissues.
Neuroscience, Issue 85, C. elegans, morphogenesis, cytokinesis, neuroblasts, anillin, microscopy, cell division
Analysis of Gene Expression in Emerald Ash Borer (Agrilus planipennis) Using Quantitative Real Time-PCR
Institutions: The Ohio State University.
Emerald ash borer (EAB, Agrilus planipennis) is an exotic invasive pest that has killed millions of ash trees (Fraxinus spp.) in North America.
EAB continues to spread rapidly and attacks ash trees of different ages, from saplings to mature trees. However, to date very little molecular knowledge exists for EAB. We are interested in deciphering the molecular-based physiological processes at the tissue level that aid EAB in successful colonization of ash trees. In this report we show the effective use of quantitative real-time PCR (qRT-PCR) to ascertain mRNA levels in different larval tissues (including midgut, fat bodies and cuticle) and different developmental stages (including 1st-instars, prepupae and adults) of EAB. As an example, a peritrophin gene (herein named AP-PERI1) serves as the gene of interest and a ribosomal protein gene (AP-RP1) as the internal control. Peritrophins are important components of the peritrophic membrane/matrix (PM), which is the lining of the insect gut. The PM has diverse functions including digestion and mechanical protection of the midgut epithelium.
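Normalizing a target gene to an internal control is commonly done with the standard 2^-ΔΔCt (Livak) calculation. The sketch below is a generic illustration of that calculation, not code from the article; the Ct values in the example are hypothetical placeholders, not EAB data.

```python
# Generic 2^-ΔΔCt relative-expression calculation (Livak method).
# All Ct values used below are hypothetical, for illustration only.

def relative_expression(ct_target, ct_control, ct_target_ref, ct_control_ref):
    """Fold change of a target gene in a sample vs. a reference sample,
    normalized to an internal-control gene (e.g. a ribosomal protein)."""
    delta_ct_sample = ct_target - ct_control       # normalize the sample
    delta_ct_ref = ct_target_ref - ct_control_ref  # normalize the reference
    delta_delta_ct = delta_ct_sample - delta_ct_ref
    return 2 ** (-delta_delta_ct)

# Example: the target amplifies 2 cycles earlier (relative to the control)
# in one tissue than in the reference tissue -> 4-fold higher expression.
fold = relative_expression(ct_target=22.0, ct_control=18.0,
                           ct_target_ref=24.0, ct_control_ref=18.0)
print(round(fold, 2))  # prints: 4.0
```

Note that the 2^-ΔΔCt method assumes roughly equal amplification efficiencies for the target and control genes; when efficiencies differ, an efficiency-corrected model is used instead.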
Cellular Biology, Issue 39, quantitative real time-PCR, peritrophin, emerald ash borer, gene expression
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of such datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism