Flies provide an important model for studying complex behavior due to the plethora of genetic tools available to researchers in this field. Studying locomotor behavior in Drosophila melanogaster relies on the ability to quantify changes in motion during, or in response to, a given task. For this reason, a high-resolution video tracking system, such as the one we describe in this paper, is a valuable tool for measuring locomotion in real time. Our protocol involves the use of an initial air pulse to break the flies' momentum, followed by a thirty-second filming period in a square chamber. A tracking program is then used to calculate the instantaneous speed of each fly within the chamber in 10 msec increments. Analysis software then compiles these data and outputs a variety of parameters such as average speed, maximum speed, time spent in motion, and acceleration. This protocol will discuss proper feeding and management of flies for behavioral tasks, handling flies without anesthetization or immobilization, setting up a controlled environment, and running the assay from start to finish.
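The summary parameters named above follow directly from the tracked positions. As an illustrative sketch (not the authors' analysis software), the following Python fragment derives instantaneous speed from a hypothetical array of centroid coordinates sampled every 10 msec; the array, units, and the motion threshold are assumptions.

```python
import numpy as np

DT = 0.010  # sampling interval in seconds (10 msec increments, as in the protocol)
MOTION_THRESHOLD = 2.0  # mm/s; assumed cutoff separating "in motion" from "at rest"

# xy: hypothetical (N, 2) array of one fly's centroid positions in mm,
# one row per 10 msec video frame produced by the tracking program.
xy = np.cumsum(np.random.normal(scale=0.05, size=(3000, 2)), axis=0)

step = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # distance moved per frame (mm)
speed = step / DT                                    # instantaneous speed (mm/s)
accel = np.diff(speed) / DT                          # instantaneous acceleration (mm/s^2)

summary = {
    "average_speed_mm_s": speed.mean(),
    "max_speed_mm_s": speed.max(),
    "fraction_time_in_motion": (speed > MOTION_THRESHOLD).mean(),
    "mean_abs_acceleration_mm_s2": np.abs(accel).mean(),
}
print(summary)
```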
28 Related JoVE Articles
Confocal Imaging of Confined Quiescent and Flowing Colloid-polymer Mixtures
Institutions: University of Houston.
The behavior of confined colloidal suspensions with attractive interparticle interactions is critical to the rational design of materials for directed assembly [1-3], drug delivery [4], improved hydrocarbon recovery [5-7], and flowable electrodes for energy storage [8]. Suspensions containing fluorescent colloids and non-adsorbing polymers are appealing model systems, as the ratio of the polymer radius of gyration to the particle radius and the concentration of polymer control the range and strength of the interparticle attraction, respectively. By tuning the polymer properties and the volume fraction of the colloids, colloid fluids, fluids of clusters, gels, crystals, and glasses can be obtained [9]. Confocal microscopy, a variant of fluorescence microscopy, allows an optically transparent and fluorescent sample to be imaged with high spatial and temporal resolution in three dimensions. In this technique, a small pinhole or slit blocks the emitted fluorescent light from regions of the sample that are outside the focal volume of the microscope optical system. As a result, only a thin section of the sample in the focal plane is imaged. This technique is particularly well suited to probe the structure and dynamics in dense colloidal suspensions at the single-particle scale: the particles are large enough to be resolved using visible light and diffuse slowly enough to be captured at typical scan speeds of commercial confocal systems [10]. Improvements in scan speeds and analysis algorithms have also enabled quantitative confocal imaging of flowing suspensions [11-16, 37]. In this paper, we demonstrate confocal microscopy experiments to probe the confined phase behavior and flow properties of colloid-polymer mixtures. We first prepare colloid-polymer mixtures that are density- and refractive-index matched. Next, we report a standard protocol for imaging quiescent dense colloid-polymer mixtures under varying confinement in thin wedge-shaped cells. Finally, we demonstrate a protocol for imaging colloid-polymer mixtures during microchannel flow.
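For readers who want to reproduce single-particle tracking on their own confocal stacks, a minimal locating-and-linking sketch using the open-source trackpy and pims packages is shown below; the file pattern, particle diameter, and linking parameters are placeholders, not values from this protocol.

```python
import trackpy as tp
import pims  # commonly used alongside trackpy for reading image sequences

# Hypothetical time series of 2D confocal slices of the colloid-polymer mixture.
frames = pims.open("confocal_timeseries/*.tif")

# Locate bright, roughly 11-pixel-diameter particles in each frame.
features = tp.batch(frames, diameter=11, minmass=200)

# Link detections into trajectories, allowing 5 px of motion between frames.
trajectories = tp.link(features, search_range=5, memory=3)

# Ensemble mean-squared displacement, e.g. to distinguish fluid, gel, or glassy dynamics.
msd = tp.emsd(trajectories, mpp=0.1, fps=10)  # microns per pixel and frame rate are assumed
print(msd.head())
```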
Chemistry, Issue 87, confocal microscopy, particle tracking, colloids, suspensions, confinement, gelation, microfluidics, image correlation, dynamics, suspension flow
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
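As a concrete illustration of the simpler, semi-automated end of this spectrum (approach 3), the sketch below applies global thresholding and connected-component labeling to a 3D volume with scikit-image; the file name and minimum feature size are assumptions, and real data sets typically require the more elaborate strategies described above.

```python
import numpy as np
from skimage import io, filters, measure, morphology

# Hypothetical 3D EM volume (z, y, x) loaded from a multi-page TIFF.
volume = io.imread("em_volume.tif").astype(np.float32)

# Global Otsu threshold as a simple starting point for densely stained features.
threshold = filters.threshold_otsu(volume)
binary = volume > threshold

# Remove tiny specks, then label connected components as candidate features.
binary = morphology.remove_small_objects(binary, min_size=500)
labels = measure.label(binary)

for region in measure.regionprops(labels):
    print(region.label, region.area, region.centroid)
```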
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, such as structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as the spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3].

In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
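For orientation only, the following sketch shows how a minimum-norm source estimate of the kind described here could be computed with the open-source MNE-Python package, assuming a forward model built from the child's individual (or age-appropriate) MRI; all file names are placeholders and the actual London Baby Lab pipeline may differ.

```python
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Placeholder inputs: preprocessed evoked response, noise covariance, and a
# forward solution built from the individual or age-specific head model.
evoked = mne.read_evokeds("child_erp-ave.fif", condition=0)
noise_cov = mne.read_cov("child_erp-cov.fif")
fwd = mne.read_forward_solution("child-fwd.fif")

# Minimum-norm inverse operator and source estimate.
inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")

print(stc.data.shape)  # (n_sources, n_times): cortical generators over time
```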
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) are significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
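The binarization and skeletonization step can be prototyped with open-access Python tools; the sketch below is an assumed minimal workflow (file name, filter size, and threshold choice are placeholders), not the authors' optimized pipeline.

```python
import numpy as np
from skimage import io, filters, morphology
from skimage.morphology import skeletonize

# Hypothetical 2D confocal image of a membrane-dye-stained myocyte.
img = io.imread("tats_confocal.tif").astype(np.float32)

# Smooth, threshold, and clean the TATS membrane signal.
smoothed = filters.gaussian(img, sigma=1)
binary = smoothed > filters.threshold_otsu(smoothed)
binary = morphology.remove_small_objects(binary, min_size=50)

# Skeletonize to a one-pixel-wide network for component and length analysis.
skeleton = skeletonize(binary)
print("network pixels:", int(skeleton.sum()))
```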
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Enhanced Reduced Representation Bisulfite Sequencing for Assessment of DNA Methylation at Base Pair Resolution
Institutions: Weill Cornell Medical College, Weill Cornell Medical College, Weill Cornell Medical College, University of Michigan.
DNA methylation pattern mapping is heavily studied in normal and diseased tissues. A variety of methods have been established to interrogate the cytosine methylation patterns in cells. A reduced representation of whole-genome bisulfite sequencing was developed to detect quantitative, base pair resolution cytosine methylation patterns at GC-rich genomic loci. This is accomplished by combining restriction enzyme digestion with bisulfite conversion. Enhanced Reduced Representation Bisulfite Sequencing (ERRBS) increases the biologically relevant genomic loci covered and has been used to profile cytosine methylation in DNA from human, mouse, and other organisms. ERRBS initiates with restriction enzyme digestion of DNA to generate low molecular weight fragments for use in library preparation. These fragments are subjected to standard library construction for next generation sequencing. Bisulfite conversion of unmethylated cytosines prior to the final amplification step allows for quantitative, base-resolution measurement of cytosine methylation levels at covered genomic loci. The protocol can be completed within four days. Despite low complexity in the first three bases sequenced, ERRBS libraries yield high quality data when using a designated sequencing control lane. Mapping and bioinformatics analysis are then performed and yield data that can be easily integrated with a variety of genome-wide platforms. ERRBS can utilize small input material quantities, making it feasible to process human clinical samples and applicable to a range of research applications. The video produced demonstrates critical steps of the ERRBS protocol.
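At base pair resolution, the methylation level of a covered cytosine is simply the fraction of reads reporting C (retained, i.e., methylated and protected from bisulfite conversion) among all reads covering that position. A small illustrative calculation is sketched below; the counts are invented, and the real analysis is done by the dedicated mapping and bioinformatics pipeline mentioned above.

```python
# Hypothetical per-CpG read counts after bisulfite alignment:
# 'meth' = reads with C retained (methylated), 'unmeth' = reads converted to T.
cpg_counts = [
    {"pos": "chr1:1000", "meth": 18, "unmeth": 2},
    {"pos": "chr1:1012", "meth": 3, "unmeth": 17},
    {"pos": "chr1:1030", "meth": 10, "unmeth": 10},
]

for site in cpg_counts:
    coverage = site["meth"] + site["unmeth"]
    level = site["meth"] / coverage  # methylation level at base pair resolution
    print(f'{site["pos"]}\tcoverage={coverage}\tmethylation={level:.2f}')
```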
Genetics, Issue 96, Epigenetics, bisulfite sequencing, DNA methylation, genomic DNA, 5-methylcytosine, high-throughput
Evaluation of Stem Cell Properties in Human Ovarian Carcinoma Cells Using Multi and Single Cell-based Spheres Assays
Institutions: University Hospital Basel, University Hospital Tübingen.
Years of research indicate that ovarian cancers harbor a heterogeneous mixture of cells, including a subpopulation of so-called "cancer stem cells" (CSCs) responsible for tumor initiation, maintenance, and relapse following conventional chemotherapies. Identification of ovarian CSCs is therefore an important goal. A commonly used method to assess CSC potential in vitro is the spheres assay, in which cells are plated under non-adherent culture conditions in serum-free medium supplemented with growth factors, and sphere formation is scored after a few days. Here, we review currently available protocols for human ovarian cancer spheres assays and perform a side-by-side analysis between commonly used multi cell-based assays and a more accurate system based on single cell plating. Our results indicate that both multi cell-based and single cell-based spheres assays can be used to investigate sphere formation in vitro. The more laborious and expensive single cell-based assays are more suitable for functional assessment of individual cells and lead to overall more accurate results, whereas multi cell-based assays can be strongly influenced by the density of plated cells and require titration experiments upfront. Methylcellulose supplementation of multi cell-based assays can be effectively used to reduce mechanical artifacts.
Medicine, Issue 95, Cancer stem cells, spheres assay, ovarian, single cell, SOX2, in vitro assay, ovarian carcinoma
A Whole Cell Bioreporter Approach to Assess Transport and Bioavailability of Organic Contaminants in Water Unsaturated Systems
Institutions: Helmholtz Centre for Environmental Research - UFZ, Helmholtz Centre for Environmental Research - UFZ.
Bioavailability of contaminants is a prerequisite for their effective biodegradation in soil. The average bulk concentration of a contaminant, however, is not an appropriate measure of its availability; bioavailability rather depends on the dynamic interplay between the potential mass transfer (flux) of a compound to a microbial cell and the capacity of the latter to degrade the compound. In water-unsaturated parts of the soil, mycelia have been shown to overcome bioavailability limitations by actively transporting and mobilizing organic compounds over the range of centimeters. Whereas the extent of mycelia-based transport can be quantified easily by chemical means, verification of contaminant bioavailability to bacterial cells requires a biological method. Addressing this constraint, we chose the PAH fluorene (FLU) as a model compound and developed a water-unsaturated model microcosm linking a spatially separated FLU point source and the FLU-degrading bioreporter bacterium Burkholderia sartisoli RP037-mChe by a mycelial network of Pythium ultimum. Since the bioreporter expresses eGFP in response to the PAH flux to the cell, bacterial FLU exposure and degradation could be monitored directly in the microcosms via confocal laser scanning microscopy (CLSM). CLSM and image analyses revealed a significant increase of eGFP expression in the presence of P. ultimum compared to controls without mycelia or FLU, thus indicating FLU bioavailability to bacteria after mycelia-mediated transport. CLSM results were supported by chemical analyses in identical microcosms. The developed microcosm proved suitable to investigate contaminant bioavailability and to concomitantly visualize the involved bacteria-mycelium interactions.
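The reported increase in eGFP expression is, at its core, a comparison of fluorescence intensity between treatment and control images. A minimal, assumed sketch of such a comparison on exported CLSM images is given below; the file names, background threshold, and statistical test are placeholders, not the authors' exact analysis.

```python
import numpy as np
from skimage import io
from scipy import stats

def mean_reporter_intensity(path, background=50):
    """Mean eGFP pixel intensity above an assumed background threshold."""
    img = io.imread(path).astype(np.float32)
    signal = img[img > background]
    return signal.mean() if signal.size else 0.0

# Hypothetical exported CLSM images (eGFP channel) from replicate microcosms.
mycelium = [mean_reporter_intensity(f"with_pythium_{i}.tif") for i in range(1, 6)]
control = [mean_reporter_intensity(f"no_mycelium_{i}.tif") for i in range(1, 6)]

t, p = stats.ttest_ind(mycelium, control)
print(f"mean eGFP (mycelium) = {np.mean(mycelium):.1f}, "
      f"mean eGFP (control) = {np.mean(control):.1f}, p = {p:.3g}")
```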
Environmental Sciences, Issue 94, PAH, bioavailability, mycelia, translocation, volatility, bioreporter, CLSM, biodegradation, fluorene
Functional Reconstitution and Channel Activity Measurements of Purified Wildtype and Mutant CFTR Protein
Institutions: Hospital for Sick Children, University of Toronto, University of Toronto.
The Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) is a unique channel-forming member of the ATP Binding Cassette (ABC) superfamily of transporters. The phosphorylation- and nucleotide-dependent chloride channel activity of CFTR has been frequently studied in whole-cell systems and as single channels in excised membrane patches. Many Cystic Fibrosis-causing mutations have been shown to alter this activity. While a small number of purification protocols have been published, a fast reconstitution method that retains channel activity and a suitable method for studying population channel activity in a purified system have been lacking. Here, rapid methods are described for purification and functional reconstitution of the full-length CFTR protein into proteoliposomes of defined lipid composition, in which it retains activity as a regulated halide channel. This reconstitution method, together with a novel flux-based assay of channel activity, provides a suitable system for studying the population channel properties of wild-type CFTR and the disease-causing mutants F508del- and G551D-CFTR. Specifically, the method has utility in studying the direct effects of phosphorylation, nucleotides, and small molecules such as potentiators and inhibitors on CFTR channel activity. The methods are also amenable to the study of other membrane channels/transporters for anionic substrates.
Biochemistry, Issue 97, Cystic Fibrosis, CFTR, purification, reconstitution, chloride channel, channel function, iodide efflux, potentiation
Forward Genetics Screens Using Macrophages to Identify Toxoplasma gondii Genes Important for Resistance to IFN-γ-Dependent Cell Autonomous Immunity
Institutions: New York Medical College.
Toxoplasma gondii, the causative agent of toxoplasmosis, is an obligate intracellular protozoan pathogen. The parasite invades and replicates within virtually any warm-blooded vertebrate cell type. During parasite invasion of a host cell, the parasite creates a parasitophorous vacuole (PV), originating from the host cell membrane independently of phagocytosis, within which the parasite replicates. While IFN-γ-dependent innate and cell-mediated immunity is important for eventual control of infection, innate immune cells, including neutrophils, monocytes, and dendritic cells, can also serve as vehicles for systemic dissemination of the parasite early in infection. An approach is described that utilizes the host innate immune response, in this case macrophages, in a forward genetic screen to identify parasite mutants with a fitness defect in infected macrophages following activation but normal invasion and replication in naïve macrophages. Thus, the screen isolates parasite mutants that have a specific defect in their ability to resist the effects of macrophage activation. The paper describes two broad phenotypes of mutant parasites following activation of infected macrophages: parasite stasis versus parasite degradation, often in amorphous vacuoles. The parasite mutants are then analyzed to identify the responsible parasite genes specifically important for resistance to induced mediators of cell autonomous immunity. The paper presents a general approach for the forward genetics screen that, in theory, can be modified to target parasite genes important for resistance to specific antimicrobial mediators. It also describes an approach to evaluate the specific macrophage antimicrobial mediators to which the parasite mutant is susceptible. Activation of infected macrophages can also promote parasite differentiation from the tachyzoite to the bradyzoite stage that maintains chronic infection. Therefore, methodology is presented to evaluate the importance of the identified parasite gene to the establishment of chronic infection.
Immunology, Issue 97, Toxoplasma, macrophages, innate immunity, intracellular pathogen, immune evasion, infectious disease, forward genetics, parasite
Robotic Ablation of Atrial Fibrillation
Institutions: Charité — Universitätsmedizin Berlin, Campus Virchow, University Hospital Zurich.
Background: Pulmonary vein isolation (PVI) is an established treatment for atrial fibrillation (AF). During PVI, an electrical conduction block between the pulmonary vein (PV) and left atrium (LA) is created. This conduction block prevents AF, which is triggered by irregular electrical activity originating from the PV. However, transmural atrial lesions are required, which can be challenging to create. Re-conduction and AF recurrence occur in 20 - 40% of cases. Robotic catheter systems aim to improve catheter steerability. Here, a procedure with a new remote catheter system (RCS) is presented. The objective of this article is to show the feasibility of robotic AF ablation with a novel system. Materials and Methods: First, interatrial transseptal puncture is performed using a long sheath and needle under fluoroscopic guidance. The needle is removed and a guide wire is placed in the left superior PV. Then an ablation catheter is positioned in the LA, using the sheath and wire as a guide to the LA. LA angiography is performed over the sheath. A circular mapping catheter is positioned via the long sheath into the LA, and a three-dimensional (3-D) anatomical reconstruction of the LA is performed. The handle of the ablation catheter is positioned in the robotic arm of the Amigo system and the ablation procedure begins. During the ablation procedure, the operator manipulates the ablation catheter via the robotic arm with the use of a remote control. The ablation is performed by creating point-by-point lesions around the left and right PV ostia. Contact force is measured at the catheter tip to provide feedback on catheter-tissue contact. Conduction block is confirmed by recording the PV potentials on the circular mapping catheter and by pacing maneuvers. The operator stays out of the radiation field during ablation. Conclusion: The novel catheter system allows ablation with high catheter stability and low operator fluoroscopy exposure.
Medicine, Issue 99, Atrial fibrillation, catheter ablation, robotic ablation, remote navigation, fluoroscopy, radiation exposure, cardiac arrhythmia
Studying Pancreatic Cancer Stem Cell Characteristics for Developing New Treatment Strategies
Institutions: Spanish National Cancer Research Center, Institute for Research in Biomedicine (IRB Barcelona), Queen Mary University of London.
Pancreatic ductal adenocarcinoma (PDAC) contains a subset of exclusively tumorigenic cancer stem cells (CSCs), which have been shown to drive tumor initiation, metastasis, and resistance to radio- and chemotherapy. Here we describe a specific methodology for culturing primary human pancreatic CSCs as tumor spheres in anchorage-independent conditions. Cells are grown in serum-free, non-adherent conditions in order to enrich for CSCs, while their more differentiated progenies do not survive and proliferate during the initial phase following seeding of single cells. This assay can be used to estimate the percentage of CSCs present in a population of tumor cells. Both the size (which can range from 35 to 250 micrometers) and the number of tumor spheres formed represent the CSC activity harbored in either bulk populations of cultured cancer cells or freshly harvested and digested tumors [1,2]. Using this assay, we recently found that metformin selectively ablates pancreatic CSCs; a finding that was subsequently corroborated by demonstrating diminished expression of pluripotency-associated genes/surface markers and reduced in vivo tumorigenicity of metformin-treated cells. As the final step of preclinical development, we treated mice bearing established tumors with metformin and found significantly prolonged survival. Clinical studies testing the use of metformin in patients with PDAC are currently underway (e.g., NCT01210911, NCT01167738, and NCT01488552). Mechanistically, we found that metformin induces a fatal energy crisis in CSCs by enhancing reactive oxygen species (ROS) production and reducing mitochondrial transmembrane potential. In contrast, non-CSCs were not eliminated by metformin treatment, but rather underwent reversible cell cycle arrest. Therefore, our study serves as a successful example of the potential of in vitro sphere formation as a screening tool to identify compounds that potentially target CSCs, although this technique will require further in vitro and in vivo validation to eliminate false discoveries.
Medicine, Issue 100, Pancreatic ductal adenocarcinoma, cancer stem cells, spheres, metformin (met), metabolism
Reduction of Iatrogenic Atrial Septal Defects with an Anterior and Inferior Transseptal Puncture Site when Operating the Cryoballoon Ablation Catheter
Institutions: Banner-University Medical Center, Mayo Clinic, Medtronic plc, Stanford University.
The cryoballoon catheter ablates atrial fibrillation (AF) triggers in the left atrium (LA) and pulmonary veins (PVs) via transseptal access. The typical transseptal puncture site is the fossa ovalis (FO) – the atrial septum’s thinnest section. A potentially beneficial transseptal site, for the cryoballoon, is near the inferior limbus (IL). This study examines an alternative transseptal site near the IL, which may decrease the frequency of acute iatrogenic atrial septal defect (IASD). Also, the study evaluates the acute pulmonary vein isolation (PVI) success rate utilizing the IL location. 200 patients were evaluated by retrospective chart review for acute PVI success rate with an IL transseptal site. An additional 128 IL transseptal patients were compared to 45 FO transseptal patients by performing Doppler intracardiac echocardiography (ICE) post-ablation to assess transseptal flow after removal of the transseptal sheath. After sheath removal and by Doppler ICE imaging, 42 of 128 (33%) IL transseptal patients demonstrated acute transseptal flow, while 45 of 45 (100%) FO transseptal puncture patients had acute transseptal flow. The difference in acute transseptal flow detection between FO and IL sites was statistically significant (P <0.0001). Furthermore, 186 of 200 patients (with an IL transseptal puncture) did not need additional ablation(s) and had achieved an acute PVI by a “cryoballoon only” technique. An IL transseptal puncture site for cryoballoon AF ablations is an effective location to mediate PVI at all four PVs. Additionally, an IL transseptal location can lower the incidence of acute transseptal flow by Doppler ICE when compared to the FO. Potentially, the IL transseptal site may reduce later IASD complications post-cryoballoon procedures.
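The headline comparison (42 of 128 IL patients versus 45 of 45 FO patients with acute transseptal flow) can be checked with a standard test for 2 x 2 count data. The sketch below uses Fisher's exact test as an example; the abstract does not state which test the authors actually applied.

```python
from scipy.stats import fisher_exact

# Rows: transseptal site (IL, FO); columns: acute transseptal flow present, absent.
table = [[42, 128 - 42],   # inferior limbus: 42 of 128 with acute flow
         [45, 45 - 45]]    # fossa ovalis:    45 of 45 with acute flow

odds_ratio, p_value = fisher_exact(table)
print(f"P = {p_value:.2e}")  # consistent with the reported P < 0.0001
```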
Medicine, Issue 100, Atrial fibrillation, catheter ablation, cryoballoon, transseptal puncture, iatrogenic atrial septal defect
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Institutions: University of California Merced, University of California Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
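The geometric core of both procedures is a parametric helix used to decide which atoms of the bulk glass to keep. A language-agnostic illustration in Python is given below (the original work used AWK and C++); the radius, pitch, and ribbon thickness are arbitrary example values, and the atom coordinates are simulated rather than taken from an equilibrated glass model.

```python
import numpy as np

# Example helix parameters (arbitrary, in angstroms).
radius, pitch, turns = 30.0, 50.0, 3.0
ribbon_half_width = 5.0  # half-thickness of material kept around the helix path

def helix_point(t):
    """Parametric helix: t in [0, 2*pi*turns]."""
    return np.array([radius * np.cos(t), radius * np.sin(t), pitch * t / (2 * np.pi)])

# Stand-in for bulk-glass atom coordinates (N, 3).
rng = np.random.default_rng(0)
atoms = rng.uniform(-60, 60, size=(2000, 3))
atoms[:, 2] = rng.uniform(0, pitch * turns, size=2000)

# Keep atoms lying within ribbon_half_width of the helix path (coarse carving step).
t_samples = np.linspace(0, 2 * np.pi * turns, 500)
path = np.array([helix_point(t) for t in t_samples])
dist_to_path = np.min(np.linalg.norm(atoms[:, None, :] - path[None, :, :], axis=2), axis=1)
nanospring_atoms = atoms[dist_to_path < ribbon_half_width]
print(f"kept {len(nanospring_atoms)} of {len(atoms)} atoms")
```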
Physics, Issue 93, Helical atomistic models; open-source coding; graphical user interface; visualization software; molecular dynamics simulations; graphical processing unit accelerated simulations.
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
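To give a flavor of the time-stamped event analysis, the sketch below (written in Python rather than the MATLAB-based language the authors use) counts daily head entries per hopper from a hypothetical event log; the column names and log format are assumptions.

```python
import pandas as pd

# Hypothetical time-stamped event record: one row per IR-beam head entry.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 08:02:11", "2024-01-01 08:05:40", "2024-01-01 20:15:03",
        "2024-01-02 07:58:22", "2024-01-02 19:44:10",
    ]),
    "mouse_id": ["m1", "m1", "m1", "m1", "m1"],
    "hopper": [1, 2, 1, 1, 2],
})

# Daily head-entry counts per mouse and hopper, the kind of summary the
# automated analysis code could graph each day.
daily = (events
         .groupby(["mouse_id", "hopper", events["timestamp"].dt.date])
         .size()
         .rename("head_entries"))
print(daily)
```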
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and the flexibility of a VA approach in the analysis of biological datasets. Examples of such datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analysis power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
Generation of Comprehensive Thoracic Oncology Database - Tool for Translational Research
Institutions: University of Chicago, University of Chicago, Northshore University Health Systems, University of Chicago, University of Chicago, University of Chicago.
The Thoracic Oncology Program Database Project was created to serve as a comprehensive, verified, and accessible repository for well-annotated cancer specimens and clinical data to be available to researchers within the Thoracic Oncology Research Program. This database also captures a large volume of genomic and proteomic data obtained from various tumor tissue studies. A team of clinical and basic science researchers, a biostatistician, and a bioinformatics expert was convened to design the database. Variables of interest were clearly defined and their descriptions were written within a standard operating manual to ensure consistency of data annotation. Using a protocol for prospective tissue banking and another protocol for retrospective banking, tumor and normal tissue samples from patients consented to these protocols were collected. Clinical information such as demographics, cancer characterization, and treatment plans for these patients were abstracted and entered into an Access database. Proteomic and genomic data have been included in the database and have been linked to clinical information for patients described within the database. The data from each table were linked using the relationships function in Microsoft Access to allow the database manager to connect clinical and laboratory information during a query. The queried data can then be exported for statistical analysis and hypothesis generation.
Medicine, Issue 47, Database, Thoracic oncology, Bioinformatics, Biorepository, Microsoft Access, Proteomics, Genomics
Absolute Quantum Yield Measurement of Powder Samples
Institutions: Hitachi High Technologies America.
Measurement of fluorescence quantum yield has become an important tool in the development, evaluation, quality control, and research of illumination, AV equipment, organic EL materials, films, filters, and fluorescent probes for the bio-industry.
Quantum yield is calculated as the ratio of the number of photons emitted by a material to the number of photons it absorbs. The higher the quantum yield, the better the efficiency of the fluorescent material.
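In the integrating-sphere (absolute) method used here, both photon numbers are obtained from spectral integrals measured for the sample and for a reference (blank): the emitted photons from the emission region and the absorbed photons from the attenuation of the scattered excitation light. A generic form of this relation is sketched below; the exact correction terms implemented by the instrument software may differ.

\[
\Phi \;=\; \frac{N_{\mathrm{em}}}{N_{\mathrm{abs}}}
\;=\; \frac{\displaystyle\int_{\mathrm{emission}} \bigl[I_{\mathrm{sample}}(\lambda)-I_{\mathrm{blank}}(\lambda)\bigr]\,\mathrm{d}\lambda}
           {\displaystyle\int_{\mathrm{excitation}} \bigl[I_{\mathrm{blank}}(\lambda)-I_{\mathrm{sample}}(\lambda)\bigr]\,\mathrm{d}\lambda}
\]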
For the measurements featured in this video, we will use the Hitachi F-7000 fluorescence spectrophotometer equipped with the Quantum Yield measuring accessory and Report Generator program. All the information provided applies to this system.
Measurement of quantum yield in powder samples is performed following these steps:
Generation of instrument correction factors for the excitation and emission monochromators. This is an important requirement for the correct measurement of quantum yield. It has been performed in advance for the full measurement range of the instrument and will not be shown in this video due to time limitations.
Measurement of integrating sphere correction factors. The purpose of this step is to take into consideration reflectivity characteristics of the integrating sphere used for the measurements.
Reference and Sample measurement using direct excitation and indirect excitation.
Quantum Yield calculation using Direct and Indirect excitation. Direct excitation is when the sample directly faces the excitation beam, which would be the normal measurement setup. However, because we use an integrating sphere, a portion of the emitted photons resulting from the sample's fluorescence is reflected by the integrating sphere and re-excites the sample, so indirect excitation must also be taken into consideration. This is accomplished by measuring the sample placed in the port facing the emission monochromator, calculating the indirect quantum yield, and correcting the direct quantum yield calculation.
Corrected quantum yield calculation.
Chromaticity coordinates calculation using Report Generator program.
The Hitachi F-7000 Quantum Yield Measurement System offers the following advantages for this application:
High sensitivity (S/N ratio 800 or better RMS; signal is the Raman band of water measured under the following conditions: Ex wavelength 350 nm, band pass Ex and Em 5 nm, response 2 sec; noise is measured at the maximum of the Raman peak). High sensitivity allows measurement of samples even with low quantum yield. Using this system we have measured quantum yields as low as 0.1 for a sample of salicylic acid and as high as 0.8 for a sample of magnesium tungstate.
Highly accurate measurement with a dynamic range of 6 orders of magnitude allows for measurements of both sharp scattering peaks with high intensity, as well as broad fluorescence peaks of low intensity under the same conditions.
High measuring throughput and reduced light exposure to the sample, due to a high scanning speed of up to 60,000 nm/minute and automatic shutter function.
Measurement of quantum yield over a wide wavelength range from 240 to 800 nm.
Accurate quantum yield measurements are the result of collecting instrument spectral response and integrating sphere correction factors before measuring the sample.
Large selection of calculated parameters provided by dedicated and easy to use software.
During this video we will measure sodium salicylate in powder form which is known to have a quantum yield value of 0.4 to 0.5.
Molecular Biology, Issue 63, Powders, Quantum, Yield, F-7000, Quantum Yield, phosphor, chromaticity, Photo-luminescence
Concentration of Metabolites from Low-density Planktonic Communities for Environmental Metabolomics using Nuclear Magnetic Resonance Spectroscopy
Institutions: RIKEN Advanced Science Institute, Yokohama City University, RIKEN Plant Science Center, Nagoya University.
Environmental metabolomics is an emerging field that is promoting new understanding of how organisms respond to and interact with the environment and each other at the biochemical level [1]. Nuclear magnetic resonance (NMR) spectroscopy is one of several technologies, including gas chromatography-mass spectrometry (GC-MS), with considerable promise for such studies. Advantages of NMR are that it is suitable for untargeted analyses, provides structural information, and yields spectra that can be queried in quantitative and statistical manners against recently available databases of individual metabolite spectra [2,3]. In addition, NMR spectral data can be combined with data from other omics levels (e.g., transcriptomics, genomics) to provide a more comprehensive understanding of the physiological responses of taxa to each other and the environment [4-6]. However, NMR is less sensitive than other metabolomic techniques, making it difficult to apply to natural microbial systems where sample populations can be low-density and metabolite concentrations low compared to metabolites from well-defined and readily extractable sources such as whole tissues, biofluids, or cell cultures. Consequently, the few direct environmental metabolomic studies of microbes performed to date have been limited to culture-based or easily defined high-density ecosystems, such as host-symbiont systems, constructed co-cultures, or manipulations of the gut environment where stable isotope labeling can additionally be used to enhance NMR signals [7-12]. Methods that facilitate the concentration and collection of environmental metabolites at concentrations suitable for NMR are lacking. Since recent attention has been given to the environmental metabolomics of organisms within the aquatic environment, where much of the energy and material flow is mediated by the planktonic community [13,14], we have developed a method for the concentration and extraction of whole-community metabolites from planktonic microbial systems by filtration. Commercially available hydrophilic poly-1,1-difluoroethene (PVDF) filters are specially treated to completely remove extractables, which can otherwise appear as contaminants in subsequent analyses. These treated filters are then used to filter environmental or experimental samples of interest. Filters containing the wet sample material are lyophilized, and aqueous-soluble metabolites are extracted directly for conventional NMR spectroscopy using a standardized potassium phosphate extraction buffer [2]. Data derived from these methods can be analyzed statistically to identify meaningful patterns, or integrated with other omics levels for a comprehensive understanding of community and ecosystem function.
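Once spectra are acquired, a typical first statistical pass (per the PCA keyword below) is principal component analysis on binned spectra. A minimal, assumed sketch is shown here; the bin matrix is simulated, not real plankton data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix of binned 1H NMR spectra: rows = samples, columns = chemical-shift bins.
rng = np.random.default_rng(0)
spectra = rng.lognormal(mean=0.0, sigma=0.5, size=(12, 200))

# Autoscale bins, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
for i, (pc1, pc2) in enumerate(scores, start=1):
    print(f"sample {i:2d}: PC1 = {pc1:6.2f}, PC2 = {pc2:6.2f}")
```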
Molecular Biology, Issue 62, environmental metabolomics, metabolic profiling, microbial ecology, plankton, NMR spectroscopy, PCA
Lensless Fluorescent Microscopy on a Chip
Institutions: University of California, Los Angeles .
On-chip lensless imaging in general aims to replace bulky lens-based optical microscopes with simpler and more compact designs, especially for high-throughput screening applications. This emerging technology platform has the potential to eliminate the need for bulky and/or costly optical components through the help of novel theories and digital reconstruction algorithms. Along the same lines, here we demonstrate an on-chip fluorescent microscopy modality that can achieve e.g., <4 μm spatial resolution over an ultra-wide field-of-view (FOV) of >0.6 - 8 cm2 without the use of any lenses, mechanical scanning or thin-film based interference filters. In this technique, fluorescent excitation is achieved through a prism or hemispherical-glass interface illuminated by an incoherent source. After interacting with the entire object volume, this excitation light is rejected by the total-internal-reflection (TIR) process that is occurring at the bottom of the sample micro-fluidic chip. The fluorescent emission from the excited objects is then collected by a fiber-optic faceplate or a taper and is delivered to an optoelectronic sensor array such as a charge-coupled device (CCD). By using a compressive-sampling based decoding algorithm, the acquired lensfree raw fluorescent images of the sample can be rapidly processed to yield e.g., <4 μm resolution over an FOV of >0.6 - 8 cm2. Moreover, vertically stacked micro-channels that are separated by e.g., 50 - 100 μm can also be successfully imaged using the same lensfree on-chip microscopy platform, which further increases the overall throughput of this modality. This compact on-chip fluorescent imaging platform, with a rapid compressive decoder behind it, could be rather valuable for high-throughput cytometry, rare-cell research and microarray-analysis.
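The compressive-sampling decoder is, in essence, a sparse-recovery problem: the raw lensfree image is modeled as a known system response applied to a sparse fluorescent object, which is recovered by seeking the sparsest solution consistent with the measurement. A toy one-dimensional illustration using an L1-regularized least-squares solver is sketched below; it is not the authors' reconstruction code, and the measurement matrix is invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy sparse "object": a few isolated fluorescent emitters along one dimension.
n = 200
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = rng.uniform(1, 3, size=5)

# Assumed, known measurement matrix (stand-in for the calibrated system response).
A = rng.normal(size=(80, n)) / np.sqrt(80)
y = A @ x_true + 0.01 * rng.normal(size=80)

# L1-regularized recovery of the sparse object from fewer measurements than unknowns.
x_hat = Lasso(alpha=0.005, max_iter=50000).fit(A, y).coef_
print("recovered nonzeros:", np.flatnonzero(x_hat > 0.5), "true:", np.flatnonzero(x_true))
```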
Bioengineering, Issue 54, Lensless Microscopy, Fluorescent On-chip Imaging, Wide-field Microscopy, On-Chip Cytometry, Compressive Sampling/Sensing
Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation
Institutions: Piedmont Heart Institute, Medtronic Inc..
Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion.
Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of typical left atrial (LA) pressure waveform is observed with recordings of PA pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible.
We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of occlusive pressure waveform with limited initial training.
Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion and it accurately predicts electrical isolation. Utilization of this method results in significant decrease in fluoroscopy time and volume of contrast.
Medicine, Issue 72, Anatomy, Physiology, Cardiology, Biomedical Engineering, Surgery, Cardiovascular System, Cardiovascular Diseases, Surgical Procedures, Operative, Investigative Techniques, Atrial fibrillation, Cryoballoon Ablation, Pulmonary Vein Occlusion, Pulmonary Vein Isolation, electrophysiology, catheterization, heart, vein, clinical, surgical device, surgical techniques
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras [1]. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation.

Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in [1-5]). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame [6,7]. Binary defocusing DFP methods can achieve even greater speeds [8].

Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis [9], facial animation [10], cardiac mechanics studies [11], and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
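The depth computation rests on recovering the phase of the projected sinusoid from the three fringe images. For a standard three-step algorithm with 2π/3 phase shifts, the wrapped phase can be computed as in the sketch below; this is the textbook formula, and the exact algorithm in the authors' binary-defocusing system may differ.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting (shifts of -2*pi/3, 0, +2*pi/3): wrapped phase in (-pi, pi]."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic fringe signals distorted by a hypothetical surface phase.
x = np.linspace(0, 4 * np.pi, 640)
surface_phase = 0.5 * np.sin(x / 2)                 # stand-in for depth-induced distortion
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
i1, i2, i3 = (0.5 + 0.5 * np.cos(x + surface_phase + s) for s in shifts)

phi = wrapped_phase(i1, i2, i3)  # unwrapping and phase-to-depth calibration follow in practice
print(phi.shape, phi.min(), phi.max())
```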
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
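The ~10-30 nm localization precision comes from fitting the diffraction-limited image of each single molecule, most commonly with a 2D Gaussian, and taking the fitted center as the molecule's position. A minimal, assumed single-spot fit is sketched below on synthetic data; production FPALM analysis uses dedicated software with spot finding, thresholding, and drift correction.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    """2D Gaussian spot model, returned flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset).ravel()

# Synthetic 15 x 15 pixel region around one single-molecule spot.
y, x = np.mgrid[0:15, 0:15]
rng = np.random.default_rng(2)
spot = gauss2d((x, y), amp=200, x0=7.3, y0=6.8, sigma=1.4, offset=10).reshape(15, 15)
spot = spot + rng.normal(scale=5, size=spot.shape)

popt, _ = curve_fit(gauss2d, (x, y), spot.ravel(), p0=(150, 7, 7, 1.5, 0))
print(f"localized center: x = {popt[1]:.2f} px, y = {popt[2]:.2f} px")
```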
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Experimental Measurement of Settling Velocity of Spherical Particles in Unconfined and Confined Surfactant-based Shear Thinning Viscoelastic Fluids
Institutions: The University of Texas at Austin.
An experimental study is performed to measure the terminal settling velocities of spherical particles in surfactant based shear thinning viscoelastic (VES) fluids. The measurements are made for particles settling in unbounded fluids and fluids between parallel walls. VES fluids over a wide range of rheological properties are prepared and rheologically characterized. The rheological characterization involves steady shear-viscosity and dynamic oscillatory-shear measurements to quantify the viscous and elastic properties respectively. The settling velocities under unbounded conditions are measured in beakers having diameters at least 25x the diameter of particles. For measuring settling velocities between parallel walls, two experimental cells with different wall spacing are constructed. Spherical particles of varying sizes are gently dropped in the fluids and allowed to settle. The process is recorded with a high resolution video camera and the trajectory of the particle is recorded using image analysis software. Terminal settling velocities are calculated from the data.
The impact of elasticity on settling velocity in unbounded fluids is quantified by comparing the experimental settling velocity to the settling velocity calculated by the inelastic drag predictions of Renaud et al. [1]. Results show that elasticity of fluids can increase or decrease the settling velocity. The magnitude of reduction/increase is a function of the rheological properties of the fluids and properties of particles. Confining walls are observed to cause a retardation effect on settling, and the retardation is measured in terms of wall factors.
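For context, the unbounded Newtonian baseline in the creeping-flow (low Reynolds number) limit is the Stokes terminal velocity, and confinement is commonly expressed as a wall factor, the ratio of confined to unbounded settling velocity. A small sketch of that baseline calculation is shown below with example values; it is not the inelastic drag correlation of Renaud et al. used in the paper.

```python
# Stokes terminal settling velocity of a sphere in an unbounded Newtonian fluid
# (valid for particle Reynolds number << 1).
g = 9.81            # m/s^2
d = 2e-3            # particle diameter, m (example value)
rho_p = 2500.0      # particle density, kg/m^3 (example value)
rho_f = 1000.0      # fluid density, kg/m^3 (example value)
mu = 0.5            # fluid viscosity, Pa*s (example zero-shear value)

v_stokes = (rho_p - rho_f) * g * d**2 / (18.0 * mu)
re_p = rho_f * v_stokes * d / mu          # check the creeping-flow assumption
wall_factor = 0.6                          # example measured v_confined / v_unbounded
print(f"v_Stokes = {v_stokes:.4f} m/s, Re_p = {re_p:.3f}, "
      f"v_confined = {wall_factor * v_stokes:.4f} m/s")
```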
Physics, Issue 83, chemical engineering, settling velocity, Reynolds number, shear thinning, wall retardation
A Practical Guide to Phylogenetics for Nonexperts
Institutions: The George Washington University.
Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many of these researchers are new to the topic, which presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is not by any means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies and as an educational resource that could be incorporated into a classroom or teaching lab.
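As a small taste of the final visualization step, the sketch below reads a Newick tree produced by any of the tree-building programs discussed here and prints it with Biopython; the file name is a placeholder, and the pipeline itself relies on the dedicated tools described in the protocol.

```python
from Bio import Phylo

# Hypothetical Newick file exported from a maximum likelihood or Bayesian analysis.
tree = Phylo.read("my_genes.treefile", "newick")

tree.ladderize()            # order clades for a tidier display
Phylo.draw_ascii(tree)      # quick text rendering of the phylogeny

# Terminal (tip) names, e.g. to cross-check against the alignment.
print([leaf.name for leaf in tree.get_terminals()])
```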
Basic Protocol, Issue 84, phylogenetics, multiple sequence alignments, phylogenetic tree, BLAST executables, basic local alignment search tool, Bayesian models
Quantitative Optical Microscopy: Measurement of Cellular Biophysical Features with a Standard Optical Microscope
Institutions: Oregon Health & Science University, School of Medicine, Oregon Health & Science University, School of Medicine, Oregon Health & Science University, School of Medicine.
We describe the use of a standard optical microscope to perform quantitative measurements of mass, volume, and density on cellular specimens through a combination of bright field and differential interference contrast imagery. Two primary approaches are presented: noninterferometric quantitative phase microscopy (NIQPM), to perform measurements of total cell mass and subcellular density distribution, and Hilbert transform differential interference contrast microscopy (HTDIC), to determine volume. NIQPM is based on a simplified model of wave propagation, termed the paraxial approximation, with three underlying assumptions: low numerical aperture (NA) illumination, weak scattering, and weak absorption of light by the specimen. Fortunately, unstained cellular specimens satisfy these assumptions, and low NA illumination is easily achieved on commercial microscopes. HTDIC is used to obtain volumetric information from through-focus DIC imagery under high NA illumination conditions. High NA illumination enables enhanced sectioning of the specimen along the optical axis. Hilbert transform processing of the DIC image stacks greatly enhances edge detection algorithms for localization of the specimen borders in three dimensions by separating the gray values of the specimen intensity from those of the background. The primary advantages of NIQPM and HTDIC lie in their technological accessibility using "off-the-shelf" microscopes. There are two basic limitations of these methods: slow z-stack acquisition time on commercial scopes currently precludes the investigation of phenomena faster than 1 frame/minute, and diffraction effects restrict the utility of NIQPM and HTDIC to objects from 0.2 up to 10 (NIQPM) and 20 (HTDIC) μm in diameter, respectively. Hence, the specimen and its associated time dynamics of interest must meet certain size and temporal constraints to enable the use of these methods. Excitingly, most fixed cellular specimens are readily investigated with these methods.
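To make the HTDIC processing step concrete, the sketch below applies a Hilbert transform along one axis of a through-focus DIC stack and takes the magnitude of the analytic signal, which helps separate specimen intensity from background before edge detection; the stack is simulated and all parameter choices are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical through-focus DIC stack: (z, y, x).
rng = np.random.default_rng(3)
stack = rng.normal(loc=100.0, scale=1.0, size=(21, 128, 128))
stack[10, 50:78, 50:78] += np.linspace(-20, 20, 28)  # crude DIC-like shadow-cast object

# Analytic-signal magnitude along the shear (x) axis after removing the mean background.
analytic = hilbert(stack - stack.mean(axis=(1, 2), keepdims=True), axis=2)
envelope = np.abs(analytic)

# Simple envelope threshold as a stand-in for the 3D border-localization step.
mask = envelope > envelope.mean() + 3 * envelope.std()
print("voxels flagged as candidate specimen border:", int(mask.sum()))
```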
Bioengineering, Issue 86, Label-free optics, quantitative microscopy, cellular biophysics, cell mass, cell volume, cell density
Phage Phenomics: Physiological Approaches to Characterize Novel Viral Proteins
Institutions: San Diego State University, San Diego State University, San Diego State University, San Diego State University, San Diego State University, Argonne National Laboratory, Broad Institute.
Current investigations into phage-host interactions are dependent on extrapolating knowledge from (meta)genomes. Interestingly, 60 - 95% of all phage sequences share no homology with currently annotated proteins. As a result, a large proportion of phage genes are annotated as hypothetical. This reality heavily affects the annotation of both structural and auxiliary metabolic genes. Here we present phenomic methods designed to capture the physiological response(s) of a selected host during expression of one of these unknown phage genes. Multi-phenotype Assay Plates (MAPs) are used to monitor the diversity of host substrate utilization and subsequent biomass formation, while metabolomics provides by-product analysis by monitoring metabolite abundance and diversity. Both tools are used simultaneously to provide a phenotypic profile associated with expression of a single putative phage open reading frame (ORF). Representative results for both methods are compared, highlighting the phenotypic profile differences of a host carrying either putative structural or metabolic phage genes. In addition, the visualization techniques and high-throughput computational pipelines that facilitated experimental analysis are presented.
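Downstream of the MAP incubations, the phenotypic readout is usually summarized per well as, for example, the area under the growth curve or the maximum growth rate. An assumed minimal summary of one hypothetical well is sketched below; it is not the authors' computational pipeline.

```python
import numpy as np

# Hypothetical optical density readings for one MAP well over 24 hr (hourly).
time_hr = np.arange(0, 25)
od = 0.05 + 0.9 / (1.0 + np.exp(-(time_hr - 10) / 2.0))  # logistic-like growth curve

auc = np.trapz(od, time_hr)                              # area under the growth curve
growth_rate = np.max(np.gradient(np.log(od), time_hr))   # max specific growth rate (1/hr)
print(f"AUC = {auc:.2f} OD*hr, max specific growth rate = {growth_rate:.3f} per hr")
```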
Immunology, Issue 100, phenomics, phage, viral metagenome, Multi-phenotype Assay Plates (MAPs), continuous culture, metabolomics