Zebrafish have become a widely used model organism for investigating the mechanisms that underlie developmental biology and for studying human disease pathology, owing to their considerable degree of genetic conservation with humans. Chemical genetics entails testing the effect that small molecules have on a biological process and is becoming a popular translational research method for identifying therapeutic compounds. Zebrafish are particularly appealing for chemical genetics because they produce large clutches of transparent, externally fertilized embryos. Furthermore, zebrafish embryos can be easily drug treated by the simple addition of a compound to the embryo media. Using whole-mount in situ hybridization (WISH), mRNA expression can be clearly visualized within zebrafish embryos. Together, chemical genetics and WISH make the zebrafish a potent whole-organism context in which to determine the cellular and physiological effects of small molecules. Innovative advances have been made in technologies that utilize machine-based screening procedures; however, for many labs such options are inaccessible or cost-prohibitive. The protocol described here explains how to execute a manual high-throughput chemical genetic screen that requires basic resources and can be accomplished efficiently by a single individual or small team. Thus, this protocol provides a feasible strategy that research groups can implement to perform chemical genetics in zebrafish, which can be useful for gaining fundamental insights into developmental processes and disease mechanisms, and for identifying novel compounds and signaling pathways with medically relevant applications.
Genome Editing with CompoZr Custom Zinc Finger Nucleases (ZFNs)
Institutions: Sigma Life Science.
Genome editing is a powerful technique that can be used to elucidate gene function and the genetic basis of disease. Traditional gene editing methods such as chemical-based mutagenesis or random integration of DNA sequences confer indiscriminate genetic changes in an overall inefficient manner and require incorporation of undesirable synthetic sequences or use of aberrant culture conditions, potentially confounding biological study. By contrast, transient ZFN expression in a cell can facilitate precise, heritable gene editing in a highly efficient manner without the need for administration of chemicals or integration of synthetic transgenes.
Zinc finger nucleases (ZFNs) are enzymes that bind and cut distinct sequences of double-stranded DNA (dsDNA). A functional CompoZr ZFN unit consists of two individual monomeric proteins that each bind a DNA "half-site" of approximately 15-18 nucleotides (see Figure 1). When two ZFN monomers "home" to their adjacent target sites, the DNA-cleavage domains dimerize and create a double-strand break (DSB) in the DNA.1
Introduction of ZFN-mediated DSBs into the genome lays a foundation for highly efficient genome editing. Imperfect repair of DSBs in a cell via the non-homologous end-joining (NHEJ) DNA repair pathway can result in small insertions and deletions (indels). Creation of indels within the coding sequence of a gene can result in a frameshift and subsequent functional knockout of the gene locus at high efficiency.2
While this protocol describes the use of ZFNs to create a gene knockout, integration of transgenes may also be conducted via homology-directed repair at the ZFN cut site.
The CompoZr Custom ZFN Service represents a systematic, comprehensive, and well-characterized approach to targeted gene editing with ZFN technology for the scientific community. Sigma scientists work closely with investigators to 1) perform due diligence analysis, including analysis of relevant gene structure, biology, and model system pursuant to the project goals, 2) apply this knowledge to develop a sound targeting strategy, and 3) design, build, and functionally validate ZFNs for activity in a relevant cell line. The investigator receives positive-control genomic DNA and primers, and ready-to-use ZFN reagents supplied in both plasmid DNA and in vitro-transcribed mRNA formats. These reagents may then be delivered for transient expression in the investigator’s cell line or cell type of choice. Samples are then tested for gene editing at the locus of interest by standard molecular biology techniques, including PCR amplification, enzymatic digest, and electrophoresis. After a positive signal for gene editing is detected in the initial population, cells are single-cell cloned and genotyped to identify mutant clones/alleles.
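The genotyping readout described above ultimately comes down to asking whether a clone carries an indel at the target locus and whether that indel shifts the reading frame. A minimal sketch of that logic in Python (the helper name and toy sequences are hypothetical; real genotyping would align the amplicon sequences rather than simply compare lengths):

```python
def classify_indel(ref_cds, mutant_cds):
    """Infer the net indel size from the length difference between a
    reference coding sequence and a clone's sequence, and flag whether
    it shifts the reading frame (hypothetical helper for illustration)."""
    delta = len(mutant_cds) - len(ref_cds)
    if delta == 0:
        return "no net indel (or substitution only)"
    kind = "insertion" if delta > 0 else "deletion"
    if delta % 3 != 0:
        return f"{abs(delta)} bp {kind}: frameshift, likely knockout"
    return f"{abs(delta)} bp {kind}: in-frame, may retain function"

# A 1 bp deletion breaks the frame; a 3 bp deletion does not.
print(classify_indel("ATGGCTGACGGT", "ATGGCTACGGT"))
print(classify_indel("ATGAAACCC", "ATGCCC"))
```

Only indels whose length is not a multiple of three disrupt the downstream reading frame, which is why NHEJ-derived 1-2 bp indels are so effective at creating functional knockouts.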
Genetics, Issue 64, Molecular Biology, Zinc Finger Nuclease, Genome Engineering, Genomic Editing, Gene Modification, Gene Knockout, Gene Integration, non-homologous end joining, homologous recombination, targeted genome editing
DNA Methylation: Bisulphite Modification and Analysis
Institutions: Garvan Institute of Medical Research, University of NSW.
Epigenetics describes the heritable changes in gene function that occur independently of the DNA sequence. The molecular basis of epigenetic gene regulation is complex, but essentially involves modifications to the DNA itself or to the proteins with which DNA associates. The predominant epigenetic modification of DNA in mammalian genomes is methylation of cytosine nucleotides (5-MeC). DNA methylation provides instructions to the gene expression machinery as to where and when a gene should be expressed. The primary target sequence for DNA methylation in mammals is 5'-CpG-3' dinucleotides (Figure 1). CpG dinucleotides are not uniformly distributed throughout the genome, but are concentrated in regions of repetitive genomic sequences and in CpG "islands" commonly associated with gene promoters (Figure 1). DNA methylation patterns are established early in development, modulated during tissue-specific differentiation, and disrupted in many disease states including cancer. To understand the biological role of DNA methylation and its role in human disease, precise, efficient, and reproducible methods are required to detect and quantify individual 5-MeCs.
This protocol for bisulphite conversion is the "gold standard" for DNA methylation analysis and facilitates identification and quantification of DNA methylation at single-nucleotide resolution. The chemistry of cytosine deamination by sodium bisulphite involves three steps (Figure 2): (1) Sulphonation: addition of bisulphite to the 5-6 double bond of cytosine; (2) Hydrolytic deamination: deamination of the resulting cytosine-bisulphite derivative to give a uracil-bisulphite derivative; (3) Alkali desulphonation: removal of the sulphonate group by alkali treatment to give uracil. Bisulphite preferentially deaminates cytosine to uracil in single-stranded DNA, whereas 5-MeC is refractory to bisulphite-mediated deamination. Upon PCR amplification, uracil is amplified as thymine while 5-MeC residues remain as cytosines, allowing methylated CpGs to be distinguished from unmethylated CpGs by the presence of a cytosine "C" versus a thymine "T" residue during sequencing.
DNA modification by bisulphite conversion is a well-established protocol that can be exploited for many methods of DNA methylation analysis. Since the detection of 5-MeC by bisulphite conversion was first demonstrated by Frommer et al.1 and Clark et al.2, methods based around bisulphite conversion of genomic DNA account for the majority of new data on DNA methylation. Different methods of post-PCR analysis may be utilized, depending on the degree of specificity and resolution of methylation required. Cloning and sequencing is still the most readily available method that can give single-nucleotide resolution for methylation across the DNA molecule.
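The chemistry above has a simple informatic consequence: after conversion and PCR, a reference cytosine that still reads as "C" was methylated, while one that reads as "T" was not. A toy simulation of this logic (function names, sequences, and methylated positions are illustrative only):

```python
def bisulfite_convert(seq, methylated_positions):
    """Simulate bisulphite chemistry on one strand: unmethylated C
    deaminates to U (read as T after PCR), while 5-MeC is refractory
    and remains C. `methylated_positions` is an assumed set of
    0-based indices carrying 5-MeC."""
    return "".join(
        base if base != "C" or i in methylated_positions else "T"
        for i, base in enumerate(seq)
    )

def call_methylation(reference, converted_read):
    """At each reference C position, call C = methylated and
    T = unmethylated in the converted read."""
    calls = {}
    for i, (ref, obs) in enumerate(zip(reference, converted_read)):
        if ref == "C":
            calls[i] = "methylated" if obs == "C" else "unmethylated"
    return calls

ref = "ACGTCGAC"
read = bisulfite_convert(ref, methylated_positions={1})  # CpG at index 1 methylated
print(read)                       # ACGTTGAT
print(call_methylation(ref, read))
```

Real pipelines must additionally handle incomplete conversion, PCR bias, and strand-specific alignment, but the base-calling rule is exactly this C-versus-T comparison.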
Genetics, Issue 56, epigenetics, DNA methylation, Bisulphite, 5-methylcytosine (5-MeC), PCR
Selective Capture of 5-hydroxymethylcytosine from Genomic DNA
Institutions: Emory University School of Medicine, The University of Chicago.
5-methylcytosine (5-mC) constitutes ~2-8% of the total cytosines in human genomic DNA and impacts a broad range of biological functions, including gene expression, maintenance of genome integrity, parental imprinting, X-chromosome inactivation, regulation of development, aging, and cancer1. Recently, the presence of an oxidized form of 5-mC, 5-hydroxymethylcytosine (5-hmC), was discovered in mammalian cells, in particular in embryonic stem (ES) cells and neuronal cells2-4. 5-hmC is generated by oxidation of 5-mC catalyzed by TET family iron(II)/α-ketoglutarate-dependent dioxygenases2,3. 5-hmC is proposed to be involved in the maintenance of mouse embryonic stem (mES) cells, normal hematopoiesis and malignancies, and zygote development2,5-10. To better understand the function of 5-hmC, a reliable and straightforward sequencing system is essential. Traditional bisulfite sequencing cannot distinguish 5-hmC from 5-mC11. To unravel the biology of 5-hmC, we have developed a highly efficient and selective chemical approach to label and capture 5-hmC, taking advantage of a bacteriophage enzyme that adds a glucose moiety to 5-hmC specifically12.
Here we describe a straightforward two-step procedure for selective chemical labeling of 5-hmC. In the first step, labeling, 5-hmC in genomic DNA is labeled with 6-azide-glucose by β-GT, a glucosyltransferase from T4 bacteriophage, which transfers the 6-azide-glucose to 5-hmC from the modified cofactor UDP-6-N3-Glc (6-N3UDPG). In the second step, biotinylation, a disulfide biotin linker is attached to the azide group by click chemistry. Both steps are highly specific and efficient, leading to complete labeling regardless of the abundance of 5-hmC in genomic regions and giving extremely low background. Following biotinylation of 5-hmC, the 5-hmC-containing DNA fragments are selectively captured using streptavidin beads in a density-independent manner. The resulting 5-hmC-enriched DNA fragments can be used for downstream analyses, including next-generation sequencing.
Our selective labeling and capture protocol confers high sensitivity and is applicable to any source of genomic DNA with variable/diverse 5-hmC abundance. Although the main purpose of this protocol is its downstream application (i.e., next-generation sequencing to map the 5-hmC distribution in the genome), it is also compatible with single-molecule real-time (SMRT) DNA sequencing, which is capable of delivering single-base-resolution sequencing of 5-hmC.
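For the downstream sequencing application, a first-pass summary of capture specificity is the per-region enrichment of the captured library over the input library after depth normalization. A hedged sketch (region names, counts, and the pseudocount are illustrative; real analyses would use a dedicated peak-calling or enrichment tool):

```python
def rpm(counts):
    """Normalize raw read counts to reads per million."""
    total = sum(counts.values())
    return {region: 1e6 * c / total for region, c in counts.items()}

def fold_enrichment(capture_counts, input_counts, pseudo=1.0):
    """Per-region fold enrichment of the streptavidin-captured library
    over the input library, with a small pseudocount to avoid division
    by zero. All names and numbers here are made up for illustration."""
    cap, inp = rpm(capture_counts), rpm(input_counts)
    return {r: (cap.get(r, 0) + pseudo) / (inp.get(r, 0) + pseudo)
            for r in inp}

capture = {"regionA": 900, "regionB": 100}   # hypothetical 5-hmC-enriched pulldown
genomic = {"regionA": 500, "regionB": 500}   # hypothetical input
print(fold_enrichment(capture, genomic))     # regionA enriched, regionB depleted
```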
Genetics, Issue 68, Chemistry, Biophysics, 5-Hydroxymethylcytosine, chemical labeling, genomic DNA, high-throughput sequencing
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve our understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that remain experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All the predictions are tagged with a confidence score which estimates how accurate the predictions are in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. This structural information can be supplied by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
High-throughput Screening for Small-molecule Modulators of Inward Rectifier Potassium Channels
Institutions: Vanderbilt University School of Medicine, Vanderbilt University School of Medicine, Vanderbilt University School of Medicine.
Specific members of the inward rectifier potassium (Kir) channel family are postulated drug targets for a variety of disorders, including hypertension, atrial fibrillation, and pain1,2. For the most part, however, progress toward understanding their therapeutic potential or even basic physiological functions has been slowed by the lack of good pharmacological tools. Indeed, the molecular pharmacology of the inward rectifier family has lagged far behind that of the S4 superfamily of voltage-gated potassium (Kv) channels, for which a number of nanomolar-affinity and highly selective peptide toxin modulators have been discovered3. The bee venom toxin tertiapin and its derivatives are potent inhibitors of Kir1.1 and Kir3 channels4,5, but peptides are of limited use therapeutically as well as experimentally due to their antigenic properties and poor bioavailability, metabolic stability, and tissue penetrance. The development of potent and selective small-molecule probes with improved pharmacological properties will be key to fully understanding the physiology and therapeutic potential of Kir channels.
The Molecular Libraries Probe Production Centers Network (MLPCN), supported by the National Institutes of Health (NIH) Common Fund, has created opportunities for academic scientists to initiate probe discovery campaigns for molecular targets and signaling pathways in need of better pharmacology6. The MLPCN provides researchers access to industry-scale screening centers as well as medicinal chemistry and informatics support to develop small-molecule probes that elucidate the function of genes and gene networks. The critical step in gaining entry to the MLPCN is the development of a robust target- or pathway-specific assay that is amenable to high-throughput screening (HTS).
Here, we describe how to develop a fluorescence-based thallium (Tl+) flux assay of Kir channel function for high-throughput compound screening7-10. The assay is based on the permeability of the K+ channel pore to the K+ congener Tl+. A commercially available fluorescent Tl+ reporter dye is used to detect transmembrane flux of Tl+ through the pore. There are at least three commercially available dyes that are suitable for Tl+ flux assays: BTC, FluoZin-2, and FluxOR7,8. This protocol describes assay development using FluoZin-2. Although originally developed and marketed as a zinc indicator, FluoZin-2 exhibits a robust and dose-dependent increase in fluorescence emission upon Tl+ binding. We began working with FluoZin-2 before FluxOR was available7,8 and have continued to do so9,10. However, the steps in assay development are essentially identical for all three dyes, and users should determine which dye is most appropriate for their specific needs. We also discuss the assay's performance benchmarks that must be reached to be considered for entry to the MLPCN. Since Tl+ readily permeates most K+ channels, the assay should be adaptable to most K+ channels.
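One widely used performance benchmark for HTS assays of this kind is the Z'-factor of Zhang et al., computed from replicate positive- and negative-control wells; note that the specific MLPCN entry criteria are not restated here, and the numbers below are purely illustrative:

```python
from statistics import mean, stdev

def z_prime(positive, negative):
    """Z'-factor from positive- and negative-control well readings:
    1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|. Values above ~0.5
    are generally considered indicative of an HTS-suitable assay."""
    return 1 - 3 * (stdev(positive) + stdev(negative)) / abs(mean(positive) - mean(negative))

# Hypothetical Tl+ flux slopes: channel fully open vs. fully blocked.
pos = [10.1, 9.8, 10.3, 10.0]
neg = [2.0, 2.2, 1.9, 2.1]
print(round(z_prime(pos, neg), 2))   # 0.87
```

A high Z'-factor reflects both a large dynamic window between the controls and tight well-to-well variability, which is why it is a common gate before committing a plate-based assay to a full compound library.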
Biochemistry, Issue 71, Molecular Biology, Chemistry, Cellular Biology, Chemical Biology, Pharmacology, Molecular Pharmacology, Potassium channels, drug discovery, drug screening, high throughput, small molecules, fluorescence, thallium flux, checkerboard analysis, DMSO, cell lines, screen, assay, assay development
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets, it is therefore recommended to use high-throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment.
Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of the disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein, it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Mapping Bacterial Functional Networks and Pathways in Escherichia Coli using Synthetic Genetic Arrays
Institutions: University of Toronto, University of Toronto, University of Regina.
Phenotypes are determined by a complex series of physical (e.g. protein-protein) and functional (e.g. gene-gene or genetic) interactions (GIs)1. While physical interactions can indicate which bacterial proteins are associated as complexes, they do not necessarily reveal pathway-level functional relationships1. GI screens, in which the growth of double mutants bearing two deleted or inactivated genes is measured and compared to the corresponding single mutants, can illuminate epistatic dependencies between loci and hence provide a means to query and discover novel functional relationships2. Large-scale GI maps have been reported for eukaryotic organisms like yeast3-7, but GI information remains sparse for prokaryotes8, which hinders the functional annotation of bacterial genomes. To this end, we and others have developed high-throughput quantitative bacterial GI screening methods9,10.
Here, we present the key steps required to perform a quantitative E. coli Synthetic Genetic Array (eSGA) screening procedure on a genome scale9, using natural bacterial conjugation and homologous recombination to systematically generate and measure the fitness of large numbers of double mutants in a colony array format.
Briefly, a robot is used to transfer, through conjugation, chloramphenicol (Cm)-marked mutant alleles from engineered Hfr (high frequency of recombination) 'donor strains' into an ordered array of kanamycin (Kan)-marked F- recipient strains. Typically, we use loss-of-function single mutants bearing non-essential gene deletions (e.g. the 'Keio' collection11) and essential gene hypomorphic mutations (i.e. alleles conferring reduced protein expression, stability, or activity9,12,13) to query the functional associations of non-essential and essential genes, respectively. After conjugation and ensuing genetic exchange mediated by homologous recombination, the resulting double mutants are selected on solid medium containing both antibiotics. After outgrowth, the plates are digitally imaged and colony sizes are quantitatively scored using an in-house automated image processing system14. GIs are revealed when the growth rate of a double mutant is either significantly better or worse than expected9. Aggravating (or negative) GIs often result between loss-of-function mutations in pairs of genes from compensatory pathways that impinge on the same essential process2. Here, the loss of a single gene is buffered, such that either single mutant is viable. However, the loss of both pathways is deleterious and results in synthetic lethality or sickness (i.e. slow growth). Conversely, alleviating (or positive) interactions can occur between genes in the same pathway or protein complex2, as the deletion of either gene alone is often sufficient to perturb the normal function of the pathway or complex, such that additional perturbations do not reduce activity, and hence growth, further. Overall, systematically identifying and analyzing GI networks can provide unbiased, global maps of the functional relationships between large numbers of genes, from which pathway-level information missed by other approaches can be inferred9.
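The scoring logic can be illustrated with the common multiplicative fitness model: the expected double-mutant fitness is the product of the single-mutant fitnesses, and the deviation from that expectation classifies the interaction. A simplified sketch (the 0.05 cutoff and the fitness values are arbitrary; the cited eSGA papers define the exact scoring):

```python
def gi_score(w_a, w_b, w_ab):
    """Epistasis score under the multiplicative model: the expected
    double-mutant fitness is w_a * w_b, and the deviation eps
    classifies the genetic interaction. The +/-0.05 cutoff is an
    arbitrary illustration, not the published threshold."""
    eps = w_ab - w_a * w_b
    if eps < -0.05:
        label = "aggravating (negative)"
    elif eps > 0.05:
        label = "alleviating (positive)"
    else:
        label = "no interaction"
    return eps, label

# Two genes in compensatory pathways: each deletion is tolerated alone,
# but the double mutant is nearly dead (synthetic sickness).
print(gi_score(0.9, 0.9, 0.1))
# Two genes in the same pathway: the double mutant is no worse than
# either single mutant (alleviating).
print(gi_score(0.5, 0.5, 0.5))
```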
Genetics, Issue 69, Molecular Biology, Medicine, Biochemistry, Microbiology, Aggravating, alleviating, conjugation, double mutant, Escherichia coli, genetic interaction, Gram-negative bacteria, homologous recombination, network, synthetic lethality or sickness, suppression
Competitive Genomic Screens of Barcoded Yeast Libraries
Institutions: University of Toronto, University of Toronto, University of Toronto, National Human Genome Research Institute, NIH, Stanford University , University of Toronto.
By virtue of advances in next-generation sequencing technologies, we have access to new genome sequences almost daily. The tempo of these advances is accelerating, promising greater depth and breadth. In light of these extraordinary advances, the need for fast, parallel methods to define gene function becomes ever more important. Collections of genome-wide deletion mutants in yeasts and E. coli have served as workhorses for the functional characterization of genes, but this approach is not scalable: current gene-deletion approaches require each of the thousands of genes that comprise a genome to be deleted and verified. Only after this work is complete can we pursue high-throughput phenotyping. Over the past decade, our laboratory has refined a portfolio of competitive, miniaturized, high-throughput genome-wide assays that can be performed in parallel. This parallelization is possible because of the inclusion of DNA 'tags', or 'barcodes', in each mutant, with the barcode serving as a proxy for the mutation; barcode abundance can then be measured to assess mutant fitness. In this study, we seek to fill the gap between DNA sequence and barcoded mutant collections. To accomplish this, we introduce a combined transposon disruption-barcoding approach that opens up parallel barcode assays to newly sequenced, but poorly characterized, microbes. To illustrate this approach, we present a new Candida albicans barcoded disruption collection and describe how both microarray-based and next-generation sequencing-based platforms can be used to collect 10,000 - 1,000,000 gene-gene and drug-gene interactions in a single experiment.
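The barcode-as-proxy idea reduces to a simple computation: a mutant's relative fitness under a condition is estimated from the change in its barcode's normalized abundance between the control and treated pools. A minimal sketch (mutant names, counts, and the pseudocount are made up; real pipelines add replicate handling and noise models):

```python
import math

def barcode_fitness(control_counts, treatment_counts, pseudo=1):
    """Relative fitness per mutant as the log2 ratio of its barcode's
    normalized abundance in the treated pool versus the control pool.
    Counts could come from microarray intensities or sequencing reads;
    a pseudocount guards against zeros."""
    c_total = sum(control_counts.values())
    t_total = sum(treatment_counts.values())
    scores = {}
    for mutant, c in control_counts.items():
        t = treatment_counts.get(mutant, 0)
        scores[mutant] = math.log2(((t + pseudo) / t_total) /
                                   ((c + pseudo) / c_total))
    return scores

# A mutant hypersensitive to the drug is depleted from the treated pool
# and receives a strongly negative score.
print(barcode_fitness({"yfg1": 1000, "yfg2": 1000},
                      {"yfg1": 50,   "yfg2": 1950}))
```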
Biochemistry, Issue 54, chemical biology, chemogenomics, chemical probes, barcode microarray, next generation sequencing
Rapid Analysis and Exploration of Fluorescence Microscopy Images
Institutions: UT Southwestern Medical Center, UT Southwestern Medical Center, Princeton University.
Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard.
Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, confirm responses to perturbations, and check the reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just for first-pass quality-control analysis but also as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens.
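Grouping experimental conditions by phenotype, as described above, amounts to comparing per-well image-profile vectors. The toy sketch below ranks condition pairs by Pearson correlation of hypothetical profile vectors; it is a stand-in for, not a reimplementation of, PhenoRipper's actual profiling:

```python
import math

def correlation(u, v):
    """Pearson correlation between two image-profile vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def most_similar_pair(profiles):
    """Given {condition: profile vector}, return the pair of conditions
    whose profiles correlate best - a toy version of grouping
    conditions by phenotype."""
    names = list(profiles)
    pairs = [(correlation(profiles[a], profiles[b]), a, b)
             for i, a in enumerate(names) for b in names[i + 1:]]
    r, a, b = max(pairs)
    return a, b, round(r, 2)

profiles = {                     # hypothetical per-condition profiles
    "DMSO":   [0.7, 0.2, 0.1],
    "drug_A": [0.1, 0.3, 0.6],
    "drug_B": [0.2, 0.3, 0.5],   # phenotype resembles drug_A
}
print(most_similar_pair(profiles))
```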
Basic Protocol, Issue 85, PhenoRipper, fluorescence microscopy, image analysis, High-content analysis, high-throughput screening, Open-source, Phenotype
Quantitative Analysis of Chromatin Proteomes in Disease
Institutions: David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, David Geffen School of Medicine at UCLA, Nora Eccles Harrison Cardiovascular Research and Training Institute, University of Utah.
In the nucleus reside the proteomes whose functions are most intimately linked with gene regulation. Adult mammalian cardiomyocyte nuclei are unique due to the high percentage of binucleated cells,1 the predominantly heterochromatic state of the DNA, and the non-dividing nature of the cardiomyocyte, which renders adult nuclei in a permanent state of interphase.2 Transcriptional regulation during development and disease has been well studied in this organ,3-5 but what remains relatively unexplored is the role played by the nuclear proteins responsible for DNA packaging and expression, and how these proteins control changes in transcriptional programs that occur during disease.6 In the developed world, heart disease is the number one cause of mortality for both men and women.7 Insight into how nuclear proteins cooperate to regulate the progression of this disease is critical for advancing current treatment options.
Mass spectrometry is the ideal tool for addressing these questions, as it allows for an unbiased annotation of the nuclear proteome and relative quantification of how the abundance of these proteins changes with disease. While there have been several proteomic studies of mammalian nuclear protein complexes,8-13 there has been only one study examining the cardiac nuclear proteome, and it considered the entire nucleus rather than exploring the proteome at the level of nuclear subcompartments.15 In large part, this shortage of work is due to the difficulty of isolating cardiac nuclei. Cardiac nuclei occur within a rigid and dense actin-myosin apparatus to which they are connected via multiple extensions from the endoplasmic reticulum, to the extent that myocyte contraction alters their overall shape.16 Additionally, cardiomyocytes are 40% mitochondria by volume,17 which necessitates enrichment of the nucleus apart from the other organelles. Here we describe a protocol for cardiac nuclear enrichment and further fractionation into biologically relevant compartments. Furthermore, we detail methods for label-free quantitative mass spectrometric dissection of these fractions, techniques amenable to in vivo experimentation in various animal models and organ systems where metabolic labeling is not feasible.
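One commonly used label-free summary computed from spectral counts is the normalized spectral abundance factor (NSAF), which corrects each protein's count for its length before normalizing within the sample. A sketch with invented proteins, counts, and lengths (the protocol above does not prescribe NSAF specifically):

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: each protein's spectral
    count is divided by its length (spectral abundance factor), then
    normalized across the sample so the values sum to 1. Protein names,
    counts, and lengths here are made up for illustration."""
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

counts  = {"histone_H3": 40, "laminA": 60, "RNA_pol_II": 30}
lengths = {"histone_H3": 136, "laminA": 664, "RNA_pol_II": 1970}  # residues
abundances = nsaf(counts, lengths)
print(max(abundances, key=abundances.get))   # histone_H3
```

The length correction matters because large proteins yield more peptides, and hence more spectra, per molecule; without it, raw counts systematically overstate the abundance of long proteins.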
Medicine, Issue 70, Molecular Biology, Immunology, Genetics, Genomics, Physiology, Protein, DNA, Chromatin, cardiovascular disease, proteomics, mass spectrometry
Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity
Institutions: Case Western Reserve University.
Coexistence theory has often treated environmental heterogeneity as independent of community composition; however, biotic feedbacks such as plant-soil feedbacks (PSFs) have large effects on plant performance and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSFs for plant community assembly necessitates understanding the role of heterogeneity in PSFs, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of conspecific and heterospecific plant species in the field. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community, and additional empirical work, is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition.
Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch
A Comparative Approach to Characterize the Landscape of Host-Pathogen Protein-Protein Interactions
Institutions: Institut Pasteur , Université Sorbonne Paris Cité, Dana Farber Cancer Institute.
Significant efforts have been made to generate large-scale, comprehensive protein-protein interaction network maps. Such maps are instrumental for understanding pathogen-host relationships and have been generated primarily by genetic screening in yeast two-hybrid systems. The recent improvement of protein-protein interaction detection by a Gaussia luciferase-based fragment complementation assay now offers the opportunity to develop integrative comparative interactomic approaches, which are necessary to rigorously compare the interaction profiles of proteins from different pathogen strain variants against a common set of cellular factors.
This paper specifically focuses on the utility of combining two orthogonal methods to generate protein-protein interaction datasets: yeast two-hybrid (Y2H) and a new assay, high-throughput Gaussia princeps
protein complementation assay (HT-GPCA) performed in mammalian cells.
A large-scale identification of cellular partners of a pathogen protein is performed by mating-based yeast two-hybrid screening of cDNA libraries using multiple pathogen strain variants. A subset of interacting partners, selected on a high-confidence statistical score, is further validated in mammalian cells for pair-wise interactions with the whole set of pathogen variant proteins using HT-GPCA. This combination of two complementary methods improves the robustness of the interaction dataset and allows a stringent comparative interaction analysis. Such comparative interactomics constitutes a reliable and powerful strategy to decipher any pathogen-host interplay.
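Though not part of the article's protocol, the comparative step can be illustrated with a small sketch: given high-confidence host-partner sets for several strain variants (all names and partners below are invented for illustration), a simple Jaccard index quantifies how much their interaction profiles overlap.

```python
# Toy sketch (not from the article): comparing host-interaction profiles of
# pathogen strain variants against a common set of cellular factors.
# Variant names and partner sets below are hypothetical.

def jaccard(a, b):
    """Jaccard similarity between two sets of interaction partners."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical high-confidence partner sets from Y2H screens, as would be
# retested pair-wise by an orthogonal assay such as HT-GPCA.
profiles = {
    "variant_A": {"CCT2", "EIF4G1", "TP53", "SMAD3"},
    "variant_B": {"CCT2", "EIF4G1", "IRF3"},
    "variant_C": {"TP53", "IRF3", "SMAD3"},
}

pairs = [("variant_A", "variant_B"), ("variant_A", "variant_C"),
         ("variant_B", "variant_C")]
for v1, v2 in pairs:
    print(v1, v2, round(jaccard(profiles[v1], profiles[v2]), 2))
```

A low Jaccard score between two variants would flag host factors worth pair-wise revalidation, since profile divergence may reflect either biology or assay noise.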
Immunology, Issue 77, Genetics, Microbiology, Biochemistry, Molecular Biology, Cellular Biology, Biomedical Engineering, Infection, Cancer Biology, Virology, Medicine, Host-Pathogen Interactions, Host-Pathogen Interactions, Protein-protein interaction, High-throughput screening, Luminescence, Yeast two-hybrid, HT-GPCA, Network, protein, yeast, cell, culture
Genetic Manipulation in Δku80 Strains for Functional Genomic Analysis of Toxoplasma gondii
Institutions: The Geisel School of Medicine at Dartmouth.
Targeted genetic manipulation using homologous recombination is the method of choice for functional genomic analysis to obtain a detailed view of gene function and phenotype(s). The development of mutant strains with targeted gene deletions, targeted mutations, complemented gene function, and/or tagged genes provides powerful strategies to address gene function, particularly if these genetic manipulations can be efficiently targeted to the gene locus of interest using integration mediated by double cross over homologous recombination.
Due to very high rates of nonhomologous recombination, functional genomic analysis of Toxoplasma gondii
has been previously limited by the absence of efficient methods for targeting gene deletions and gene replacements to specific genetic loci. Recently, we abolished the major pathway of nonhomologous recombination in type I and type II strains of T. gondii
by deleting the gene encoding the KU80 protein1,2
. The Δku80
strains behave normally during tachyzoite (acute) and bradyzoite (chronic) stages in vitro
and in vivo
and exhibit essentially a 100% frequency of homologous recombination. The Δku80
strains make functional genomic studies feasible on the single gene as well as on the genome scale1-4.
Here, we report methods for using type I and type II Δku80Δhxgprt
strains to advance gene targeting approaches in T. gondii
. We outline efficient methods for generating gene deletions, gene replacements, and tagged genes by targeted insertion or deletion of the hypoxanthine-xanthine-guanine phosphoribosyltransferase (HXGPRT
) selectable marker. The described gene targeting protocol can be used in a variety of ways in Δku80
strains to advance functional analysis of the parasite genome and to develop single strains that carry multiple targeted genetic manipulations. The application of this genetic method and subsequent phenotypic assays will reveal fundamental and unique aspects of the biology of T. gondii
and related significant human pathogens that cause malaria (Plasmodium
sp.) and cryptosporidiosis (Cryptosporidium sp.).
Infectious Diseases, Issue 77, Genetics, Microbiology, Infection, Medicine, Immunology, Molecular Biology, Cellular Biology, Biomedical Engineering, Bioengineering, Genomics, Parasitology, Pathology, Apicomplexa, Coccidia, Toxoplasma, Genetic Techniques, Gene Targeting, Eukaryota, Toxoplasma gondii, genetic manipulation, gene targeting, gene deletion, gene replacement, gene tagging, homologous recombination, DNA, sequencing
Profiling of Estrogen-regulated MicroRNAs in Breast Cancer Cells
Institutions: University of Houston.
Estrogen plays vital roles in mammary gland development and breast cancer progression. It mediates its function by binding to and activating the estrogen receptors (ERs), ERα and ERβ. ERα is frequently upregulated in breast cancer and drives the proliferation of breast cancer cells. The ERs function as transcription factors and regulate gene expression. Whereas ERα's regulation of protein-coding genes is well established, its regulation of noncoding microRNAs (miRNAs) is less explored. miRNAs play a major role in the post-transcriptional regulation of genes, inhibiting their translation or degrading their mRNA. miRNAs can function as oncogenes or tumor suppressors and are also promising biomarkers. Among the miRNA assays available, microarray and quantitative real-time polymerase chain reaction (qPCR) have been extensively used to detect and quantify miRNA levels. To identify miRNAs regulated by estrogen signaling in breast cancer, their expression in ERα-positive breast cancer cell lines was compared before and after estrogen activation using both µParaflo-microfluidic microarrays and Dual Labeled Probes low-density arrays. Results were validated using specific qPCR assays, applying both Cyanine dye-based and Dual Labeled Probes-based chemistry. Furthermore, a time-point assay was used to identify regulation over time. An advantage of the miRNA assay approach used in this study is that it enables fast screening of mature miRNA regulation in numerous samples, even with limited sample amounts. The layout, including the specific conditions for cell culture and estrogen treatment, biological and technical replicates, and large-scale screening followed by in-depth confirmation using separate techniques, ensures robust detection of miRNA regulation and eliminates false positives and other artifacts. However, mutated or unknown miRNAs, or regulation at the primary and precursor transcript level, will not be detected.
The method presented here represents a thorough investigation of estrogen-mediated miRNA regulation.
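As a hedged illustration of the qPCR validation arithmetic (not code from the study), the widely used 2^-ΔΔCt method converts raw Ct values into a fold change for a target miRNA, normalized to a reference gene; the Ct values below are invented.

```python
# Standard Livak 2^-ddCt relative-quantification calculation, as commonly
# applied to qPCR validation data. All Ct values here are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of the target in treated vs. control samples,
    normalized to a reference gene (e.g. a stably expressed small RNA)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: a miRNA whose normalized Ct drops by one cycle
# after estrogen treatment is ~2-fold upregulated.
print(fold_change(24.0, 18.0, 25.0, 18.0))  # 2.0
```

The calculation assumes near-100% amplification efficiency for both target and reference assays; efficiency-corrected variants exist when that assumption fails.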
Medicine, Issue 84, breast cancer, microRNA, estrogen, estrogen receptor, microarray, qPCR
Inhibitory Synapse Formation in a Co-culture Model Incorporating GABAergic Medium Spiny Neurons and HEK293 Cells Stably Expressing GABAA Receptors
Institutions: University College London.
Inhibitory neurons act in the central nervous system to regulate the dynamics and spatio-temporal co-ordination of neuronal networks. GABA (γ-aminobutyric acid) is the predominant inhibitory neurotransmitter in the brain. It is released from the presynaptic terminals of inhibitory neurons within highly specialized intercellular junctions known as synapses, where it binds to GABAA receptors (GABAARs) present at the plasma membrane of the synapse-receiving, postsynaptic neurons. Activation of these GABA-gated ion channels leads to an influx of chloride, resulting in postsynaptic potential changes that decrease the probability that these neurons will generate action potentials.
During development, diverse types of inhibitory neurons with distinct morphological, electrophysiological and neurochemical characteristics have the ability to recognize their target neurons and form synapses which incorporate specific GABAA
Rs subtypes. This principle of selective innervation of neuronal targets raises the question as to how the appropriate synaptic partners identify each other.
To elucidate the underlying molecular mechanisms, a novel in vitro
co-culture model system was established, in which medium spiny GABAergic neurons, a highly homogenous population of neurons isolated from the embryonic striatum, were cultured with stably transfected HEK293 cell lines that express different GABAA
R subtypes. Synapses form rapidly, efficiently and selectively in this system, and are easily accessible for quantification. Our results indicate that various GABAA
R subtypes differ in their ability to promote synapse formation, suggesting that this reduced in vitro
model system can be used to reproduce, at least in part, the in vivo
conditions required for the recognition of the appropriate synaptic partners and formation of specific synapses. Here, the protocols for culturing the medium spiny neurons and generating HEK293 cell lines expressing GABAARs are first described, followed by detailed instructions on how to combine these two cell types in co-culture and analyze the formation of synaptic contacts.
Neuroscience, Issue 93, Developmental neuroscience, synaptogenesis, synaptic inhibition, co-culture, stable cell lines, GABAergic, medium spiny neurons, HEK 293 cell line
Comprehensive Analysis of Transcription Dynamics from Brain Samples Following Behavioral Experience
Institutions: The Hebrew University of Jerusalem.
The encoding of experiences in the brain and the consolidation of long-term memories depend on gene transcription. Identifying the function of specific genes in encoding experience is one of the main objectives of molecular neuroscience. Furthermore, the functional association of defined genes with specific behaviors has implications for understanding the basis of neuropsychiatric disorders. Induction of robust transcription programs has been observed in the brains of mice following various behavioral manipulations. While some genetic elements are utilized recurrently following different behavioral manipulations and in different brain nuclei, transcriptional programs are overall unique to the inducing stimuli and the structure in which they are studied1,2.
In this publication, a protocol is described for robust and comprehensive transcriptional profiling from brain nuclei of mice in response to behavioral manipulation. The protocol is demonstrated in the context of analysis of gene expression dynamics in the nucleus accumbens following acute cocaine experience. Subsequent to a defined in vivo
experience, the target neural tissue is dissected; followed by RNA purification, reverse transcription and utilization of microfluidic arrays for comprehensive qPCR analysis of multiple target genes. This protocol is geared towards comprehensive analysis (addressing 50-500 genes) of limiting quantities of starting material, such as small brain samples or even single cells.
The protocol is most advantageous for parallel analysis of multiple samples (e.g.
single cells, dynamic analysis following pharmaceutical, viral or behavioral perturbations). However, the protocol could also serve for the characterization and quality assurance of samples prior to whole-genome studies by microarrays or RNAseq, as well as validation of data obtained from whole-genome studies.
Behavior, Issue 90,
Brain, behavior, RNA, transcription, nucleus accumbens, cocaine, high-throughput qPCR, experience-dependent plasticity, gene regulatory networks, microdissection
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus
, hence the name Taq polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that complicate the reaction, producing spurious results. When PCR fails, it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
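Primer design and melting temperature (Tm) are central to the optimization steps above. As a hedged sketch (not from this protocol), two common rule-of-thumb Tm formulas can be computed directly; the primer sequences are hypothetical, and serious designs should rely on nearest-neighbor thermodynamic models.

```python
# Rule-of-thumb primer melting-temperature estimates. Primer sequences are
# invented; nearest-neighbor thermodynamics give more accurate values.

def tm_wallace(primer):
    """Wallace rule, for short primers (< ~14 nt): 2(A+T) + 4(G+C)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def tm_long(primer):
    """Common approximation for longer primers: 64.9 + 41*(G+C - 16.4)/N."""
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41 * (gc - 16.4) / len(p)

print(tm_wallace("ACGTACGTACGT"))            # 36 (6 A/T, 6 G/C)
print(round(tm_long("ATGGCTAGCTAGGCTAGCTA"), 1))
```

A quick Tm estimate helps pick an annealing temperature (typically a few degrees below the lower primer Tm) and keeps a primer pair within a few degrees of each other.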
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo
protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo
protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
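To make the idea of sequence selection by energy minimization concrete, here is a deliberately toy sketch of a greedy search over sequence space. The energy model (burying hydrophobic residues at flagged core positions) and all sequences are invented for illustration and bear no relation to the actual potentials used by Protein WISDOM.

```python
# Toy greedy "sequence selection": lower a made-up energy by substituting
# residues position by position. Purely illustrative; real design uses
# detailed pairwise potentials and global optimization.

AMINO = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AVILMFWC")

def energy(seq, core_positions):
    """Toy score, lower is better: reward hydrophobic residues at core
    positions (-1 each) and penalize them elsewhere (+1 each)."""
    e = 0
    for i, aa in enumerate(seq):
        if aa in HYDROPHOBIC:
            e += -1 if i in core_positions else 1
    return e

def greedy_redesign(seq, core_positions):
    """Single sweep over positions, keeping the best substitution at each."""
    seq = list(seq)
    for i in range(len(seq)):
        seq[i] = min(AMINO,
                     key=lambda aa: energy(seq[:i] + [aa] + seq[i + 1:],
                                           core_positions))
    return "".join(seq)

native = "KDLSAGKE"          # hypothetical starting sequence
designed = greedy_redesign(native, core_positions={2, 5})
print(native, energy(native, {2, 5}))
print(designed, energy(designed, {2, 5}))  # lower (better) score
```

Because this toy energy is additive per position, a single greedy sweep reaches the optimum; with pairwise interaction terms, as in real design potentials, combinatorial optimization is required instead.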
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Gibberella zeae Ascospore Production and Collection for Microarray Experiments.
Institutions: USDA, University of Minnesota/ Agroinnova, University of Torino, University of Minnesota.
Fusarium graminearum Schwabe (teleomorph Gibberella zeae) is a plant pathogen causing scab disease on wheat and barley that reduces crop yield and grain quality. F. graminearum also causes stalk and ear rots of maize and is a producer of mycotoxins such as the trichothecenes that contaminate grain and are harmful to humans and livestock (Goswami and Kistler, 2004).
The fungus produces two types of spores. Ascospores, the propagules resulting from sexual reproduction, are the main source of primary infection. These spores are forcibly discharged from mature perithecia and dispersed by wind (Francl et al. 1999). Secondary infections are mainly caused by macroconidia, which are produced by asexual means on the plant surface. To study the developmental processes of ascospores in this fungus, a procedure for their collection in large quantities under sterile conditions was required. Our protocol was filmed in order to generate the highest level of information for understanding and reproducibility; these are crucial aspects when full-genome gene expression profiles are generated and interpreted. In particular, the variability of ascospore germination and biological activity depends on the prior manipulation of the material. The use of video for documenting every step in ascospore production is proposed in order to increase standardization, complying with the increasingly stringent requirements for microarray analysis. The procedure requires only standard laboratory equipment. Steps are shown to prevent contamination and favor time synchronization of ascospores.
Plant Biology, Issue 1, sexual cross, spore separation, MIAME standards
Modeling Astrocytoma Pathogenesis In Vitro and In Vivo Using Cortical Astrocytes or Neural Stem Cells from Conditional, Genetically Engineered Mice
Institutions: University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, University of North Carolina School of Medicine, Emory University School of Medicine, University of North Carolina School of Medicine.
Current astrocytoma models are limited in their ability to define the roles of oncogenic mutations in specific brain cell types during disease pathogenesis and their utility for preclinical drug development. In order to design a better model system for these applications, phenotypically wild-type cortical astrocytes and neural stem cells (NSC) from conditional, genetically engineered mice (GEM) that harbor various combinations of floxed oncogenic alleles were harvested and grown in culture. Genetic recombination was induced in vitro
using adenoviral Cre-mediated recombination, resulting in expression of mutated oncogenes and deletion of tumor suppressor genes. The phenotypic consequences of these mutations were defined by measuring proliferation, transformation, and drug response in vitro
. Orthotopic allograft models, whereby transformed cells are stereotactically injected into the brains of immune-competent, syngeneic littermates, were developed to define the role of oncogenic mutations and cell type on tumorigenesis in vivo
. Unlike most established human glioblastoma cell line xenografts, injection of transformed GEM-derived cortical astrocytes into the brains of immune-competent littermates produced astrocytomas, including the most aggressive subtype, glioblastoma, that recapitulated the histopathological hallmarks of human astrocytomas, including diffuse invasion of normal brain parenchyma. Bioluminescence imaging of orthotopic allografts from transformed astrocytes engineered to express luciferase was utilized to monitor in vivo
tumor growth over time. Thus, astrocytoma models using astrocytes and NSC harvested from GEM with conditional oncogenic alleles provide an integrated system to study the genetics and cell biology of astrocytoma pathogenesis in vitro
and in vivo
and may be useful in preclinical drug development for these devastating diseases.
Neuroscience, Issue 90, astrocytoma, cortical astrocytes, genetically engineered mice, glioblastoma, neural stem cells, orthotopic allograft
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g.
, signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
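As a minimal, invented illustration of the semi-automated category, global thresholding followed by connected-component labeling, the sketch below segments a tiny synthetic 2D array; real EM data sets are 3D, noisy, and vastly larger, but the core logic is the same.

```python
# Toy segmentation sketch: global thresholding plus connected-component
# labeling (4-connectivity flood fill) on an invented 2D "image".
from collections import deque

image = [
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 0, 7],
    [0, 0, 0, 0, 0, 7],
    [8, 8, 0, 0, 0, 0],
]

def segment(img, threshold):
    """Return (label image, feature count): 0 = background, 1..n = features."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= threshold and labels[y][x] == 0:
                n += 1                      # start a new feature
                queue = deque([(y, x)])
                labels[y][x] = n
                while queue:                # flood-fill its 4-neighbors
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] >= threshold
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = n
                            queue.append((ny, nx))
    return labels, n

labels, n_features = segment(image, threshold=5)
print(n_features)  # 3 separate features above the threshold
```

Choosing the threshold is exactly where the data set characteristics above (signal-to-noise ratio, crispness, crowdedness of features) decide whether such a simple rule is usable or a custom algorithm is needed.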
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
In Situ Neutron Powder Diffraction Using Custom-made Lithium-ion Batteries
Institutions: University of Sydney, University of Wollongong, Australian Synchrotron, Australian Nuclear Science and Technology Organisation, University of Wollongong, University of New South Wales.
Li-ion batteries are widely used in portable electronic devices and are considered as promising candidates for higher-energy applications such as electric vehicles.1,2
However, many challenges, such as energy density and battery lifetimes, need to be overcome before this particular battery technology can be widely implemented in such applications.3
This research is challenging, and we outline a method to address these challenges using in situ
NPD to probe the crystal structure of electrodes undergoing electrochemical cycling (charge/discharge) in a battery. NPD data help determine the underlying structural mechanism responsible for a range of electrode properties, and this information can direct the development of better electrodes and batteries.
We briefly review six types of battery designs custom-made for NPD experiments and detail the method to construct the ‘roll-over’ cell that we have successfully used on the high-intensity NPD instrument, WOMBAT, at the Australian Nuclear Science and Technology Organisation (ANSTO). The design considerations and materials used for cell construction are discussed in conjunction with aspects of the actual in situ
NPD experiment, and initial directions are presented on how to analyze such complex in situ NPD data.
Physics, Issue 93, In operando, structure-property relationships, electrochemical cycling, electrochemical cells, crystallography, battery performance
A Strategy to Identify de Novo Mutations in Common Disorders such as Autism and Schizophrenia
Institutions: Universite de Montreal, Universite de Montreal, Universite de Montreal.
There are several lines of evidence supporting the role of de novo
mutations as a mechanism for common disorders, such as autism and schizophrenia. First, the de novo
mutation rate in humans is relatively high, so new mutations are generated at a high frequency in the population. However, de novo
mutations have not been reported in most common diseases. Mutations in genes leading to severe diseases where there is a strong negative selection against the phenotype, such as lethality in embryonic stages or reduced reproductive fitness, will not be transmitted to multiple family members, and therefore will not be detected by linkage gene mapping or association studies. The observation of very high concordance in monozygotic twins and very low concordance in dizygotic twins also strongly supports the hypothesis that a significant fraction of cases may result from new mutations. Such is the case for diseases such as autism and schizophrenia. Second, despite reduced reproductive fitness1
and extremely variable environmental factors, the incidence of some diseases is maintained worldwide at a relatively high and constant rate. This is the case for autism and schizophrenia, with an incidence of approximately 1% worldwide. Mutational load can be thought of as a balance between selection for or against a deleterious mutation and its production by de novo
mutation. Lower rates of reproduction constitute a negative selection factor that should reduce the number of mutant alleles in the population, ultimately leading to decreased disease prevalence. These selective pressures tend to be of different intensity in different environments. Nonetheless, these severe mental disorders have been maintained at a constant relatively high prevalence in the worldwide population across a wide range of cultures and countries despite a strong negative selection against them2
. This is not what one would predict in diseases with reduced reproductive fitness, unless there was a high new mutation rate. Finally, the effects of paternal age: there is a significantly increased risk of the disease with increasing paternal age, which could result from the age related increase in paternal de novo
mutations. This is the case for autism and schizophrenia3
. The male-to-female ratio of mutation rate is estimated at about 4–6:1, presumably due to a higher number of germ-cell divisions with age in males. Therefore, one would predict that de novo
mutations would more frequently come from males, particularly older males4
. A high rate of new mutations may in part explain why genetic studies have so far failed to identify many genes predisposing to complex diseases, such as autism and schizophrenia, and why diseases have been linked to a mere 3% of genes in the human genome. Identification of de novo
mutations as a cause of a disease requires a targeted molecular approach, which includes studying parents and affected subjects. The process for determining if the genetic basis of a disease may result in part from de novo
mutations and the molecular approach to establish this link will be illustrated, using autism and schizophrenia as examples.
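The mutation-selection balance argument above can be sketched numerically. Under a simplified model in which a fraction s of disease alleles is removed by selection each generation while new alleles arise de novo at rate mu (both values invented, not estimates for autism or schizophrenia), the allele frequency settles at mu/s:

```python
# Back-of-the-envelope mutation-selection balance: de novo input at rate mu
# per generation, removal of a fraction s of carriers by reduced reproductive
# fitness. Parameter values are illustrative only.

def equilibrium_frequency(mu, s, generations=10000):
    """Iterate q' = (1 - s) * q + mu; converges to the balance q* = mu / s."""
    q = 0.0
    for _ in range(generations):
        q = (1 - s) * q + mu
    return q

mu = 1e-4   # hypothetical aggregate de novo mutation rate into the disease class
s = 0.3     # hypothetical selection coefficient from reduced reproductive fitness
q = equilibrium_frequency(mu, s)
print(q, mu / s)  # the simulated frequency approaches mu/s
```

The sketch shows why a constant worldwide prevalence despite strong negative selection implies an appreciable de novo mutation rate: at equilibrium, a larger s can only be offset by a larger mu.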
Medicine, Issue 52, de novo mutation, complex diseases, schizophrenia, autism, rare variations, DNA sequencing
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and the flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism
BioMEMS and Cellular Biology: Perspectives and Applications
Institutions: University of Washington.
The ability to culture cells has revolutionized hypothesis testing in basic cell and molecular biology research. It has become a standard methodology in drug screening, toxicology, and clinical assays, and is increasingly used in regenerative medicine. However, the traditional cell culture methodology, essentially consisting of the immersion of a large population of cells in a homogeneous fluid medium and on a homogeneous flat substrate, has become increasingly limiting from both a fundamental and a practical perspective. Microfabrication technologies have enabled researchers to design, with micrometer control, the biochemical composition and topology of the substrate and the medium composition, as well as the neighboring cell types in the surrounding cellular microenvironment. Additionally, microtechnology is conceptually well-suited for the development of fast, low-cost in vitro systems that allow for high-throughput culturing and analysis of cells under large numbers of conditions. In this interview, Albert Folch explains these limitations, how they can be overcome with soft lithography and microfluidics, and describes some relevant examples of research in his lab and future directions.
Biomedical Engineering, Issue 8, BioMEMS, Soft Lithography, Microfluidics, Agrin, Axon Guidance, Olfaction, Interview
Preventing the Spread of Malaria and Dengue Fever Using Genetically Modified Mosquitoes
Institutions: University of California, Irvine (UCI).
In this candid interview, Anthony A. James explains how mosquito genetics can be exploited to control malaria and dengue transmission. Population replacement strategy, the idea that transgenic mosquitoes can be released into the wild to control disease transmission, is introduced, as well as the concept of genetic drive and the design criterion for an effective genetic drive system. The ethical considerations of releasing genetically-modified organisms into the wild are also discussed.
Cellular Biology, Issue 5, mosquito, malaria, dengue fever, genetics, infectious disease, Translational Research
Interview: Protein Folding and Studies of Neurodegenerative Diseases
Institutions: MIT - Massachusetts Institute of Technology.
In this interview, Dr. Lindquist describes relationships between protein folding, prion diseases and neurodegenerative disorders. The problem of the protein folding is at the core of the modern biology. In addition to their traditional biochemical functions, proteins can mediate transfer of biological information and therefore can be considered a genetic material. This recently discovered function of proteins has important implications for studies of human disorders. Dr. Lindquist also describes current experimental approaches to investigate the mechanism of neurodegenerative diseases based on genetic studies in model organisms.
Neuroscience, Issue 17, protein folding, brain, neuron, prion, neurodegenerative disease, yeast, screen, Translational Research
Isolation of Genomic DNA from Mouse Tails
Institutions: University of California, Irvine (UCI).
Basic Protocols, Issue 6, genomic, DNA, genotyping, mouse