Maize is a major cereal crop worldwide, but susceptibility to biotrophic pathogens is a primary constraint on increasing productivity. U. maydis is a biotrophic fungal pathogen and the causal agent of corn smut on maize, a disease responsible for significant yield losses of approximately $1.0 billion annually in the U.S.1 Several methods, including crop rotation, fungicide application, and seed treatments, are currently used to control corn smut2; however, host resistance is the only practical method for managing the disease. Identification of crop plants, including maize, wheat, and rice, that are resistant to various biotrophic pathogens has significantly decreased annual yield losses3-5. Therefore, a pathogen inoculation method that efficiently and reproducibly delivers the pathogen in between the plant leaves would facilitate the rapid identification of maize lines that are resistant to U. maydis. As a first step toward identifying such lines, a needle injection inoculation method and a resistance reaction screening method were used to inoculate maize, teosinte, and maize x teosinte introgression lines with a U. maydis strain and to select resistant plants.
Maize, teosinte, and maize x teosinte introgression lines, consisting of about 700 plants, were planted, inoculated with a strain of U. maydis, and screened for resistance. The inoculation and screening methods successfully identified three teosinte lines resistant to U. maydis. Here, a detailed needle injection inoculation and resistance reaction screening protocol for maize, teosinte, and maize x teosinte introgression lines is presented. This study demonstrates that needle injection inoculation is an invaluable agricultural tool that can efficiently deliver U. maydis in between the plant leaves, and it has provided U. maydis-resistant plant lines that can now be combined and tested in breeding programs for improved disease resistance.
Bioassays for Monitoring Insecticide Resistance
Institutions: University of Missouri, Delta Research Center, Louisiana State University Agricultural Center.
Pest resistance to pesticides is an increasing problem because pesticides are an integral part of high-yielding production agriculture. When few products are labeled for an individual pest within a particular crop system, chemical control options are limited. Therefore, the same product(s) are used repeatedly and continual selection pressure is placed on the target pest. There are both financial and environmental costs associated with the development of resistant populations. The cost of pesticide resistance has been estimated at approximately $1.5 billion annually in the United States. This paper describes protocols currently used to monitor arthropod (specifically insect) populations for the development of resistance. The adult vial test is used to measure the toxicity of contact insecticides, and a modification of this test is used for plant-systemic insecticides. In these bioassays, insects are exposed to technical-grade insecticide and responses (mortality) are recorded at a specific post-exposure interval. The mortality data are subjected to log-dose probit analysis to generate an estimate of the lethal concentration that kills 50% of the target population (LC50) and a series of confidence limits (CLs) as estimates of data variability. When these data are collected for a range of insecticide-susceptible populations, the LC50 can be used as baseline data for future monitoring purposes. After populations have been exposed to products, the results can be compared to a previously determined LC50 using the same methodology.
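As a sketch of the log-dose probit step, mortality fractions can be transformed with the inverse normal CDF and regressed against log10(dose); the LC50 is then the dose at which the probit equals zero (50% mortality). The function and dose-response data below are illustrative, not values from the article:

```python
import math
from statistics import NormalDist

def lc50_probit(doses, mortality):
    """Estimate LC50 via linear regression of probit(mortality) on log10(dose).

    doses: concentrations (same units as the desired LC50)
    mortality: observed mortality fractions, strictly between 0 and 1
    """
    nd = NormalDist()
    x = [math.log10(d) for d in doses]
    y = [nd.inv_cdf(m) for m in mortality]  # probit transform
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    # probit = 0 corresponds to 50% mortality
    return 10 ** (-intercept / slope)

# hypothetical dose-response series spanning the LC50
print(lc50_probit([0.01, 0.1, 1.0, 10.0, 100.0],
                  [0.05, 0.20, 0.50, 0.80, 0.95]))  # ≈ 1.0
```

In practice the confidence limits come from the full probit maximum-likelihood fit in dedicated software; the regression above only recovers the point estimate.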
Microbiology, Issue 46, Resistance monitoring, Insecticide Resistance, Pesticide Resistance, glass-vial bioassay
Terahertz Microfluidic Sensing Using a Parallel-plate Waveguide Sensor
Institutions: Rice University .
Refractive index (RI) sensing is a powerful noninvasive and label-free sensing technique for the identification, detection, and monitoring of microfluidic samples, with a wide range of possible sensor designs such as interferometers and resonators1,2. Most of the existing RI sensing applications focus on biological materials in aqueous solutions at visible and IR frequencies, such as DNA hybridization and genome sequencing. At terahertz frequencies, applications include quality control, monitoring of industrial processes, and sensing and detection applications involving nonpolar materials.
Several potential designs for refractive index sensors in the terahertz regime exist, including photonic crystal waveguides3, asymmetric split-ring resonators4, and photonic band gap structures integrated into parallel-plate waveguides5. Many of these designs are based on optical resonators such as rings or cavities. The resonant frequencies of these structures depend on the refractive index of the material in or around the resonator. By monitoring shifts in the resonant frequency, the refractive index of a sample can be accurately measured, and this in turn can be used to identify a material, monitor contamination or dilution, etc.
The sensor design we use here is based on a simple parallel-plate waveguide6,7. A rectangular groove machined into one face acts as a resonant cavity (Figures 1 and 2). When terahertz radiation is coupled into the waveguide and propagates in the lowest-order transverse-electric (TE1) mode, the result is a single strong resonant feature with a tunable resonant frequency that is dependent on the geometry of the groove6,8. This groove can be filled with nonpolar liquid microfluidic samples, which cause a shift in the observed resonant frequency that depends on the amount of liquid in the groove and its refractive index9. Our technique has an advantage over other terahertz techniques in its simplicity, both in fabrication and implementation, since the procedure can be accomplished with standard laboratory equipment without the need for a clean room or any special fabrication or experimental techniques. It can also be easily expanded to multichannel operation by the incorporation of multiple grooves10. In this video we describe our complete experimental procedure, from the design of the sensor to the data analysis and determination of the sample refractive index.
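To illustrate the final analysis step under a deliberately simplified model (an assumption made here, not the article's full treatment): if the groove is completely filled, the cavity resonance shifts roughly in inverse proportion to the refractive index of the liquid, so the index can be estimated from the empty- and filled-groove resonant frequencies. All numbers below are illustrative:

```python
def refractive_index_from_shift(f_empty, f_filled):
    """Simplified fully-filled-cavity model: the guided wavelength at resonance
    scales with n, so f_filled ≈ f_empty / n, giving n ≈ f_empty / f_filled."""
    return f_empty / f_filled

# illustrative resonances (GHz) for an empty vs. liquid-filled groove
n_sample = refractive_index_from_shift(285.0, 197.0)
print(round(n_sample, 3))  # ≈ 1.447, typical of a nonpolar liquid
```

As the abstract notes, the observed shift also depends on how much liquid is in the groove, so partially filled grooves require a calibration against fill volume rather than this one-line ratio.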
Physics, Issue 66, Electrical Engineering, Computer Engineering, Terahertz radiation, sensing, microfluidic, refractive index sensor, waveguide, optical sensing
Protocols for Assessing Radiofrequency Interactions with Gold Nanoparticles and Biological Systems for Non-invasive Hyperthermia Cancer Therapy
Institutions: University of Texas M.D. Anderson Cancer Center, Rice University.
Cancer therapies that are less toxic and less invasive than their existing counterparts are highly desirable. The use of RF electric fields that penetrate deep into the body while causing minimal toxicity is currently being studied as a viable means of non-invasive cancer therapy. It is envisioned that the interactions of RF energy with internalized nanoparticles (NPs) can liberate heat, which can then cause overheating (hyperthermia) of the cell, ultimately ending in cell necrosis.
In the case of non-biological systems, we present detailed protocols relating to quantifying the heat liberated by highly concentrated NP colloids. For biological systems, in the case of in vitro experiments, we describe the techniques and conditions which must be adhered to in order to effectively expose cancer cells to RF energy without bulk media heating artifacts significantly obscuring the data. Finally, we give a detailed methodology for in vivo mouse models with ectopic hepatic cancer tumors.
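The heat liberated by an NP colloid under RF exposure is commonly quantified from the initial temperature-rise slope; a minimal sketch, assuming near-adiabatic conditions over the first seconds and a water-like specific heat (both are assumptions for illustration, not values from the article):

```python
def specific_absorption_rate(c_p, dT_dt):
    """SAR (W/kg) by the initial-slope method: SAR = c_p * dT/dt,
    where c_p is the sample's specific heat (J/(kg*K)) and dT/dt is the
    initial heating rate (K/s), taken before thermal losses become significant."""
    return c_p * dT_dt

# water-like colloid: c_p ≈ 4186 J/(kg*K); illustrative initial slope 0.05 K/s
print(specific_absorption_rate(4186.0, 0.05))  # 209.3 W/kg
```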
Medicine, Issue 78, Electronics and Electrical Engineering, Life Sciences (General), Radiofrequency, Cancer, Nanoparticles, Hyperthermia, Gold
Quantification of Fungal Colonization, Sporogenesis, and Production of Mycotoxins Using Kernel Bioassays
Institutions: Texas A&M University.
The rotting of grains by seed-infecting fungi poses one of the greatest economic challenges to cereal production worldwide, not to mention serious risks to human and animal health. Among cereals, maize is arguably the most affected crop, due to pathogen-induced losses in grain integrity and mycotoxin seed contamination. The two most prevalent and problematic mycotoxins for maize growers and food and feed processors are aflatoxin and fumonisin, produced by Aspergillus flavus and Fusarium verticillioides, respectively. Recent studies in molecular plant-pathogen interactions have demonstrated promise in understanding specific mechanisms associated with plant responses to fungal infection and mycotoxin contamination1,2,3,4,5,6. Because many labs are using kernel assays to study plant-pathogen interactions, there is a need for a standardized method for quantifying different biological parameters, so results from different laboratories can be cross-interpreted. To provide a robust and reproducible means for quantitative analyses on seeds, we have developed in-lab kernel assays and subsequent methods to quantify fungal growth, biomass, and mycotoxin contamination. Four sterilized maize kernels are inoculated in glass vials with a fungal spore suspension (10⁶) and incubated for a predetermined period. Sample vials are then selected for enumeration of conidia by hemocytometer, ergosterol-based biomass analysis by high-performance liquid chromatography (HPLC), aflatoxin quantification using an AflaTest fluorometer method, and fumonisin quantification by HPLC.
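The conidia enumeration step uses the standard hemocytometer conversion, which can be scripted; this sketch assumes counts are taken over the 1 mm x 1 mm corner squares (0.1 µl of suspension each), the usual convention rather than a detail stated in the abstract:

```python
def conidia_per_ml(square_counts, dilution_factor=1.0):
    """Convert hemocytometer counts to conidia/ml.

    square_counts: conidia counted in each 1 mm x 1 mm square
                   (each square overlies 0.1 microliter of suspension)
    dilution_factor: fold-dilution applied before loading the chamber
    """
    mean_count = sum(square_counts) / len(square_counts)
    return mean_count * dilution_factor * 1e4  # 0.1 ul -> 1 ml is a 1e4 factor

# e.g. four squares counted on a 10-fold diluted suspension
print(conidia_per_ml([24, 26, 25, 25], dilution_factor=10))  # 2500000.0
```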
Immunology, Issue 62, Mycotoxins, sporogenesis, Aspergillus flavus, Fusarium verticillioides, aflatoxin, fumonisin, plant-microbe interactions, plant biology
Ice-Cap: A Method for Growing Arabidopsis and Tomato Plants in 96-well Plates for High-Throughput Genotyping
Institutions: University of Wisconsin-Madison, Oregon State University .
It is becoming common for plant scientists to develop projects that require the genotyping of large numbers of plants. The first step in any genotyping project is to collect a tissue sample from each individual plant. The traditional approach to this task is to sample plants one at a time. If one wishes to genotype hundreds or thousands of individuals, however, this strategy creates a significant bottleneck in the genotyping pipeline. The Ice-Cap method that we describe here provides a high-throughput solution to this challenge by allowing one scientist to collect tissue from several thousand seedlings in a single day1,2. This level of throughput is made possible by the fact that tissue is harvested from plants 96-at-a-time, rather than one-at-a-time.
The Ice-Cap method provides an integrated platform for performing seedling growth, tissue harvest, and DNA extraction. The basis for Ice-Cap is the growth of seedlings in a stacked pair of 96-well plates. The wells of the upper plate contain plugs of agar growth media on which individual seedlings germinate. The roots grow down through the agar media, exit the upper plate through a hole, and pass into a lower plate containing water. To harvest tissue for DNA extraction, the water in the lower plate containing root tissue is rapidly frozen while the seedlings in the upper plate remain at room temperature. The upper plate is then peeled away from the lower plate, yielding one plate with 96 root tissue samples frozen in ice and one plate with 96 viable seedlings. The technique is named "Ice-Cap" because it uses ice to capture the root tissue. The 96-well plate containing the seedlings can then be wrapped in foil and transferred to low temperature. This process suspends further growth of the seedlings, but does not affect their viability. Once genotype analysis has been completed, seedlings with the desired genotype can be transferred from the 96-well plate to soil for further propagation. We have demonstrated the utility of the Ice-Cap method using Arabidopsis thaliana, tomato, and rice seedlings. We expect that the method should also be applicable to other species of plants with seeds small enough to fit into the wells of 96-well plates.
Plant Biology, Issue 57, Plant, Arabidopsis thaliana, tomato, 96-well plate, DNA extraction, high-throughput, genotyping
Fast and Accurate Exhaled Breath Ammonia Measurement
Institutions: St. Luke's University Hospital, Johns Hopkins School of Medicine, Johns Hopkins University.
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels.
Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive.
The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
Medicine, Issue 88, Breath, ammonia, breath measurement, breath analysis, QEPAS, volatile organic compound
Annotation of Plant Gene Function via Combined Genomics, Metabolomics and Informatics
Given the ever-expanding number of model plant species for which complete genome sequences are available and the abundance of bio-resources such as knockout mutants, wild accessions, and advanced breeding populations, there is a growing need for gene functional annotation. In this protocol, annotation of plant gene function using combined co-expression gene analysis, metabolomics, and informatics is provided (Figure 1). This approach is based on using target genes of known function to identify non-annotated genes likely to be involved in a certain metabolic process, with identification of the target compounds via metabolomics. Strategies are put forward for applying this information to populations generated by both forward and reverse genetics approaches, although none of these is effortless. As a corollary, this approach can also be used to characterize unknown peaks representing new or tissue-, species-, or stress-treatment-specific secondary metabolites, which is currently an important challenge in understanding plant metabolism.
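The co-expression step can be sketched as ranking candidate genes by the correlation of their expression profiles with a "guide" gene of known function. The gene names and expression profiles below are hypothetical:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def rank_coexpressed(guide_profile, candidates):
    """Sort candidate genes by descending correlation with the guide gene."""
    return sorted(candidates.items(),
                  key=lambda kv: pearson(guide_profile, kv[1]),
                  reverse=True)

guide = [1.0, 2.0, 3.0, 4.0, 5.0]                    # known-function gene, 5 conditions
candidates = {"geneA": [2.1, 3.9, 6.2, 8.0, 10.1],   # tracks the guide gene
              "geneB": [5.0, 4.1, 3.0, 2.2, 0.9]}    # anti-correlated
print([name for name, _ in rank_coexpressed(guide, candidates)])  # ['geneA', 'geneB']
```

Real analyses work on genome-wide expression matrices and typically apply significance thresholds, but the ranking principle is the same.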
Plant Biology, Issue 64, Genetics, Bioinformatics, Metabolomics, Plant metabolism, Transcriptome analysis, Functional annotation, Computational biology, Plant biology, Theoretical biology, Spectroscopy and structural analysis
Design and Construction of an Urban Runoff Research Facility
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m² facility was constructed consisting of 24 individual 33.6 m² field plots, each equipped for measuring total runoff volumes over time and for collection of runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4, and Ca²⁺ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
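The uniformity metric used above is straightforward to compute; a minimal sketch (the replicate data are invented for illustration):

```python
import statistics

def coefficient_of_variability(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# hypothetical runoff volumes (liters) from four replicate plots
print(round(coefficient_of_variability([10.0, 12.0, 8.0, 10.0]), 1))  # 16.3
```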
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
Quantitative Detection of Trace Explosive Vapors by Programmed Temperature Desorption Gas Chromatography-Electron Capture Detector
Institutions: U.S. Naval Research Laboratory, NOVA Research, Inc.
The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples.
Chemistry, Issue 89, Gas Chromatography (GC), Electron Capture Detector, Explosives, Quantitation, Thermal Desorption, TNT, RDX
Focussed Ion Beam Milling and Scanning Electron Microscopy of Brain Tissue
Institutions: École Polytechnique Fédérale de Lausanne.
This protocol describes how biological samples, like brain tissue, can be imaged in three dimensions using the focussed ion beam/scanning electron microscope (FIB/SEM). The samples are fixed with aldehydes and heavy-metal stained using osmium tetroxide and uranyl acetate. They are then dehydrated with alcohol and infiltrated with resin, which is then hardened. Using a light microscope and an ultramicrotome with glass knives, a small block containing the region of interest close to the surface is made. The block is then placed inside the FIB/SEM, and the ion beam is used to roughly mill a vertical face along one side of the block, close to this region. Using backscattered electrons to image the underlying structures, a smaller face is then milled with a finer ion beam and the surface scrutinised more closely to determine the exact area of the face to be imaged and milled. The parameters of the microscope are then set so that the face is repeatedly milled and imaged, collecting serial images through a volume of the block. The image stack will typically contain isotropic voxels with dimensions as small as 4 nm in each direction. This image quality in any imaging plane enables the user to analyse cell ultrastructure at any viewing angle within the image stack.
Neuroscience, Issue 53, Focussed ion beam, scanning electron microscopy, FIB
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant, contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca²⁺ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Evaluation of Integrated Anaerobic Digestion and Hydrothermal Carbonization for Bioenergy Production
Institutions: Leibniz Institute for Agricultural Engineering.
Lignocellulosic biomass is one of the most abundant yet underutilized renewable energy resources. Both anaerobic digestion (AD) and hydrothermal carbonization (HTC) are promising technologies for bioenergy production from biomass, in the form of biogas and HTC biochar, respectively. In this study, the combination of AD and HTC is proposed to increase overall bioenergy production. Wheat straw was anaerobically digested in a novel upflow anaerobic solid state reactor (UASS) under both mesophilic (37 °C) and thermophilic (55 °C) conditions. Wet digestate from thermophilic AD was hydrothermally carbonized at 230 °C for 6 hr for HTC biochar production. Over 200 days of continuous operation, the UASS system yielded an average of 165 L CH₄ per kg volatile solids (VS) at thermophilic temperature and 121 L CH₄ per kg VS at mesophilic temperature. Meanwhile, 43.4 g of HTC biochar with a calorific value of 29.6 MJ/kg (dry biochar) was obtained from HTC of 1 kg of digestate (dry basis) from mesophilic AD. In this particular set of experiments, the combination of AD and HTC yielded 13.2 MJ of energy per kg of dry wheat straw, which is at least 20% higher than HTC alone and 60.2% higher than AD alone.
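The stated comparisons imply the standalone yields; a quick consistency check, back-calculating the baselines from the percentages given in the abstract (the baseline values themselves are inferred here, not reported):

```python
combined = 13.2  # MJ per kg dry wheat straw for combined AD + HTC (from the text)

# "60.2% higher than AD only"  ->  implied AD-only yield
ad_only = combined / 1.602
# "at least 20% higher than HTC alone"  ->  implied upper bound for HTC alone
htc_alone_max = combined / 1.20

print(round(ad_only, 2), round(htc_alone_max, 2))  # 8.24 11.0
```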
Environmental Sciences, Issue 88, Biomethane, Hydrothermal Carbonization (HTC), Calorific Value, Lignocellulosic Biomass, UASS, Anaerobic Digestion
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
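The ~10-30 nm precision quoted above follows from photon statistics: in the photon-limited case, localization precision is roughly the PSF width divided by the square root of the detected photon count (the full expression of Thompson et al. adds pixelation and background terms). A minimal sketch with illustrative numbers:

```python
import math

def localization_precision(psf_sd_nm, photons):
    """Photon-limited localization precision: sigma ≈ s / sqrt(N),
    where s is the PSF standard deviation and N the detected photon count."""
    return psf_sd_nm / math.sqrt(photons)

# ~250 nm diffraction-limited FWHM -> s ≈ 106 nm; 100 detected photons
print(round(localization_precision(106.0, 100), 1))  # 10.6 nm
```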
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Physical, Chemical and Biological Characterization of Six Biochars Produced for the Remediation of Contaminated Sites
Institutions: Royal Military College of Canada, Queen's University.
The physical and chemical properties of biochar vary based on feedstock sources and production conditions, making it possible to engineer biochars with specific functions (e.g., carbon sequestration, soil quality improvements, or contaminant sorption). In 2013, the International Biochar Initiative (IBI) made publicly available its Standardized Product Definition and Product Testing Guidelines (Version 1.1), which set standards for the physical and chemical characteristics of biochar. Six biochars made from three different feedstocks and at two temperatures were analyzed for characteristics related to their use as a soil amendment. The protocol describes analyses of the feedstocks and biochars and includes: cation exchange capacity (CEC), specific surface area (SSA), organic carbon (OC) and moisture percentage, pH, particle size distribution, and proximate and ultimate analysis. Also described in the protocol are the analyses of the feedstocks and biochars for contaminants, including polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs), metals, and mercury, as well as nutrients (phosphorus, nitrite, nitrate, and ammonium as nitrogen). The protocol also includes the biological testing procedures: earthworm avoidance and germination assays. Based on the quality assurance/quality control (QA/QC) results of blanks, duplicates, standards, and reference materials, all methods were determined adequate for use with biochar and feedstock materials. All biochars and feedstocks were well within the criteria set by the IBI, and there were few differences among biochars, except in the case of the biochar produced from construction waste materials. This biochar (referred to as Old biochar) was determined to have elevated levels of arsenic, chromium, copper, and lead, and failed the earthworm avoidance and germination assays. Based on these results, Old biochar would not be appropriate for use as a soil amendment for carbon sequestration, substrate quality improvements, or remediation.
Environmental Sciences, Issue 93, biochar, characterization, carbon sequestration, remediation, International Biochar Initiative (IBI), soil amendment
Cryo-electron Microscopy Specimen Preparation By Means Of a Focused Ion Beam
Institutions: Uppsala University, Gatan Inc., Swedish University of Agricultural Sciences, University of Oslo.
Here we present a protocol used to prepare cryo-TEM samples of Aspergillus niger spores, which can easily be adapted for any number of microorganisms or solutions. We make use of a custom-built cryo-transfer station and a modified cryo-SEM preparation chamber2. The spores are taken from a culture, plunge-frozen in a liquid nitrogen slush, and observed in the cryo-SEM to select a region of interest. A thin lamella is then extracted using the FIB, attached to a TEM grid, and subsequently thinned to electron transparency. The grid is transferred to a cryo-TEM holder and into a TEM for high-resolution studies. Thanks to the introduction of a cooled nanomanipulator tip and a cryo-transfer station, this protocol is a straightforward adaptation to cryogenic temperatures of the routinely used FIB preparation of TEM samples. As such it has the advantage of requiring only a small number of modifications to existing instruments, setups, and procedures; it is easy to implement; and it has a broad range of applications, in principle the same as for cryo-TEM sample preparation. One limitation is that it requires skillful handling of the specimens at critical steps to avoid or minimize contamination.
Bioengineering, Issue 89, Cryoelectron Microscopy, Life Sciences (General), Cryo-microscopy, Focused ion beam, Sample preparation, TEM, FIB
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production, as many technological obstacles to scale-up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, the high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) with chemicals such as acetosyringone and monosaccharides had no effect on the protein production level. Infiltrating plants at 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared to the laboratory strains LBA4404 and C58C1 and the wild-type Agrobacterium strains at6, at10, at77, and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Institutions: University of California, Los Angeles.
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase.
PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails, it can yield many nonspecific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced into the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction. By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
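The primer-design considerations above (melting temperature, GC content) can be sketched as a quick calculation. This is a generic illustration using the Wallace rule, a standard rough approximation for short primers; it is not a method prescribed by this protocol, and the primer sequence is invented for the example:

```python
def melting_temp_wallace(primer: str) -> float:
    """Estimate primer melting temperature (Tm) with the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C) degrees C.
    A rough approximation, reasonable only for short primers (~14-20 nt)."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc


def gc_content(primer: str) -> float:
    """Fraction of G/C bases; 40-60% is a common primer-design target."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)


# Hypothetical 18-nt primer, for illustration only
primer = "ATGGCTAGCTAGGTCGAC"
print(melting_temp_wallace(primer))       # 56 (8 A/T + 10 G/C)
print(round(gc_content(primer), 2))       # 0.56
```

For longer primers, nearest-neighbor thermodynamic models give more accurate Tm estimates than the Wallace rule; annealing temperatures are then typically set a few degrees below the lower of the two primer Tm values.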
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Purification of Transcripts and Metabolites from Drosophila Heads
Institutions: University of Florida.
For the last decade, we have tried to understand the molecular and cellular mechanisms of neuronal degeneration using Drosophila as a model organism. Although fruit flies provide obvious experimental advantages, research on neurodegenerative diseases has mostly relied on traditional techniques, including genetic interaction, histology, immunofluorescence, and protein biochemistry. These techniques are effective for mechanistic, hypothesis-driven studies, which lead to a detailed understanding of the role of single genes in well-defined biological problems. However, neurodegenerative diseases are highly complex and affect multiple cellular organelles and processes over time. The advent of new technologies and the omics age provides a unique opportunity to understand the global cellular perturbations underlying complex diseases. Flexible model organisms such as Drosophila are ideal for adapting these new technologies because of their strong annotation and high tractability. One challenge with these small animals, though, is the purification of enough informational molecules (DNA, mRNA, protein, metabolites) from highly relevant tissues such as fly brains. Other challenges include collecting large numbers of flies for experimental replicates (critical for statistical robustness) and developing consistent procedures for the purification of high-quality biological material. Here, we describe procedures for collecting thousands of fly heads and extracting transcripts and metabolites to understand how global changes in gene expression and metabolism contribute to neurodegenerative diseases. These procedures are easily scalable and can be applied to the study of proteomic and epigenomic contributions to disease.
Genetics, Issue 73, Biochemistry, Molecular Biology, Neurobiology, Neuroscience, Bioengineering, Cellular Biology, Anatomy, Neurodegenerative Diseases, Biological Assay, Drosophila, fruit fly, head separation, purification, mRNA, RNA, cDNA, DNA, transcripts, metabolites, replicates, SCA3, neurodegeneration, NMR, gene expression, animal model
A Method for Murine Islet Isolation and Subcapsular Kidney Transplantation
Institutions: The Ohio State University.
Since the early pioneering work of Ballinger and Reckard demonstrating that transplantation of islets of Langerhans into diabetic rodents could normalize their blood glucose levels, islet transplantation has been proposed as a potential treatment for type 1 diabetes1,2. More recently, advances in human islet transplantation have further strengthened this view1,3. However, two major limitations prevent islet transplantation from becoming a widespread clinical reality: (a) the requirement for large numbers of islets per patient, which severely reduces the number of potential recipients, and (b) the need for heavy immunosuppression, which particularly affects pediatric patients due to their vulnerability to long-term immunosuppression. Strategies that can overcome these limitations have the potential to enhance the therapeutic utility of islet transplantation.
Islet transplantation under the mouse kidney capsule is a widely accepted model for investigating strategies to improve islet transplantation. This experiment requires the isolation of high-quality islets and their implantation into diabetic recipients. Both procedures involve surgical steps that can be better demonstrated by video than by text. Here, we document the detailed steps for these procedures in both video and written protocol. We also briefly discuss different transplantation models: syngeneic, allogeneic, syngeneic autoimmune, and allogeneic autoimmune.
Medicine, Issue 50, islet isolation, islet transplantation, diabetes, murine, pancreas