Changes in Extremely Hot Summers over the Global Land Area under Various Warming Targets.
PUBLISHED: 06-20-2015
Summer temperature extremes over the global land area were investigated by comparing 26 models of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) with observations from the Goddard Institute for Space Studies (GISS) and the Climate Research Unit (CRU). Monthly data from the observations and models were averaged for each season, and statistics were calculated for individual models before averaging them to obtain ensemble means. Summers with temperature anomalies (relative to 1951-1980) exceeding 3σ (where σ is based on the local internal variability) are defined as "extremely hot". The models reproduced the evolution of the statistical characteristics well, and partly captured the spatial distributions of historical summer temperature extremes. If the global mean temperature increases 2°C relative to the pre-industrial level, "extremely hot" summers are projected to occur over nearly 40% of the land area (multi-model ensemble mean projection). Summers exceeding 5σ warming, which were rarely observed during the reference period, are projected to occur over approximately 10% of the global land area. Scenarios reaching warming levels of 3°C to 5°C were also analyzed. After the 5°C warming target is exceeded, "extremely hot" summers are projected to occur throughout the entire global land area, and summers exceeding 5σ warming would become common over 70% of the land area. In addition, the areas affected by "extremely hot" summers are expected to expand rapidly, by more than 25%/°C, as the global mean temperature increases by up to 3°C, before slowing to less than 16%/°C as the temperature continues to increase beyond 3°C. The area that experiences summers with warming of 5σ or more above the warming target of 2°C is likely to maintain rapid expansion of greater than 17%/°C. To reduce the impacts and damage from severely hot summers, the global mean temperature increase should remain low.
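The σ-based classification described above can be sketched as follows. This is an illustrative reimplementation on synthetic data, not the authors' CMIP5 analysis code; the warming trend, variability, and grid size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic summer-mean temperatures (years x grid cells); values are
# illustrative stand-ins, not CMIP5 model output.
years = np.arange(1900, 2101)
temps = rng.normal(15.0, 0.6, size=(years.size, 1000))
temps += 0.01 * (years - 1951)[:, None]          # simple linear warming trend

# Baseline statistics over the 1951-1980 reference period
base = (years >= 1951) & (years <= 1980)
base_mean = temps[base].mean(axis=0)
sigma = temps[base].std(axis=0, ddof=1)          # local internal variability

# A summer is "extremely hot" where its anomaly exceeds 3 sigma
anomaly = temps - base_mean
extremely_hot = anomaly > 3.0 * sigma

# Fraction of land cells affected, per year
frac_hot = extremely_hot.mean(axis=1)
print(f"fraction of cells 'extremely hot' in 2100: {frac_hot[-1]:.2f}")
```

The same thresholding with 5σ in place of 3σ yields the rarer extreme category discussed in the abstract.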
Evaporation is directly influenced by the interactions between the atmosphere, land surface, and soil subsurface. This work aims to experimentally study evaporation under various surface boundary conditions to improve our current understanding and characterization of this multiphase phenomenon as well as to validate numerical heat and mass transfer theories that couple Navier-Stokes flow in the atmosphere and Darcian flow in the porous media. Experimental data were collected using a unique soil tank apparatus interfaced with a small climate-controlled wind tunnel. The experimental apparatus was instrumented with a suite of state-of-the-art sensor technologies for the continuous and autonomous collection of soil moisture, soil thermal properties, soil and air temperature, relative humidity, and wind speed. This experimental apparatus can be used to generate data under well-controlled boundary conditions, allowing for better control and the gathering of accurate data at scales of interest not feasible in the field. Induced airflow at several distinct wind speeds over the soil surface resulted in unique heat and mass transfer behavior during the different evaporative stages.
Measurement of Greenhouse Gas Flux from Agricultural Soils Using Static Chambers
Authors: Sarah M. Collier, Matthew D. Ruark, Lawrence G. Oates, William E. Jokela, Curtis J. Dell.
Institutions: University of Wisconsin-Madison, USDA-ARS Dairy Forage Research Center, USDA-ARS Pasture Systems and Watershed Management Research Unit.
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. Change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based flux of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable to large-scale GHG emissions studies.
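The flux calculation from chamber headspace samples can be sketched as below; the concentration series, chamber volume, and footprint area are hypothetical illustration values, not numbers from the protocol.

```python
import numpy as np

# Example headspace samples from one static chamber (hypothetical values):
# N2O mixing ratio (ppb) at regular intervals after chamber closure.
time_min = np.array([0.0, 10.0, 20.0, 30.0])
conc_ppb = np.array([330.0, 352.0, 371.0, 393.0])

# Slope of concentration vs. time via least-squares linear regression
slope_ppb_per_min, intercept = np.polyfit(time_min, conc_ppb, 1)

# Convert the mixing-ratio slope to a molar flux using the ideal gas law.
# Assumed chamber geometry and conditions (not from the article):
volume_m3 = 0.010      # chamber headspace volume
area_m2 = 0.049        # soil surface area covered by the chamber
pressure_pa = 101325.0
temp_k = 298.15
R = 8.314              # gas constant, J mol^-1 K^-1

mol_air = pressure_pa * volume_m3 / (R * temp_k)   # moles of air in headspace
flux_nmol_m2_min = slope_ppb_per_min * 1e-9 * mol_air / area_m2 * 1e9
print(f"N2O flux: {flux_nmol_m2_min:.1f} nmol m^-2 min^-1")
```

A nonlinear (e.g., quadratic) fit is sometimes preferred when the headspace concentration visibly saturates over the deployment.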
Environmental Sciences, Issue 90, greenhouse gas, trace gas, gas flux, static chamber, soil, field, agriculture, climate
Measuring Blood Pressure in Mice using Volume Pressure Recording, a Tail-cuff Method
Authors: Alan Daugherty, Debra Rateri, Lu Hong, Anju Balakrishnan.
Institutions: University of Kentucky.
The CODA 8-Channel High Throughput Non-Invasive Blood Pressure system measures blood pressure in up to 8 mice or rats simultaneously. The CODA tail-cuff system uses Volume Pressure Recording (VPR) to measure blood pressure by determining the tail blood volume. A specially designed differential pressure transducer and an occlusion tail-cuff measure the total blood volume in the tail without the need to obtain an individual pulse signal. Special attention is given to the length of the occlusion cuff in order to derive the most accurate blood pressure readings. VPR can easily obtain readings on dark-skinned rodents, such as C57BL6 mice, and is MRI compatible. The CODA system provides measurements of six blood pressure parameters: systolic and diastolic blood pressure, heart rate, mean blood pressure, tail blood flow, and tail blood volume. Measurements can be made on either awake or anesthetized mice or rats. The CODA system includes a controller, laptop computer, software, cuffs, animal holders, infrared warming pads, and an infrared thermometer. Seven different holder sizes accommodate animals from mice as small as 8 grams to rats as large as 900 grams.
Medicine, Issue 27, blood pressure, systolic, diastolic, tail-cuff, mouse, rat
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x the winter baseline due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m2 facility was constructed consisting of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volumes over time and collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2% and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration in both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values of less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to capture much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
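The coefficient of variability used above to assess plot uniformity is simply the sample standard deviation expressed as a percent of the mean. A minimal sketch with hypothetical runoff volumes:

```python
import numpy as np

# Hypothetical runoff volumes (L) from six replicate plots; illustrative
# values only, not measurements from the facility
runoff = np.array([210.0, 180.0, 95.0, 260.0, 140.0, 300.0])

# Coefficient of variability: sample std. deviation as a percent of the mean
cv_percent = runoff.std(ddof=1) / runoff.mean() * 100.0
print(f"CV = {cv_percent:.1f}%")
```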
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Authors: Mayandi Sivaguru, Glenn A. Fried, Carly A. H. Miller, Bruce W. Fouke.
Institutions: University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef-building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for the analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Integrated Field Lysimetry and Porewater Sampling for Evaluation of Chemical Mobility in Soils and Established Vegetation
Authors: Audrey R. Matteson, Denis J. Mahoney, Travis W. Gannon, Matthew L. Polizzotto.
Institutions: North Carolina State University.
Potentially toxic chemicals are routinely applied to land to meet growing demands on waste management and food production, but the fate of these chemicals is often not well understood. Here we demonstrate an integrated field lysimetry and porewater sampling method for evaluating the mobility of chemicals applied to soils and established vegetation. Lysimeters, open columns made of metal or plastic, are driven into bareground or vegetated soils. Porewater samplers, which are commercially available and use vacuum to collect percolating soil water, are installed at predetermined depths within the lysimeters. At prearranged times following chemical application to experimental plots, porewater is collected, and lysimeters, containing soil and vegetation, are exhumed. By analyzing chemical concentrations in the lysimeter soil, vegetation, and porewater, downward leaching rates, soil retention capacities, and plant uptake for the chemical of interest may be quantified. Because field lysimetry and porewater sampling are conducted under natural environmental conditions and with minimal soil disturbance, derived results project real-case scenarios and provide valuable information for chemical management. As chemicals are increasingly applied to land worldwide, the described techniques may be utilized to determine whether applied chemicals pose adverse effects to human health or the environment.
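The partitioning of an applied chemical among soil retention, plant uptake, and leachate can be summarized with a simple mass balance; the recovery values below are hypothetical, not data from the study.

```python
# Hypothetical recoveries (mg) of an applied chemical from one lysimeter;
# illustrative numbers, not measurements from the article
applied_mg = 100.0
soil_mg = 62.0        # retained in the lysimeter soil column
vegetation_mg = 21.0  # taken up by established vegetation
porewater_mg = 9.0    # leached and captured by porewater samplers

recovered = soil_mg + vegetation_mg + porewater_mg
fractions = {
    "soil retention": soil_mg / applied_mg,
    "plant uptake": vegetation_mg / applied_mg,
    "leached (porewater)": porewater_mg / applied_mg,
    "unaccounted": (applied_mg - recovered) / applied_mg,
}
for pool, frac in fractions.items():
    print(f"{pool}: {frac:.0%}")
```

The "unaccounted" pool lumps together losses such as volatilization and degradation that this sampling scheme does not resolve.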
Environmental Sciences, Issue 89, Lysimetry, porewater, soil, chemical leaching, pesticides, turfgrass, waste
A New Technique for Quantitative Analysis of Hair Loss in Mice Using Grayscale Analysis
Authors: Tulasi Ponnapakkam, Ranjitha Katikaneni, Rohan Gulati, Robert Gensure.
Institutions: Children's Hospital at Montefiore.
Alopecia is a common form of hair loss that can occur in many different conditions, including male-pattern hair loss, polycystic ovarian syndrome, and alopecia areata. Alopecia can also occur as a side effect of chemotherapy in cancer patients. In this study, our goal was to develop a consistent and reliable method to quantify hair loss in mice, which will allow investigators to accurately assess and compare new therapeutic approaches for these various forms of alopecia. The method utilizes a standard gel imager to obtain and process images of mice, measuring the light absorption, which occurs in rough proportion to the amount of black (or gray) hair on the mouse. Data quantified in this fashion can then be analyzed using standard statistical techniques (i.e., ANOVA, t-test). This methodology was tested in mouse models of chemotherapy-induced alopecia, alopecia areata, and alopecia from waxing. In this report, the detailed protocol is presented for performing these measurements, including validation data from C57BL/6 and C3H/HeJ strains of mice. This new technique offers a number of advantages, including relative simplicity of application, reliance on equipment that is readily available in most research laboratories, and use of an objective, quantitative assessment that is more robust than subjective evaluation. Improvements in the quantification of hair growth in mice will improve the study of alopecia models and facilitate the evaluation of promising new therapies in preclinical studies.
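The grayscale quantification idea can be sketched as follows, using a synthetic image in place of a gel-imager capture; the threshold and intensity values are assumptions for illustration, not parameters from the protocol.

```python
import numpy as np

# Synthetic 8-bit grayscale image standing in for a gel-imager capture:
# light background, dark (haired) mouse, lighter bald patch
img = np.full((100, 100), 240, dtype=np.uint8)   # light background
img[30:70, 20:80] = 60                            # dark, haired mouse region
img[45:55, 40:60] = 200                           # bald patch reflects light

# Select the animal by simple intensity thresholding, then score hair
# coverage as mean darkness (inverted intensity) over the animal's area
mouse_mask = img < 230
darkness = 255 - img.astype(float)
hair_score = darkness[mouse_mask].mean()
print(f"mean darkness over mouse area: {hair_score:.1f}")
```

Scores computed this way for treatment and control groups could then be compared with ANOVA or a t-test, as the abstract describes.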
Structural Biology, Issue 97, Alopecia, Mice, Grayscale, Hair, Chemotherapy-Induced Alopecia, Alopecia Areata
The Use of High-resolution Infrared Thermography (HRIT) for the Study of Ice Nucleation and Ice Propagation in Plants
Authors: Michael Wisniewski, Gilbert Neuner, Lawrence V. Gusta.
Institutions: Agricultural Research Service (USDA-ARS), Kearneysville, WV, University of Innsbruck, University of Saskatchewan.
Freezing events that occur while plants are actively growing can be lethal, particularly if the plant has no freezing tolerance. Such frost events often have devastating effects on agricultural production and can also play an important role in shaping community structure in natural populations of plants, especially in alpine, sub-arctic, and arctic ecosystems. Therefore, a better understanding of the freezing process in plants can aid the development of frost protection methods and clarify mechanisms of freeze avoidance. Here, we describe a protocol to visualize the freezing process in plants using high-resolution infrared thermography (HRIT). The use of this technology allows one to determine the primary sites of ice formation in plants, how ice propagates, and the presence of ice barriers. Furthermore, it allows one to examine the role of extrinsic and intrinsic nucleators in determining the temperature at which plants freeze and evaluate the ability of various compounds to either affect the freezing process or increase freezing tolerance. The use of HRIT allows one to visualize the many adaptations that have evolved in plants, which directly or indirectly impact the freezing process and ultimately enable plants to survive frost events.
Environmental Sciences, Issue 99, Freeze avoidance, supercooling, ice nucleation active bacteria, frost tolerance, ice crystallization, antifreeze proteins, intrinsic nucleation, extrinsic nucleation, heterogeneous nucleation, homogeneous nucleation, differential thermal analysis
Making Record-efficiency SnS Solar Cells by Thermal Evaporation and Atomic Layer Deposition
Authors: Rafael Jaramillo, Vera Steinmann, Chuanxi Yang, Katy Hartman, Rupak Chakraborty, Jeremy R. Poindexter, Mariela Lizet Castillo, Roy Gordon, Tonio Buonassisi.
Institutions: Massachusetts Institute of Technology, Harvard University.
Tin sulfide (SnS) is a candidate absorber material for Earth-abundant, non-toxic solar cells. SnS offers easy phase control and rapid growth by congruent thermal evaporation, and it absorbs visible light strongly. However, for a long time the record power conversion efficiency of SnS solar cells remained below 2%. Recently we demonstrated new certified record efficiencies of 4.36% using SnS deposited by atomic layer deposition, and 3.88% using thermal evaporation. Here the fabrication procedure for these record solar cells is described, and the statistical distribution of the fabrication process is reported. The standard deviation of efficiency measured on a single substrate is typically over 0.5%. All steps including substrate selection and cleaning, Mo sputtering for the rear contact (cathode), SnS deposition, annealing, surface passivation, Zn(O,S) buffer layer selection and deposition, transparent conductor (anode) deposition, and metallization are described. On each substrate we fabricate 11 individual devices, each with active area 0.25 cm2. Further, a system for high throughput measurements of current-voltage curves under simulated solar light, and external quantum efficiency measurement with variable light bias is described. With this system we are able to measure full data sets on all 11 devices in an automated manner and in minimal time. These results illustrate the value of studying large sample sets, rather than focusing narrowly on the highest performing devices. Large data sets help us to distinguish and remedy individual loss mechanisms affecting our devices.
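Summarizing the per-substrate statistics over the 11 devices might look like the following; the efficiency values are hypothetical stand-ins, not measured data from these cells.

```python
import statistics

# Hypothetical power-conversion efficiencies (%) for the 11 devices on one
# substrate; illustrative values only
efficiencies = [3.9, 3.2, 4.1, 2.8, 3.6, 3.3, 4.0, 2.5, 3.7, 3.4, 3.0]

mean_eff = statistics.mean(efficiencies)
std_eff = statistics.stdev(efficiencies)   # sample standard deviation
champion = max(efficiencies)

print(f"mean = {mean_eff:.2f}%, std = {std_eff:.2f}%, "
      f"champion = {champion:.2f}%")
```

Reporting the full distribution rather than only the champion device is exactly the point the abstract makes about studying large sample sets.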
Engineering, Issue 99, Solar cells, thin films, thermal evaporation, atomic layer deposition, annealing, tin sulfide
How to Ignite an Atmospheric Pressure Microwave Plasma Torch without Any Additional Igniters
Authors: Martina Leins, Sandra Gaiser, Andreas Schulz, Matthias Walker, Uwe Schumacher, Thomas Hirth.
Institutions: University of Stuttgart.
This movie shows how an atmospheric pressure plasma torch can be ignited by microwave power with no additional igniters. After ignition, stable and continuous operation of the plasma is possible, and the plasma torch can be used for many different applications. On one hand, the hot (3,600 K gas temperature) plasma can be used for chemical processes; on the other hand, the cold afterglow (temperatures down to almost room temperature) can be applied for surface processes. Chemical syntheses, for example, are interesting volume processes. Here the microwave plasma torch can be used for the decomposition of waste gases that are harmful and contribute to global warming but are needed as etching gases in growing industry sectors such as semiconductor manufacturing. Another application is the dissociation of CO2. Surplus electrical energy from renewable sources can be used to dissociate CO2 into CO and O2. The CO can be further processed into gaseous or liquid higher hydrocarbons, thereby providing chemical storage of the energy, synthetic fuels, or platform chemicals for the chemical industry. Applications of the afterglow of the plasma torch include the treatment of surfaces to increase the adhesion of lacquer, glue, or paint, and the sterilization or decontamination of different kinds of surfaces. The movie explains how to ignite the plasma solely by microwave power without any additional igniters, e.g., electric sparks. The microwave plasma torch is based on a combination of two resonators: a coaxial one, which provides the ignition of the plasma, and a cylindrical one, which guarantees continuous and stable operation of the plasma after ignition. The plasma can be operated in a long microwave-transparent tube for volume processes or shaped by orifices for surface treatment purposes.
Engineering, Issue 98, atmospheric pressure plasma, microwave plasma, plasma ignition, resonator structure, coaxial resonator, cylindrical resonator, plasma torch, stable plasma operation, continuous plasma operation, high speed camera
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
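For contrast with the software-guided optimal designs used in the study, a minimal two-level full-factorial design can be enumerated directly; the factor names and levels below are hypothetical illustrations, not the parameters actually screened.

```python
from itertools import product

# A two-level full-factorial design for three hypothetical factors
# (illustrative; the article used software-guided optimal DoE setups)
factors = {
    "incubation_temp_C": (22, 28),
    "plant_age_days": (35, 49),
    "incubation_days": (3, 7),
}

# Every combination of low/high levels: 2^3 = 8 experimental runs
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} experimental runs")
for run in runs:
    print(run)
```

Full factorials grow exponentially with the number of factors, which is why DoE software prunes them to smaller optimal subsets, as the abstract notes.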
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Biocontained Carcass Composting for Control of Infectious Disease Outbreak in Livestock
Authors: Tim Reuter, Weiping Xu, Trevor W. Alexander, Brandon H. Gilroyed, G. Douglas Inglis, Francis J. Larney, Kim Stanford, Tim A. McAllister.
Institutions: Lethbridge Research Centre, Dalian University of Technology, Alberta Agriculture and Rural Development.
Intensive livestock production systems are particularly vulnerable to natural or intentional (bioterrorist) infectious disease outbreaks. Large numbers of animals housed within a confined area enable rapid dissemination of most infectious agents throughout a herd. Rapid containment is key to controlling any infectious disease outbreak, thus depopulation is often undertaken to prevent spread of a pathogen to the larger livestock population. In that circumstance, a large number of livestock carcasses and contaminated manure are generated that require rapid disposal. Composting lends itself as a rapid-response disposal method for infected carcasses as well as manure and soil that may harbor infectious agents. We designed a bio-contained mortality composting procedure and tested its efficacy for bovine tissue degradation and microbial deactivation. We used materials available on-farm or purchasable from local farm supply stores so that the system can be implemented at the site of a disease outbreak. In this study, temperatures exceeded 55°C for more than one month, and infectious agents implanted in beef cattle carcasses and manure were inactivated within 14 days of composting. After 147 days, carcasses were almost completely degraded. The few remaining long bones were further degraded with an additional composting cycle in open windrows, and the final mature compost was suitable for land application. Duplicate compost structures (final dimensions 25 m x 5 m x 2.4 m; L x W x H) were constructed using barley straw bales and lined with heavy black silage plastic sheeting. Each was loaded with loose straw, carcasses and manure totaling ~95,000 kg. A 40-cm base layer of loose barley straw was placed in each bunker, onto which were placed 16 feedlot cattle mortalities (average weight 343 kg) aligned transversely at a spacing of approximately 0.5 m.
For passive aeration, lengths of flexible, perforated plastic drainage tubing (15 cm diameter) were placed between adjacent carcasses, extending vertically along both inside walls, with the ends passed through the plastic to the exterior. The carcasses were overlaid with moist aerated feedlot manure (~1.6 m deep) to the top of the bunker. Plastic was folded over the top and sealed with tape to establish a containment barrier, and eight aeration vents (50 x 50 x 15 cm) were placed on the top of each structure to promote passive aeration. After 147 days, losses of volume and mass of composted materials averaged 39.8% and 23.7%, respectively, in each structure.
JoVE Infectious Diseases, Issue 39, compost, livestock, infectious disease, biocontainment
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles .
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
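Pour- and spread-plate colony counts translate to a viable cell concentration through a standard dilution calculation; a minimal sketch with illustrative numbers:

```python
# Estimating viable cell concentration (CFU/ml) from a spread plate.
# The counts and dilution below are illustrative example values.
colonies_counted = 172      # colonies on a countable plate (30-300 range)
dilution_factor = 1e-6      # cumulative dilution of the plated sample
volume_plated_ml = 0.1      # volume spread on the plate

cfu_per_ml = colonies_counted / (dilution_factor * volume_plated_ml)
print(f"{cfu_per_ml:.2e} CFU/ml")
```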
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological roles. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All the predictions are tagged with a confidence score that indicates how accurate the predictions are expected to be without knowing the experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users based on experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Local and Global Methods of Assessing Thermal Nociception in Drosophila Larvae
Authors: Abanti Chattopadhyay, A'Tondra V. Gilstrap, Michael J. Galko.
Institutions: The University of Texas MD Anderson Cancer Center, University of Houston-Downtown, University of Texas Graduate School of Biomedical Sciences.
In this article, we demonstrate assays to study thermal nociception in Drosophila larvae. One assay involves spatially-restricted (local) stimulation of thermal nociceptors1,2 while the second involves a wholesale (global) activation of most or all such neurons3. Together, these techniques allow visualization and quantification of the behavioral functions of Drosophila nociceptive sensory neurons. The Drosophila larva is an established model system to study thermal nociception, a sensory response to potentially harmful temperatures that is evolutionarily conserved across species1,2. The advantages of Drosophila for such studies are the relative simplicity of its nervous system and the sophistication of the genetic techniques that can be used to dissect the molecular basis of the underlying biology4-6. In Drosophila, as in all metazoans, the response to noxious thermal stimuli generally involves a "nocifensive" aversive withdrawal from the presented stimulus7. Such stimuli are detected through free nerve endings, or nociceptors, and the amplitude of the organismal response depends on the number of nociceptors receiving the noxious stimulus8. In Drosophila, it is the class IV dendritic arborization sensory neurons that detect noxious thermal and mechanical stimuli9, in addition to their recently discovered role as photoreceptors10. These neurons, which have been very well studied at the developmental level, arborize over the barrier epidermal sheet and make contacts with nearly all epidermal cells11,12. The single axon of each class IV neuron projects into the ventral nerve cord of the central nervous system11, where it may connect to second-order neurons that project to the brain. Under baseline conditions, nociceptive sensory neurons will not fire until a relatively high threshold is reached. The assays described here allow the investigator to quantify baseline behavioral responses or, presumably, the sensitization that ensues following tissue damage.
Each assay provokes distinct but related locomotory behavioral responses to noxious thermal stimuli and permits the researcher to visualize and quantify various aspects of thermal nociception in Drosophila larvae. The assays can be applied to larvae of desired genotypes or to larvae raised under different environmental conditions that might impact nociception. Since thermal nociception is conserved across species, the findings gleaned from genetic dissection in Drosophila will likely inform our understanding of thermal nociception in other species, including vertebrates.
Neuroscience, Issue 63, Drosophila sensory neurons, thermal nociception, nociceptive sensitization, tissue damage, fly behavioral response, dendritic arborization neurons, allodynia, hyperalgesia, behavioral assay
Polymerase Chain Reaction: Basic Protocol Plus Troubleshooting and Optimization Strategies
Authors: Todd C. Lorenz.
Institutions: University of California, Los Angeles .
In the biological sciences there have been technological advances that catapult the discipline into golden ages of discovery. For example, the field of microbiology was transformed with the advent of Anton van Leeuwenhoek's microscope, which allowed scientists to visualize prokaryotes for the first time. The development of the polymerase chain reaction (PCR) is one of those innovations that changed the course of molecular science, with its impact spanning countless subdisciplines in biology. The theoretical process was outlined by Kleppe and coworkers in 1971; however, it was another 14 years until the complete PCR procedure was described and experimentally applied by Kary Mullis while at Cetus Corporation in 1985. Automation and refinement of this technique progressed with the introduction of a thermostable DNA polymerase from the bacterium Thermus aquaticus, hence the name Taq DNA polymerase. PCR is a powerful amplification technique that can generate an ample supply of a specific segment of DNA (i.e., an amplicon) from only a small amount of starting material (i.e., DNA template or target sequence). While straightforward and generally trouble-free, there are pitfalls that can complicate the reaction and produce spurious results. When PCR fails it can lead to many non-specific DNA products of varying sizes that appear as a ladder or smear of bands on agarose gels. Sometimes no products form at all. Another potential problem occurs when mutations are unintentionally introduced in the amplicons, resulting in a heterogeneous population of PCR products. PCR failures can become frustrating unless patience and careful troubleshooting are employed to sort out and solve the problem(s). This protocol outlines the basic principles of PCR, provides a methodology that will result in amplification of most target sequences, and presents strategies for optimizing a reaction.
By following this PCR guide, students should be able to:
● Set up reactions and thermal cycling conditions for a conventional PCR experiment
● Understand the function of various reaction components and their overall effect on a PCR experiment
● Design and optimize a PCR experiment for any DNA template
● Troubleshoot failed PCR experiments
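Two of the routine calculations behind these objectives can be sketched in a few lines. The primer sequences and cycling times below are illustrative placeholders, not recommendations for any particular template, and the Wallace rule shown is only a rough Tm estimate suitable for short primers (under ~14 nt):

```python
# Minimal sketch of primer Tm estimation and a conventional three-step
# cycling program. Values are generic textbook defaults, not a prescription.

def wallace_tm(primer):
    """Wallace rule: Tm = 2(A+T) + 4(G+C); rough, for primers < ~14 nt."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

# Annealing temperature is commonly set a few degrees below the lower
# primer Tm. Hypothetical 12-nt primers for illustration:
fwd, rev = "AGTCAGTCAGTC", "TTGACCTGAACC"
anneal = min(wallace_tm(fwd), wallace_tm(rev)) - 5

program = [
    ("initial denaturation", 95, 120),   # step, temperature (deg C), seconds
    ("denature",             95, 30),
    ("anneal",           anneal, 30),
    ("extend",               72, 60),    # roughly 1 min per kb for Taq
    ("final extension",      72, 300),
]
```

More accurate Tm predictions (e.g. nearest-neighbor thermodynamics, as discussed in the protocol's primer-design sections) should be preferred for longer primers.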
Basic Protocols, Issue 63, PCR, optimization, primer design, melting temperature, Tm, troubleshooting, additives, enhancers, template DNA quantification, thermal cycler, molecular biology, genetics
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify tract-based metrics. Additionally, application of DTI methods, i.e. comparison of FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole-brain-based and tract-based DTI analysis.
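The voxelwise comparisons described above rest on FA, a standard scalar derived from the three eigenvalues of the diffusion tensor. As a minimal sketch of that metric (plain Python, not the authors' processing pipeline):

```python
import math

# Fractional anisotropy (FA) from the three diffusion-tensor eigenvalues.
# FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2) / sqrt(l1^2+l2^2+l3^2)
# FA is 0 for isotropic diffusion and approaches 1 for strongly
# directional diffusion, as along coherent white-matter tracts.

def fractional_anisotropy(l1, l2, l3):
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0:
        return 0.0
    return math.sqrt(0.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))  # isotropic diffusion -> 0.0
```

In a real pipeline this value is computed per voxel from the fitted tensor, then compared across groups after stereotaxic normalization.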
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence-selection optimization stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold-specificity stage and a binding-affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
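The ~10-30 nm localization precision quoted above depends chiefly on the number of photons detected per molecule. A commonly used estimate that also accounts for pixel size and background noise (the Thompson et al. 2002 formula; all parameter values below are illustrative, not measured) can be sketched as:

```python
import math

# Localization precision estimate for single-molecule localization microscopy.
# sigma^2 = (s^2 + a^2/12)/N + 8*pi*s^4*b^2 / (a^2*N^2)
#   s: PSF standard deviation (nm), N: detected photons,
#   a: pixel size (nm), b: background noise standard deviation (photons).
# Parameter values below are placeholders for illustration only.

def localization_precision(s, N, a, b):
    var = (s ** 2 + a ** 2 / 12) / N + 8 * math.pi * s ** 4 * b ** 2 / (a ** 2 * N ** 2)
    return math.sqrt(var)

# With a ~120 nm PSF and ~1000 photons, precision lands in the few-nm range,
# consistent with the order-of-magnitude gain over the diffraction limit.
print(round(localization_precision(s=120, N=1000, a=100, b=2), 1))
```

The key scaling is that precision improves roughly as the square root of the photon count, which is why labeling strategy and fluorophore brightness dominate achievable resolution.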
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Use of an Eight-arm Radial Water Maze to Assess Working and Reference Memory Following Neonatal Brain Injury
Authors: Stephanie C. Penley, Cynthia M. Gaudet, Steven W. Threlkeld.
Institutions: Rhode Island College, Rhode Island College.
Working and reference memory are commonly assessed using the land-based radial arm maze. However, this paradigm requires pretraining and food deprivation, and may introduce scent-cue confounds. The eight-arm radial water maze is designed to evaluate reference and working memory performance simultaneously by requiring subjects to use extra-maze cues to locate escape platforms, and it remedies the limitations observed in land-based radial arm maze designs. Specifically, subjects are required to avoid the arms previously used for escape during each testing day (working memory) as well as avoid the fixed arms, which never contain escape platforms (reference memory). Re-entries into arms that have already been used for escape during a testing session (and from which the escape platform has thus been removed) and re-entries into reference memory arms are indicative of working memory deficits. Alternatively, first entries into reference memory arms are indicative of reference memory deficits. We used this maze to compare the performance of rats with neonatal brain injury and sham controls following induction of hypoxia-ischemia and show significant deficits in both working and reference memory after eleven days of testing. This protocol could be easily modified to examine many other models of learning impairment.
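The error definitions above can be made concrete with a small scoring sketch. The arm labels and entry sequence below are hypothetical; the logic simply counts any re-entry into a previously visited arm as a working memory error and a first entry into a never-platformed arm as a reference memory error:

```python
# Illustrative scoring of one testing session in the eight-arm radial
# water maze. Arms are labeled 1-8; reference_arms are the fixed arms
# that never contain an escape platform.

def score_session(entries, reference_arms):
    working_errors = 0    # re-entry into any already-visited arm
    reference_errors = 0  # first entry into a never-platformed arm
    visited = set()
    for arm in entries:
        if arm in visited:
            working_errors += 1
        elif arm in reference_arms:
            reference_errors += 1
        visited.add(arm)
    return working_errors, reference_errors

# Hypothetical session: arms 3 and 6 are the fixed reference arms.
print(score_session([1, 3, 1, 7, 3], reference_arms={3, 6}))
```

Per-day error counts scored this way can then be averaged across subjects and testing days to compare injured and sham groups.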
Behavior, Issue 82, working memory, reference memory, hypoxia-ischemia, radial arm maze, water maze
Direct Mouse Trauma/Burn Model of Heterotopic Ossification
Authors: Jonathan R. Peterson, Shailesh Agarwal, R. Cameron Brownley, Shawn J. Loder, Kavitha Ranganathan, Paul S. Cederna, Yuji Mishina, Stewart C. Wang, Benjamin Levi.
Institutions: University of Michigan Medical School, University of Michigan School of Dentistry.
Heterotopic ossification (HO) is the formation of bone outside of the skeleton, which develops following major trauma, burn injuries, and orthopaedic surgical procedures. The majority of animal models used to study HO rely on the application of exogenous substances, such as bone morphogenetic protein (BMP), exogenous cell constructs, or genetic mutations in BMP signaling. While these models are useful, they do not accurately reproduce the inflammatory states that cause the majority of cases of HO. Here we describe a burn/tenotomy model in mice that reliably produces focused HO. This protocol involves creating a partial-thickness contact burn over 30% of the total body surface area on the dorsal skin, as well as division of the Achilles tendon at its midpoint. Relying solely on traumatic injury to induce HO at a predictable location allows for time-course study of endochondral heterotopic bone formation driven by intrinsic physiologic processes and environment alone. This method could prove instrumental in understanding the inflammatory and osteogenic pathways involved in trauma-induced HO. Furthermore, because HO develops in a predictable location and time-course in this model, it allows for research to improve early imaging strategies and treatment modalities to prevent HO formation.
Medicine, Issue 102, Heterotopic Ossification, Burn injury, Mouse model, Inflammation, µCT, Achilles tenotomy
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.