Predicting grizzly bear density in western North America.
PLoS ONE
PUBLISHED: 01-01-2013
Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement often focuses on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, 14 of which came from areas currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density, including 2 from unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and support conservation planning, but because our predictions are static, they cannot be used to assess population trend.
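The regression approach described above can be sketched numerically. The covariates and density values below are invented for illustration only (the paper's actual predictors include salmon in the diet, tree cover, productivity, human use, and ruggedness):

```python
import numpy as np

# Hypothetical illustration of the paper's regression approach: relate observed
# grizzly bear densities (bears/1,000 km^2) to ecosystem covariates by ordinary
# least squares, then predict density for an unsampled management area.
# All numbers are invented for demonstration, not the paper's data.

# Columns: productivity index, human-use index, topographic ruggedness
X = np.array([
    [0.80, 0.10, 0.55],
    [0.55, 0.40, 0.30],
    [0.30, 0.70, 0.20],
    [0.90, 0.05, 0.70],
    [0.20, 0.90, 0.10],
])
y = np.array([35.0, 18.0, 6.0, 42.0, 1.0])  # observed densities

A = np.column_stack([np.ones(len(X)), X])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict density for a new area (intercept + covariates)
new_area = np.array([1.0, 0.60, 0.30, 0.40])
predicted_density = float(new_area @ coef)
```

Such a prediction is static, as the abstract notes: it maps habitat and mortality covariates to an expected density but says nothing about trend.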
Authors: Andreas Florian Haas, Ben Knowles, Yan Wei Lim, Tracey McDole Somera, Linda Wegley Kelly, Mark Hatay, Forest Rohwer.
Published: 11-05-2014
ABSTRACT
Here we introduce a series of thoroughly tested and well-standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients) and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods drawn from a dispersed field of scientific disciplines, comprising both established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement, which has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed with these protocols yield the minimum of information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The protocols give easy-to-follow guidelines for conducting comprehensive surveys and discuss critical steps and potential caveats pertinent to each technique.
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
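The oscillation technique rests on the compound-pendulum relation between oscillation period and moment of inertia, combined with the parallel-axis theorem; a sketch with illustrative numbers (not the study's measurements):

```python
import math

# Sketch of the compound-pendulum calculation behind the oscillation technique.
# The prosthesis is swung about a fixed pivot; the oscillation period and the
# pivot-to-center-of-mass distance (from the reaction board) give its moment
# of inertia. All example values are illustrative, not the study's data.

def moment_of_inertia_cm(mass_kg, period_s, pivot_to_cm_m, g=9.81):
    """Moment of inertia about the center of mass (kg*m^2).

    I_pivot = T^2 * m * g * d / (4 * pi^2)   (compound pendulum)
    I_cm    = I_pivot - m * d^2              (parallel-axis theorem)
    """
    i_pivot = period_s**2 * mass_kg * g * pivot_to_cm_m / (4 * math.pi**2)
    return i_pivot - mass_kg * pivot_to_cm_m**2

# Example: 1.3 kg prosthesis, 1.1 s oscillation period, CM 0.25 m below pivot
i_cm = moment_of_inertia_cm(1.3, 1.1, 0.25)
```

The reaction-board measurement supplies the center-of-mass location (`pivot_to_cm_m` here), which is why the two techniques are used together.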
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Isolation of Cellular Lipid Droplets: Two Purification Techniques Starting from Yeast Cells and Human Placentas
Authors: Jaana Mannik, Alex Meyers, Paul Dalhaimer.
Institutions: University of Tennessee, University of Tennessee.
Lipid droplets are dynamic organelles that can be found in most eukaryotic and certain prokaryotic cells. Structurally, the droplets consist of a core of neutral lipids surrounded by a phospholipid monolayer. One of the most useful techniques in determining the cellular roles of droplets has been proteomic identification of bound proteins, which can be isolated along with the droplets. Here, two methods are described to isolate lipid droplets and their bound proteins from two widely divergent eukaryotes: fission yeast and human placental villous cells. Although the two techniques differ, the main method, density gradient centrifugation, is shared by both preparations. This shows the wide applicability of the presented droplet isolation techniques. In the first protocol, yeast cells are converted into spheroplasts by enzymatic digestion of their cell walls. The resulting spheroplasts are then gently lysed in a loose-fitting homogenizer. Ficoll is added to the lysate to provide a density gradient, and the mixture is centrifuged three times. After the first spin, the lipid droplets are localized to the white-colored floating layer of the centrifuge tubes along with the endoplasmic reticulum (ER), the plasma membrane, and vacuoles. Two subsequent spins are used to remove these other three organelles. The result is a layer that contains only droplets and bound proteins. In the second protocol, placental villous cells are isolated from human term placentas by enzymatic digestion with trypsin and DNase I. The cells are homogenized in a loose-fitting homogenizer. Low-speed and medium-speed centrifugation steps are used to remove unbroken cells, cellular debris, nuclei, and mitochondria. Sucrose is added to the homogenate to provide a density gradient, and the mixture is centrifuged to separate the lipid droplets from the other cellular fractions. The purity of the lipid droplets in both protocols is confirmed by western blot analysis.
The droplet fractions from both preps are suitable for subsequent proteomic and lipidomic analysis.
Bioengineering, Issue 86, Lipid droplet, lipid body, fat body, oil body, Yeast, placenta, placental villous cells, isolation, purification, density gradient centrifugation
Optimization and Utilization of Agrobacterium-mediated Transient Protein Production in Nicotiana
Authors: Moneim Shamloul, Jason Trusa, Vadim Mett, Vidadi Yusibov.
Institutions: Fraunhofer USA Center for Molecular Biotechnology.
Agrobacterium-mediated transient protein production in plants is a promising approach to produce vaccine antigens and therapeutic proteins within a short period of time. However, this technology is only just beginning to be applied to large-scale production as many technological obstacles to scale-up are now being overcome. Here, we demonstrate a simple and reproducible method for industrial-scale transient protein production based on vacuum infiltration of Nicotiana plants with Agrobacteria carrying launch vectors. Optimization of Agrobacterium cultivation in AB medium allows direct dilution of the bacterial culture in Milli-Q water, simplifying the infiltration process. Among three tested species of Nicotiana, N. excelsiana (N. benthamiana × N. excelsior) was selected as the most promising host due to the ease of infiltration, high level of reporter protein production, and about two-fold higher biomass production under controlled environmental conditions. Induction of Agrobacterium harboring pBID4-GFP (Tobacco mosaic virus-based) using chemicals such as acetosyringone and monosaccharides had no effect on the protein production level. Infiltrating plants at 50 to 100 mbar for 30 or 60 sec resulted in about 95% infiltration of plant leaf tissues. Infiltration with the Agrobacterium laboratory strain GV3101 showed the highest protein production compared with the laboratory strains LBA4404 and C58C1 and the wild-type strains at6, at10, at77 and A4. Co-expression of a viral RNA silencing suppressor, p23 or p19, in N. benthamiana resulted in earlier accumulation and increased production (15-25%) of the target protein (influenza virus hemagglutinin).
Plant Biology, Issue 86, Agroinfiltration, Nicotiana benthamiana, transient protein production, plant-based expression, viral vector, Agrobacteria
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
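The "software-guided setup of optimal experiment combinations" can be illustrated with the simplest possible design, a full two-level factorial; the factor names and levels below are hypothetical stand-ins, not the study's actual factors:

```python
import itertools

# Minimal sketch of a DoE run table: a full two-level factorial over three
# hypothetical factors. Real DoE software selects balanced subsets of such
# tables (fractional designs) and augments them step-wise.
factors = {
    "incubation_temp_C": [22, 25],
    "plant_age_days": [35, 49],
    "promoter": ["35S", "nos"],
}

names = list(factors)
runs = [dict(zip(names, combo))
        for combo in itertools.product(*factors.values())]

# 2^3 = 8 runs; a half-fraction would use a balanced subset of 4 of these.
n_runs = len(runs)
```

Each dictionary in `runs` is one experimental condition; fitting a regression to the measured yields over such a table is what produces the predictive equations mentioned above.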
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Adaptation of Semiautomated Circulating Tumor Cell (CTC) Assays for Clinical and Preclinical Research Applications
Authors: Lori E. Lowes, Benjamin D. Hedley, Michael Keeney, Alison L. Allan.
Institutions: London Health Sciences Centre, Western University, London Health Sciences Centre, Lawson Health Research Institute, Western University.
The majority of cancer-related deaths occur subsequent to the development of metastatic disease. This highly lethal disease stage is associated with the presence of circulating tumor cells (CTCs). These rare cells have been demonstrated to be of clinical significance in metastatic breast, prostate, and colorectal cancers. The current gold standard in clinical CTC detection and enumeration is the FDA-cleared CellSearch system (CSS). This manuscript outlines the standard protocol utilized by this platform as well as two additional adapted protocols that describe the detailed process of user-defined marker optimization for protein characterization of patient CTCs and a comparable protocol for CTC capture in very low volumes of blood, using standard CSS reagents, for studying in vivo preclinical mouse models of metastasis. In addition, differences in CTC quality between healthy donor blood spiked with cells from tissue culture versus patient blood samples are highlighted. Finally, several commonly discrepant items that can lead to CTC misclassification errors are outlined. Taken together, these protocols will provide a useful resource for users of this platform interested in preclinical and clinical research pertaining to metastasis and CTCs.
Medicine, Issue 84, Metastasis, circulating tumor cells (CTCs), CellSearch system, user defined marker characterization, in vivo, preclinical mouse model, clinical research
Setting-up an In Vitro Model of Rat Blood-brain Barrier (BBB): A Focus on BBB Impermeability and Receptor-mediated Transport
Authors: Yves Molino, Françoise Jabès, Emmanuelle Lacassagne, Nicolas Gaudin, Michel Khrestchatisky.
Institutions: VECT-HORUS SAS, CNRS, NICN UMR 7259.
The blood-brain barrier (BBB) specifically regulates molecular and cellular flux between the blood and the nervous tissue. Our aim was to develop and characterize a highly reproducible rat syngeneic in vitro model of the BBB using co-cultures of primary rat brain endothelial cells (RBEC) and astrocytes to study receptors involved in transcytosis across the endothelial cell monolayer. Astrocytes were isolated by mechanical dissection following trypsin digestion and were frozen for later co-culture. RBEC were isolated from 5-week-old rat cortices. The brains were cleaned of meninges and white matter, and mechanically dissociated following enzymatic digestion. Thereafter, the tissue homogenate was centrifuged in bovine serum albumin to separate vessel fragments from nervous tissue. The vessel fragments underwent a second enzymatic digestion to free endothelial cells from their extracellular matrix. The remaining contaminating cells such as pericytes were further eliminated by plating the microvessel fragments in puromycin-containing medium. The endothelial cells were then passaged onto filters for co-culture with astrocytes grown on the bottom of the wells. RBEC expressed high levels of tight junction (TJ) proteins such as occludin, claudin-5 and ZO-1, with a typical localization at the cell borders. The transendothelial electrical resistance (TEER) of the brain endothelial monolayers, indicating the tightness of the TJs, reached 300 Ω·cm² on average. The endothelial permeability coefficient (Pe) for lucifer yellow (LY) was highly reproducible, with an average of 0.26 ± 0.11 × 10⁻³ cm/min. Brain endothelial cells organized in monolayers expressed the efflux transporter P-glycoprotein (P-gp), showed a polarized transport of rhodamine 123, a ligand for P-gp, and showed specific transport of transferrin-Cy3 and DiILDL across the endothelial cell monolayer.
In conclusion, we provide a protocol for setting up an in vitro BBB model that is highly reproducible due to the quality assurance methods, and that is suitable for research on BBB transporters and receptors.
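For readers unfamiliar with how Pe is derived, the calculation commonly used with Transwell-type BBB models subtracts the empty-filter contribution from the total clearance in reciprocal space; the numbers below are illustrative, not the study's data:

```python
# Sketch of the endothelial permeability coefficient (Pe) calculation commonly
# used with lucifer yellow in Transwell-type BBB models (illustrative values).
# Clearance slopes give PS (permeability-surface area product, ul/min); the
# filter's own contribution is subtracted in reciprocal space.

def endothelial_pe(ps_total_ul_min, ps_filter_ul_min, area_cm2):
    """Pe in cm/min: 1/PSe = 1/PSt - 1/PSf, then Pe = PSe / area."""
    ps_e = 1.0 / (1.0 / ps_total_ul_min - 1.0 / ps_filter_ul_min)
    return (ps_e / 1000.0) / area_cm2  # ul/min -> ml/min (i.e. cm^3/min)

# Example: cells+filter slope 0.35 ul/min, empty filter 4.0 ul/min, 1.12 cm^2
pe = endothelial_pe(0.35, 4.0, 1.12)  # on the order of 10^-4 cm/min
```

With these illustrative numbers Pe comes out in the same 10⁻³-10⁻⁴ cm/min range as the LY value reported above, which is the usual benchmark of a tight monolayer.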
Medicine, Issue 88, rat brain endothelial cells (RBEC), mouse, spinal cord, tight junction (TJ), receptor-mediated transport (RMT), low density lipoprotein (LDL), LDLR, transferrin, TfR, P-glycoprotein (P-gp), transendothelial electrical resistance (TEER)
Laboratory Estimation of Net Trophic Transfer Efficiencies of PCB Congeners to Lake Trout (Salvelinus namaycush) from Its Prey
Authors: Charles P. Madenjian, Richard R. Rediske, James P. O'Keefe, Solomon R. David.
Institutions: U. S. Geological Survey, Grand Valley State University, Shedd Aquarium.
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
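The quantity γ itself is a simple mass ratio; a sketch with invented tank-level numbers:

```python
# Sketch of the net trophic transfer efficiency calculation for one PCB
# congener in one tank (all masses invented for illustration). Gamma is the
# fraction of the congener mass eaten in prey that is retained in the predator.

def net_trophic_transfer_efficiency(mass_fish_end_ng, mass_fish_start_ng,
                                    mass_in_food_ng):
    """gamma = (PCB in lake trout at end - at start) / PCB eaten in bloaters."""
    return (mass_fish_end_ng - mass_fish_start_ng) / mass_in_food_ng

# Example: tank-level whole-body masses of one congener (ng)
gamma = net_trophic_transfer_efficiency(8200.0, 1500.0, 12000.0)
```

Repeating the calculation across the eight tanks is what allows the mean and standard error of γ to be estimated for each congener.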
Environmental Sciences, Issue 90, trophic transfer efficiency, polychlorinated biphenyl congeners, lake trout, activity, contaminants, accumulation, risk assessment, toxic equivalents
Design and Construction of an Urban Runoff Research Facility
Authors: Benjamin G. Wherley, Richard H. White, Kevin J. McInnes, Charles H. Fontanier, James C. Thomas, Jacqueline A. Aitkenhead-Peterson, Steven T. Kelly.
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3× the winter baseline due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m² facility was constructed consisting of 24 individual 33.6 m² field plots, each equipped for measuring total runoff volumes with time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2% and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
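The coefficient of variability used above to judge plot uniformity is the sample standard deviation expressed as a percentage of the mean; a sketch with invented plot runoff volumes:

```python
import statistics

# CV as used for the runoff plots: sample standard deviation as a percentage
# of the mean. The per-plot volumes (liters) below are invented for
# illustration, not the facility's measurements.
volumes = [410, 520, 380, 610, 450, 490]

cv_percent = 100 * statistics.stdev(volumes) / statistics.mean(volumes)
```

A lower CV across plots indicates better between-plot uniformity, which is what the post-sod-installation testing was designed to verify.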
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
A Protocol for Conducting Rainfall Simulation to Study Soil Runoff
Authors: Leonard C. Kibet, Louis S. Saporito, Arthur L. Allen, Eric B. May, Peter J. A. Kleinman, Fawzy M. Hashem, Ray B. Bryant.
Institutions: University of Maryland Eastern Shore, USDA - Agricultural Research Service, University of Maryland Eastern Shore.
Rainfall is a driving force for the transport of environmental contaminants from agricultural soils to surficial water bodies via surface runoff. The objective of this study was to characterize the effects of antecedent soil moisture content on the fate and transport of surface applied commercial urea, a common form of nitrogen (N) fertilizer, following a rainfall event that occurs within 24 hr after fertilizer application. Although urea is assumed to be readily hydrolyzed to ammonium and therefore not often available for transport, recent studies suggest that urea can be transported from agricultural soils to coastal waters where it is implicated in harmful algal blooms. A rainfall simulator was used to apply a consistent rate of uniform rainfall across packed soil boxes that had been prewetted to different soil moisture contents. By controlling rainfall and soil physical characteristics, the effects of antecedent soil moisture on urea loss were isolated. Wetter soils exhibited shorter time from rainfall initiation to runoff initiation, greater total volume of runoff, higher urea concentrations in runoff, and greater mass loadings of urea in runoff. These results also demonstrate the importance of controlling for antecedent soil moisture content in studies designed to isolate other variables, such as soil physical or chemical characteristics, slope, soil cover, management, or rainfall characteristics. Because rainfall simulators are designed to deliver raindrops of similar size and velocity as natural rainfall, studies conducted under a standardized protocol can yield valuable data that, in turn, can be used to develop models for predicting the fate and transport of pollutants in runoff.
Environmental Sciences, Issue 86, Agriculture, Water Pollution, Water Quality, Technology, Industry, and Agriculture, Rainfall Simulator, Artificial Rainfall, Runoff, Packed Soil Boxes, Nonpoint Source, Urea
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM and SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful.
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
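As a minimal illustration of the semi-automated end of this spectrum, global intensity thresholding of a synthetic volume (real EM data would demand the more careful, characteristic-dependent strategies discussed above):

```python
import numpy as np

# Minimal sketch of the simplest semi-automated segmentation step named in the
# text: global intensity thresholding of a synthetic 3D volume, keeping only
# voxels brighter than a user-chosen cutoff. Real EM volumes require strategies
# matched to noise, crispness, and feature crowdedness.

rng = np.random.default_rng(0)
volume = rng.normal(loc=100, scale=10, size=(20, 20, 20))  # "background"
volume[5:10, 5:10, 5:10] += 80                             # bright "organelle"

threshold = 150.0
mask = volume > threshold             # binary segmentation
n_segmented = int(mask.sum())         # voxels assigned to the feature
fraction = n_segmented / volume.size  # share of the whole volume occupied
```

The `fraction` computed here corresponds to the "percentage of the entire volume that a specific region of interest occupies," one of the data set characteristics the text lists as driving the choice of approach.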
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues change dramatically over development [3]. In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
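The minimum-norm estimation named in the keywords can be written in one line of linear algebra: given a lead field L from the head model mapping source amplitudes to sensor readings, the Tikhonov-regularized minimum-norm estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹y. A sketch with a random stand-in lead field (a real one comes from the individual or age-specific head model):

```python
import numpy as np

# Minimum-norm source estimation in its simplest regularized form. The lead
# field here is a random stand-in; in practice it is computed from the head
# model (individual MRI or age-specific template) discussed in the text.

rng = np.random.default_rng(1)
n_sensors, n_sources = 32, 200
L = rng.normal(size=(n_sensors, n_sources))         # stand-in lead field
x_true = np.zeros(n_sources)
x_true[10] = 1.0                                    # one active source
y = L @ x_true + 0.01 * rng.normal(size=n_sensors)  # noisy sensor data

lam = 0.1  # regularization parameter
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
```

Because the problem is underdetermined (far more sources than sensors), the estimate is the smallest-norm source pattern consistent with the data, which is why head-model accuracy matters so much for localization in children.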
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Preparation of Synaptic Plasma Membrane and Postsynaptic Density Proteins Using a Discontinuous Sucrose Gradient
Authors: Marie Kristel Bermejo, Marija Milenkovic, Ali Salahpour, Amy J. Ramsey.
Institutions: University of Toronto.
Neuronal subcellular fractionation techniques allow the quantification of proteins that are trafficked to and from the synapse. As originally described in the late 1960s, proteins associated with the synaptic plasma membrane can be isolated by ultracentrifugation on a sucrose density gradient. Once synaptic membranes are isolated, the macromolecular complex known as the post-synaptic density can be subsequently isolated due to its detergent insolubility. The techniques used to isolate synaptic plasma membranes and post-synaptic density proteins remain essentially the same after more than 40 years and are widely used in current neuroscience research. This article details the fractionation of proteins associated with the synaptic plasma membrane and post-synaptic density using a discontinuous sucrose gradient. The resulting protein preparations are suitable for western blotting or 2D DIGE analysis.
Neurobiology, Issue 91, brain, synapse, western blot, ultracentrifugation, SPM, PSD
The Cell-based L-Glutathione Protection Assays to Study Endocytosis and Recycling of Plasma Membrane Proteins
Authors: Kristine M. Cihil, Agnieszka Swiatecka-Urban.
Institutions: Children's Hospital of Pittsburgh of UPMC, University of Pittsburgh School of Medicine.
Membrane trafficking involves transport of proteins from the plasma membrane to the cell interior (i.e., endocytosis) followed by trafficking to lysosomes for degradation or to the plasma membrane for recycling. The cell-based L-glutathione protection assays can be used to study endocytosis and recycling of protein receptors, channels, transporters, and adhesion molecules localized at the cell surface. The endocytic assay requires labeling of cell surface proteins with a cell membrane-impermeable biotin containing a disulfide bond and an N-hydroxysuccinimide (NHS) ester at 4 °C, a temperature at which membrane trafficking does not occur. Endocytosis of biotinylated plasma membrane proteins is induced by incubation at 37 °C. Next, the temperature is decreased again to 4 °C to stop endocytic trafficking, and the disulfide bond in biotin covalently attached to proteins that have remained at the plasma membrane is reduced with L-glutathione. At this point, only proteins that were endocytosed remain protected from L-glutathione and thus remain biotinylated. After cell lysis, biotinylated proteins are isolated with streptavidin agarose, eluted from the agarose, and the biotinylated protein of interest is detected by western blotting. In the recycling assay, after biotinylation, cells are incubated at 37 °C to load endocytic vesicles with biotinylated proteins, and the disulfide bond in biotin covalently attached to proteins remaining at the plasma membrane is reduced with L-glutathione at 4 °C as in the endocytic assay. Next, cells are incubated again at 37 °C to allow biotinylated proteins from endocytic vesicles to recycle to the plasma membrane. Cells are then incubated at 4 °C, and the disulfide bond in biotin attached to proteins that recycled to the plasma membrane is reduced with L-glutathione. The biotinylated proteins protected from L-glutathione are those that did not recycle to the plasma membrane.
Basic Protocol, Issue 82, Endocytosis, recycling, plasma membrane, cell surface, EZLink, Sulfo-NHS-SS-Biotin, L-Glutathione, GSH, thiol group, disulfide bond, epithelial cells, cell polarization
Viability Assays for Cells in Culture
Authors: Jessica M. Posimo, Ajay S. Unnithan, Amanda M. Gleixner, Hailey J. Choi, Yiran Jiang, Sree H. Pulugulla, Rehana K. Leak.
Institutions: Duquesne University.
Manual cell counts on a microscope are a sensitive means of assessing cellular viability but are time-consuming and therefore expensive. Computerized viability assays are expensive in terms of equipment but can be faster and more objective than manual cell counts. The present report describes the use of three such viability assays: two infrared and one luminescent. Both infrared assays rely on a 16-bit Odyssey Imager. One infrared assay uses the DRAQ5 stain for nuclei combined with the Sapphire stain for cytosol and is visualized in the 700 nm channel. The other infrared assay, an In-Cell Western, uses antibodies against cytoskeletal proteins (α-tubulin or microtubule-associated protein 2) and labels them in the 800 nm channel. The third viability assay is a commonly used luminescent assay for ATP, but we use a quarter of the recommended volume to save on cost. These measurements are all linear and correlate with the number of cells plated, but vary in sensitivity. All three assays circumvent time-consuming microscopy and sample the entire well, thereby reducing sampling error. Finally, all of the assays can easily be completed within one day of the end of the experiment, allowing greater numbers of experiments to be performed within short timeframes. However, they all rely on the assumption that cell numbers remain in proportion to signal strength after treatments, an assumption that is sometimes not met, especially for cellular ATP. Furthermore, if cells increase or decrease in size after treatment, this might affect signal strength without affecting cell number. We conclude that all viability assays, including manual counts, suffer from a number of caveats, but that computerized viability assays are well worth the initial investment. Using all three assays together yields a comprehensive view of cellular structure and function.
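The claim that each assay's signal is linear in the number of cells plated is checkable with a simple correlation on a dilution series; the plate-reader values below are invented for illustration:

```python
import numpy as np

# Linearity check for a viability assay: correlate signal against the number
# of cells plated in a dilution series. The readings below are invented
# stand-ins for, e.g., luminescence counts from the ATP assay.
cells_plated = [1000, 2000, 4000, 8000, 16000]
signal = [210, 400, 830, 1610, 3250]

r = float(np.corrcoef(cells_plated, signal)[0, 1])  # Pearson correlation
```

An r close to 1 over the working range supports using the signal as a proxy for cell number, subject to the caveats about treatment-induced changes in cell size and ATP content noted above.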
Cellular Biology, Issue 83, In-cell Western, DRAQ5, Sapphire, Cell Titer Glo, ATP, primary cortical neurons, toxicity, protection, N-acetyl cysteine, hormesis
50645
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
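The stage-one idea, choosing sequences that minimize an energy over sequence space, can be caricatured in a few lines. The energy table, sequence length, and greedy selection below are all invented for illustration; Protein WISDOM's actual sequence selection optimizes physics-based potentials with pairwise residue-residue terms, which turns the search into a hard combinatorial problem rather than a per-position lookup:

```python
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
LENGTH = 8  # toy design length

# Toy position-specific energy table: energy[i][aa] is a made-up
# contribution of amino acid aa at position i (lower is more stable).
energy = [{aa: random.uniform(0.0, 1.0) for aa in AMINO_ACIDS}
          for _ in range(LENGTH)]

def sequence_energy(seq):
    """Total energy of a sequence under the additive toy model."""
    return sum(energy[i][aa] for i, aa in enumerate(seq))

def greedy_select():
    """Pick the lowest-energy residue independently at each position.

    With only per-position terms this greedy choice is globally optimal;
    adding pairwise terms is what necessitates integer-programming solvers.
    """
    return "".join(min(energy[i], key=energy[i].get) for i in range(LENGTH))

best = greedy_select()
random_seq = "".join(random.choice(AMINO_ACIDS) for _ in range(LENGTH))
print(best, sequence_energy(best), sequence_energy(random_seq))
```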
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
Seawater Sampling and Collection
Authors: Elena Zaikova, Alyse Hawley, David A. Walsh, Steven J. Hallam.
Institutions: University of British Columbia - UBC.
This video documents methods for collecting coastal marine water samples and processing them for various downstream applications, including biomass concentration, nucleic acid purification, cell abundance, nutrient and trace gas analyses. For today's demonstration, samples were collected from the deck of the HMS John Strickland operating in Saanich Inlet. An A-frame derrick, with a multi-purpose winch and cable system, is used in combination with Niskin or Go-Flo water sampling bottles. Conductivity, Temperature, and Depth (CTD) sensors are also used to sample the underlying water mass. To minimize outgassing, trace gas samples are collected first. Then, nutrients, water chemistry, and cell counts are determined. Finally, waters are collected for biomass filtration. The set-up and collection time for a single cast is ~1.5 hours at a maximum depth of 215 meters. Therefore, a total of 6 hours is generally needed to complete the collection series described here.
Molecular Biology, Issue 28, microbial biomass, nucleic acids, nutrients, trace gas, ammonia, sulfide, seawater, fjord, hypoxic, Saanich Inlet
1159
A Noninvasive Hair Sampling Technique to Obtain High Quality DNA from Elusive Small Mammals
Authors: Philippe Henry, Alison Henry, Michael A. Russello.
Institutions: University of British Columbia, Okanagan Campus.
Noninvasive genetic sampling approaches are becoming increasingly important to study wildlife populations. A number of studies have reported using noninvasive sampling techniques to investigate population genetics and demography of wild populations1. This approach has proven to be especially useful when dealing with rare or elusive species2. While a number of these methods have been developed to sample hair, feces and other biological material from carnivores and medium-sized mammals, they have largely remained untested in elusive small mammals. In this video, we present a novel, inexpensive and noninvasive hair snare targeted at an elusive small mammal, the American pika (Ochotona princeps). We describe the general set-up of the hair snare, which consists of strips of packing tape arranged in a web-like fashion and placed along travelling routes in the pikas’ habitat. We illustrate the efficiency of the snare at collecting a large quantity of hair that can then be collected and brought back to the lab. We then demonstrate the use of the DNA IQ system (Promega) to isolate DNA and showcase the utility of this method to amplify commonly used molecular markers including nuclear microsatellites, amplified fragment length polymorphisms (AFLPs), mitochondrial sequences (800 bp) as well as a molecular sexing marker. Overall, we demonstrate the utility of this novel noninvasive hair snare as a sampling technique for wildlife population biologists. We anticipate that this approach will be applicable to a variety of small mammals, opening up areas of investigation within natural populations, while minimizing impact to study organisms.
Genetics, Issue 49, Conservation genetics, noninvasive genetic sampling, Hair snares, Microsatellites, AFLPs, American pika, Ochotona princeps
2791
Modeling Neural Immune Signaling of Episodic and Chronic Migraine Using Spreading Depression In Vitro
Authors: Aya D. Pusic, Yelena Y. Grinberg, Heidi M. Mitchell, Richard P. Kraig.
Institutions: The University of Chicago Medical Center.
Migraine and its transformation to chronic migraine are healthcare burdens in need of improved treatment options. We seek to define how neural immune signaling modulates the susceptibility to migraine, modeled in vitro using spreading depression (SD), as a means to develop novel therapeutic targets for episodic and chronic migraine. SD is the likely cause of migraine aura and migraine pain. It is a paroxysmal loss of neuronal function triggered by initially increased neuronal activity, which slowly propagates within susceptible brain regions. Normal brain function is exquisitely sensitive to, and relies on, coincident low-level immune signaling. Thus, neural immune signaling likely affects electrical activity of SD, and therefore migraine. Pain perception studies of SD in whole animals are fraught with difficulties, but whole animals are well suited to examine systems biology aspects of migraine since SD activates trigeminal nociceptive pathways. However, whole animal studies alone cannot be used to decipher the cellular and neural circuit mechanisms of SD. Instead, in vitro preparations where environmental conditions can be controlled are necessary. Here, it is important to recognize limitations of acute slices and distinct advantages of hippocampal slice cultures. Acute brain slices cannot reveal subtle changes in immune signaling since preparing the slices alone triggers: pro-inflammatory changes that last days, epileptiform behavior due to high levels of oxygen tension needed to vitalize the slices, and irreversible cell injury at anoxic slice centers. In contrast, we examine immune signaling in mature hippocampal slice cultures since the cultures closely parallel their in vivo counterpart with mature trisynaptic function; show quiescent astrocytes, microglia, and cytokine levels; and SD is easily induced in an unanesthetized preparation. 
Furthermore, the slices are long-lived and SD can be induced on consecutive days without injury, making this preparation the sole means to date capable of modeling the neuroimmune consequences of chronic SD, and thus perhaps chronic migraine. We use electrophysiological techniques and non-invasive imaging to measure neuronal cell and circuit functions coincident with SD. Neural immune gene expression variables are measured with qPCR screening, qPCR arrays, and, importantly, use of cDNA preamplification for detection of ultra-low level targets such as interferon-gamma using whole, regional, or specific cell enhanced (via laser dissection microscopy) sampling. Cytokine cascade signaling is further assessed with multiplexed phosphoprotein-related targets, with gene expression and phosphoprotein changes confirmed via cell-specific immunostaining. Pharmacological and siRNA strategies are used to mimic and modulate SD immune signaling.
Neuroscience, Issue 52, innate immunity, hormesis, microglia, T-cells, hippocampus, slice culture, gene expression, laser dissection microscopy, real-time qPCR, interferon-gamma
2910
Preparation of Synaptoneurosomes from Mouse Cortex using a Discontinuous Percoll-Sucrose Density Gradient
Authors: Pamela R. Westmark, Cara J. Westmark, Athavi Jeevananthan, James S. Malter.
Institutions: University of Wisconsin.
Synaptoneurosomes (SNs) are obtained after homogenization and fractionation of mouse brain cortex. They are resealed vesicles or isolated terminals that break away from axon terminals when the cortical tissue is homogenized. The SNs retain pre- and postsynaptic characteristics, which makes them useful in the study of synaptic transmission. They retain the molecular machinery used in neuronal signaling and are capable of uptake, storage, and release of neurotransmitters. The production and isolation of active SNs can be problematic using media such as Ficoll, which can be cytotoxic and require extended centrifugation due to its high density, or filtration and centrifugation methods, which can result in low activity due to mechanical damage of the SNs. However, the use of discontinuous Percoll-sucrose density gradients to isolate SNs provides a rapid method to produce good yields of translationally active SNs. The Percoll-sucrose gradient method is quick and gentle as it employs isotonic conditions, has fewer and shorter centrifugation spins, and avoids centrifugation steps that pellet SNs and cause mechanical damage.
Neuroscience, Issue 55, synaptoneurosomes, synaptosomes, Percoll-sucrose gradients, neurons, synapse, cortex, mouse
3196
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which indicates how accurate the predictions are expected to be in the absence of experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information could be collected by the users based on experimental evidence or biological insights with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
3259
Detection of Protein Palmitoylation in Cultured Hippocampal Neurons by Immunoprecipitation and Acyl-Biotin Exchange (ABE)
Authors: G. Stefano Brigidi, Shernaz X Bamji.
Institutions: University of British Columbia .
Palmitoylation is a post-translational lipid modification involving the attachment of a 16-carbon saturated fatty acid, palmitate, to cysteine residues of substrate proteins through a labile thioester bond [reviewed in 1]. Palmitoylation of a substrate protein increases its hydrophobicity, and typically facilitates its trafficking toward cellular membranes. Recent studies have shown palmitoylation to be one of the most common lipid modifications in neurons1, 2, suggesting that palmitate turnover is an important mechanism by which these cells regulate the targeting and trafficking of proteins. The identification and detection of palmitoylated substrates can therefore improve our understanding of protein trafficking in neurons. Detection of protein palmitoylation in the past has been technically hindered due to the lack of a consensus sequence among substrate proteins, and the reliance on metabolic labeling of palmitoyl-proteins with 3H-palmitate, a time-consuming biochemical assay with low sensitivity. Development of the Acyl-Biotin Exchange (ABE) assay enables more rapid and high sensitivity detection of palmitoylated proteins2-4, and is optimal for measuring the dynamic turnover of palmitate on neuronal proteins. The ABE assay is comprised of three biochemical steps (Figure 1): 1) irreversible blockade of unmodified cysteine thiol groups using N-ethylmaleimide (NEM), 2) specific cleavage and unmasking of the palmitoylated cysteine's thiol group by hydroxylamine (HAM), and 3) selective labeling of the palmitoylated cysteine using a thiol-reactive biotinylation reagent, biotin-BMCC. Purification of the thiol-biotinylated proteins following the ABE steps has differed, depending on the overall goal of the experiment. 
Here, we describe a method to purify a palmitoylated protein of interest in primary hippocampal neurons by an initial immunoprecipitation (IP) step using an antibody directed against the protein, followed by the ABE assay and western blotting to directly measure palmitoylation levels of that protein, which is termed the IP-ABE assay. Low-density cultures of embryonic rat hippocampal neurons have been widely used to study the localization, function, and trafficking of neuronal proteins, making them ideally suited for studying neuronal protein palmitoylation using the IP-ABE assay. The IP-ABE assay mainly requires standard IP and western blotting reagents, and is only limited by the availability of antibodies against the target substrate. This assay can easily be adapted for the purification and detection of transfected palmitoylated proteins in heterologous cell cultures, primary neuronal cultures derived from various brain tissues of both mouse and rat, and even primary brain tissue itself.
Neuroscience, Issue 72, Biochemistry, Neurobiology, Molecular Biology, Cellular Biology, Physiology, Proteins, synapse, cultured hippocampal neurons, palmitoylation, lipid, immunoprecipitation, western blotting, biotin, Acyl-Biotin Exchange, ABE, neuron, brain, cell culture, rat, mouse, animal model
50031
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
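The first analysis step, estimating local texture orientation with a bank of Gabor filters, can be sketched with NumPy alone. The kernel parameters, the synthetic stripe image, and the FFT-based convolution below are illustrative choices, not the settings used in this study:

```python
import numpy as np

def gabor_kernel(theta, frequency=0.15, sigma=4.0, size=21):
    """Real part of a Gabor kernel tuned to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates; the sinusoid runs along the rotated x-axis.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def dominant_orientation(image, n_orientations=8):
    """Angle (radians) of the bank filter with the strongest response energy."""
    angles = [k * np.pi / n_orientations for k in range(n_orientations)]
    energies = []
    for theta in angles:
        kern = gabor_kernel(theta)
        # Circular FFT convolution keeps the sketch dependency-free; the
        # shift from corner-padding the kernel does not change the energy.
        pad = np.zeros_like(image)
        pad[:kern.shape[0], :kern.shape[1]] = kern
        resp = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
        energies.append(float(np.sum(resp ** 2)))
    return angles[int(np.argmax(energies))]

# Synthetic test image: vertical stripes, i.e. intensity varying along x.
img = np.zeros((64, 64))
img[:, ::7] = 1.0
theta_hat = dominant_orientation(img)
```

In the full method, per-pixel orientation fields from such a bank feed the phase-portrait and node-map analysis described above.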
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
50341
Quantifying Agonist Activity at G Protein-coupled Receptors
Authors: Frederick J. Ehlert, Hinako Suga, Michael T. Griffin.
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors. Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2). Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist 1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M-1. Our method of analyzing agonist concentration-response curves 3,4 consists of global nonlinear regression using the operational model 5. We describe a procedure using the software application, Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). 
The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys 3. Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
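The arithmetic relating the fitted parameters is straightforward once τKobs values are in hand. The numbers below are hypothetical, chosen only to show the two ratios:

```python
# Illustrative fitted values (not from this report): tau*Kobs products for
# a test and a reference agonist, plus the constitutive-activity parameter.
tau_Kobs_test = 2.4e7   # M^-1, hypothetical fit for the test agonist
tau_Kobs_ref = 6.0e7    # M^-1, hypothetical fit for the reference agonist
tau_sys = 0.012         # hypothetical free-receptor efficacy parameter

# Relative activity: RAi = (tau*Kobs)_test / (tau*Kobs)_ref
RAi = tau_Kobs_test / tau_Kobs_ref

# With constitutive activity, Kb in absolute units: Kb = tau*Kobs / tau_sys
Kb_test = tau_Kobs_test / tau_sys
print(f"RAi = {RAi:.2f}, Kb(test) = {Kb_test:.2e} M^-1")
```

The fitting itself (global nonlinear regression with the operational model) is done in Prism, as described above; these ratios are the post-fit calculations.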
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
3179
Functional Mapping with Simultaneous MEG and EEG
Authors: Hesheng Liu, Naoaki Tanaka, Steven Stufflebeam, Seppo Ahlfors, Matti Hämäläinen.
Institutions: MGH - Massachusetts General Hospital.
We use magnetoencephalography (MEG) and electroencephalography (EEG) to locate brain areas involved in the processing of simple sensory stimuli and to determine the temporal evolution of their activity. We use somatosensory stimuli to locate the hand somatosensory areas, auditory stimuli to locate the auditory cortices, and visual stimuli in four quadrants of the visual field to locate the early visual areas. These types of experiments are used for functional mapping in epileptic and brain tumor patients to locate eloquent cortices. In basic neuroscience, similar experimental protocols are used to study the orchestration of cortical activity. The acquisition protocol includes quality assurance procedures, subject preparation for the combined MEG/EEG study, and acquisition of evoked-response data with somatosensory, auditory, and visual stimuli. We also demonstrate analysis of the data using the equivalent current dipole model and cortically-constrained minimum-norm estimates. Anatomical MRI data are employed in the analysis for visualization and for deriving tissue boundaries for forward modeling, as well as cortical location and orientation constraints for the minimum-norm estimates.
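The minimum-norm estimate reduces to a regularized linear inverse: x̂ = R Gᵀ (G R Gᵀ + λ² C)⁻¹ y, where G is the forward (gain) matrix, R the source covariance prior, and C the noise covariance. The toy below uses a random forward model purely to show the linear algebra; real analyses derive G from the anatomical MRI, typically with dedicated software:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 32 sensors, 200 candidate cortical sources (hypothetical).
n_sensors, n_sources = 32, 200
G = rng.standard_normal((n_sensors, n_sources))  # forward (gain) matrix
R = np.eye(n_sources)                            # source covariance prior
C = np.eye(n_sensors)                            # sensor noise covariance
lam2 = 1.0 / 9.0                                 # regularization, ~1/SNR^2

# Minimum-norm inverse operator: M = R G^T (G R G^T + lam2 * C)^(-1)
M = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam2 * C)

# Simulate a single active source, add sensor noise, and invert.
x_true = np.zeros(n_sources)
x_true[17] = 1.0
y = G @ x_true + 0.01 * rng.standard_normal(n_sensors)
x_hat = M @ y  # distributed estimate; largest values lie near the true source
```

Because the problem is underdetermined (far more sources than sensors), the estimate is spatially smeared; the cortical orientation constraints mentioned above restrict R to make the inverse better behaved.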
JoVE neuroscience, Issue 40, neuroscience, brain, MEG, EEG, functional imaging
1668
Small Volume (1-3L) Filtration of Coastal Seawater Samples
Authors: David A. Walsh, Elena Zaikova, Steven J. Hallam.
Institutions: University of British Columbia - UBC.
The workflow begins with the collection of coastal marine waters for downstream microbial community, nutrient and trace gas analyses. For today's demonstration, samples were collected from the deck of the HMS John Strickland operating in Saanich Inlet. This video documents small volume (~1 L) filtration of microbial biomass from the water column. The protocol is an extension of the large volume sampling protocol described earlier, with one major difference: here, there is no pre-filtration step, so all size classes of biomass are collected down to the 0.22 μm filter cut-off. Samples collected this way are ideal for nucleic acid analysis. The set-up, filtration, and clean-up steps each take about 20-30 minutes. If using two peristaltic pumps simultaneously, up to 8 samples may be filtered at the same time. To prevent biofilm formation between sampling trips, all filtration equipment must be rinsed with dilute HCl and deionized water and autoclaved immediately after use.
Molecular Biology, Issue 28, microbiology, seawater, filtration, biomass concentration
1163
Large Volume (20L+) Filtration of Coastal Seawater Samples
Authors: David A. Walsh, Elena Zaikova, Steven J. Hallam.
Institutions: University of British Columbia - UBC.
The workflow begins with the collection of coastal marine waters for downstream microbial community, nutrient and trace gas analyses. For this method, samples were collected from the deck of the HMS John Strickland operating in Saanich Inlet. This video documents large volume (≥20 L) filtration of microbial biomass, ranging between 0.22 μm and 2.7 μm in diameter, from the water column. Two 20 L samples can be filtered simultaneously using a single pump unit equipped with four rotating heads. Filtration is done in the field on extended trips, or immediately upon return for day trips. It is important to record the amount of water passing through each sterivex filter unit. To prevent biofilm formation between sampling trips, all filtration equipment must be rinsed with dilute HCl and deionized water and autoclaved immediately after use. This procedure will take approximately 5 hours plus an additional hour for clean up.
Molecular Biology, Issue 28, microbial biomass, filtration, sterivex, GF/D, nucleic acids, seawater, fjord, hypoxic, Saanich Inlet
1161
Interview: Glycolipid Antigen Presentation by CD1d and the Therapeutic Potential of NKT cell Activation
Authors: Mitchell Kronenberg.
Institutions: La Jolla Institute for Allergy and Immunology.
Natural Killer T (NKT) cells are critical determinants of the immune response to cancer, regulation of autoimmune disease, clearance of infectious agents, and the development of atherosclerotic plaques. In this interview, Mitch Kronenberg discusses his laboratory's efforts to understand the mechanism through which NKT cells are activated by glycolipid antigens. Central to these studies is CD1d - the antigen presenting molecule that presents glycolipids to NKT cells. The advent of CD1d tetramer technology, a technique developed by the Kronenberg lab, is critical for the sorting and identification of subsets of specific glycolipid-reactive T cells. Mitch explains how glycolipid agonists are being used as therapeutic agents to activate NKT cells in cancer patients and how CD1d tetramers can be used to assess the state of the NKT cell population in vivo following glycolipid agonist therapy. The current status of ongoing clinical trials using these agonists is discussed, as well as Mitch's predictions for areas in the field of immunology that will have emerging importance in the near future.
Immunology, Issue 10, Natural Killer T cells, NKT cells, CD1 Tetramers, antigen presentation, glycolipid antigens, CD1d, Mucosal Immunity, Translational Research
635
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Authors: John Marshall, Koji Morikawa, Nicholas Manoukis, Charles Taylor.
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
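A minimal flavor of such a model is a one-locus recursion for the frequency of a driving transgene. The drive and cost parameters below are invented, and the haploid-style bookkeeping is far simpler than the mosquito population models discussed in the video:

```python
def next_frequency(p, drive=0.2, cost=0.05):
    """One generation of a toy transgene-frequency recursion.

    `drive` models super-Mendelian transmission of the element and
    `cost` a fitness penalty on carriers; both values are invented.
    """
    w_carrier = (1.0 + drive) * (1.0 - cost)  # net fitness of carriers
    mean_w = p * w_carrier + (1.0 - p)        # population mean fitness
    return p * w_carrier / mean_w

# Release modified mosquitoes at 1% frequency and iterate generations.
p = 0.01
history = [p]
for _ in range(100):
    p = next_frequency(p)
    history.append(p)
# Because net carrier fitness exceeds 1 here, the element spreads to
# near-fixation; raising `cost` above the drive benefit reverses this.
```

Models of this kind, extended with population structure and migration between A. gambiae subspecies, are what let the authors ask whether a released construct would spread before any field release is attempted.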
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
227
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.