Pubmed Article
Efficiencies of internet-based digital and paper-based scientific surveys and the estimated costs and time for different-sized cohorts.
PLoS ONE
PUBLISHED: 01-01-2014
To evaluate the relative efficiencies of five Internet-based digital and three paper-based scientific surveys and to estimate the costs for different-sized cohorts.
Authors: Inti Zlobec, Guido Suter, Aurel Perren, Alessandro Lugli.
Published: 09-23-2014
ABSTRACT
Biomarker research relies on tissue microarrays (TMA). TMAs are produced by repeated transfer of small tissue cores from a ‘donor’ block into a ‘recipient’ block and are then used for a variety of biomarker applications. The construction of conventional TMAs is labor intensive, imprecise, and time-consuming. Here, a protocol using next-generation Tissue Microarrays (ngTMA) is outlined. ngTMA is based on TMA planning and design, digital pathology, and automated tissue microarraying. The protocol is illustrated using an example of 134 metastatic colorectal cancer patients. Histological, statistical, and logistical aspects are considered, such as the tissue type, specific histological regions, and cell types for inclusion in the TMA, the number of tissue spots, sample size, statistical analysis, and number of TMA copies. Histological slides for each patient are scanned and uploaded onto a web-based digital platform. There, they are annotated (marked) multiple times using a 0.6-2.0 mm diameter tool, with different colors distinguishing tissue areas. Donor blocks and 12 ‘recipient’ blocks are loaded into the instrument. Digital slides are retrieved and matched to donor block images. Repeated arraying of annotated regions is performed automatically, resulting in an ngTMA. In this example, six ngTMAs are planned containing six different tissue types/histological zones. Two copies of the ngTMAs are desired. Three to four slides for each patient are scanned; 3 scan runs are necessary and performed overnight. All slides are annotated; different colors are used to represent the different tissues/zones, namely tumor center, invasion front, tumor/stroma, lymph node metastases, liver metastases, and normal tissue. 17 annotations/case are made; annotation time is 2-3 min/case. 12 ngTMAs are produced containing 4,556 spots. Arraying time is 15-20 hr.
Due to its precision, flexibility and speed, ngTMA is a powerful tool to further improve the quality of TMAs used in clinical and translational research.
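The planning arithmetic reported above is easy to verify. A minimal sketch, with all numbers taken from the abstract and the per-case annotation time assumed to be the midpoint of the quoted 2-3 min range:

```python
# Illustrative ngTMA planning arithmetic using the figures quoted in the abstract.
patients = 134
annotations_per_case = 17     # distinct annotated regions per patient
copies = 2                    # two copies of each ngTMA are desired

spots_per_copy = patients * annotations_per_case
total_spots = spots_per_copy * copies
print(total_spots)            # 4556, matching the 4,556 spots reported

# Assumed midpoint of the reported 2-3 min/case annotation time:
annotation_minutes = patients * 2.5
print(round(annotation_minutes / 60, 1))   # ~5.6 hr of annotation work
```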
18 Related JoVE Articles!
Laboratory Estimation of Net Trophic Transfer Efficiencies of PCB Congeners to Lake Trout (Salvelinus namaycush) from Its Prey
Authors: Charles P. Madenjian, Richard R. Rediske, James P. O'Keefe, Solomon R. David.
Institutions: U. S. Geological Survey, Grand Valley State University, Shedd Aquarium.
A technique for laboratory estimation of net trophic transfer efficiency (γ) of polychlorinated biphenyl (PCB) congeners to piscivorous fish from their prey is described herein. During a 135-day laboratory experiment, we fed bloater (Coregonus hoyi) that had been caught in Lake Michigan to lake trout (Salvelinus namaycush) kept in eight laboratory tanks. Bloater is a natural prey for lake trout. In four of the tanks, a relatively high flow rate was used to ensure relatively high activity by the lake trout, whereas a low flow rate was used in the other four tanks, allowing for low lake trout activity. On a tank-by-tank basis, the amount of food eaten by the lake trout on each day of the experiment was recorded. Each lake trout was weighed at the start and end of the experiment. Four to nine lake trout from each of the eight tanks were sacrificed at the start of the experiment, and all 10 lake trout remaining in each of the tanks were euthanized at the end of the experiment. We determined concentrations of 75 PCB congeners in the lake trout at the start of the experiment, in the lake trout at the end of the experiment, and in bloaters fed to the lake trout during the experiment. Based on these measurements, γ was calculated for each of 75 PCB congeners in each of the eight tanks. Mean γ was calculated for each of the 75 PCB congeners for both active and inactive lake trout. Because the experiment was replicated in eight tanks, the standard error about mean γ could be estimated. Results from this type of experiment are useful in risk assessment models to predict future risk to humans and wildlife eating contaminated fish under various scenarios of environmental contamination.
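The per-tank calculation can be sketched as below. The abstract does not give the authors' exact mass-balance equations, so this assumes γ is the fraction of the ingested congener retained by the fish over the experiment; the function name and all input values are hypothetical illustrations.

```python
def net_trophic_transfer_efficiency(c_start, m_start, c_end, m_end, c_prey, food_eaten):
    """Estimate gamma for one PCB congener in one tank.

    c_* are congener concentrations (ng/g), m_* are fish masses (g),
    food_eaten is total prey mass consumed (g). Assumes gamma is the
    fraction of ingested congener retained in the fish -- a hedged
    reading of the abstract, not the authors' exact mass balance.
    """
    burden_gained = c_end * m_end - c_start * m_start   # ng accumulated in fish
    burden_ingested = c_prey * food_eaten               # ng consumed with prey
    return burden_gained / burden_ingested

# Example with made-up numbers for one tank over the 135-day experiment:
gamma = net_trophic_transfer_efficiency(
    c_start=50.0, m_start=400.0,    # fish at start
    c_end=80.0, m_end=600.0,        # fish at end
    c_prey=30.0, food_eaten=1500.0  # bloater fed during the experiment
)
print(round(gamma, 3))   # 0.622
```

Averaging such per-tank estimates across the four replicate tanks per activity level gives the mean γ and its standard error described in the abstract.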
Environmental Sciences, Issue 90, trophic transfer efficiency, polychlorinated biphenyl congeners, lake trout, activity, contaminants, accumulation, risk assessment, toxic equivalents
Analysis of RNA Processing Reactions Using Cell Free Systems: 3' End Cleavage of Pre-mRNA Substrates in vitro
Authors: Joseph Jablonski, Mark Clementz, Kevin Ryan, Susana T. Valente.
Institutions: The Scripps Research Institute, City College of New York.
The 3’ end of mammalian mRNAs is not formed by abrupt termination of transcription by RNA polymerase II (RNAPII). Instead, RNAPII synthesizes precursor mRNA beyond the end of the mature RNA, and the mature 3’ end is generated by endonucleolytic cleavage at a specific site. Cleavage of the precursor RNA normally occurs 10-30 nt downstream of the consensus polyA signal (AAUAAA), after a CA dinucleotide. Proteins of the cleavage complex, a multifactorial protein complex of approximately 800 kDa, accomplish this specific nuclease activity. Specific RNA sequences upstream and downstream of the polyA site control the recruitment of the cleavage complex. Immediately after cleavage, pre-mRNAs are polyadenylated by polyA polymerase (PAP) to produce mature, stable RNA messages. Processing of the 3’ end of an RNA transcript may be studied using cellular nuclear extracts with specific radiolabeled RNA substrates: a long 32P-labeled uncleaved precursor RNA is incubated with nuclear extracts in vitro, and cleavage is assessed by gel electrophoresis and autoradiography. When proper cleavage occurs, a shorter 5’ cleaved product is detected and quantified. Here, we describe the cleavage assay in detail using, as an example, the 3’ end processing of HIV-1 mRNAs.
Infectious Diseases, Issue 87, Cleavage, Polyadenylation, mRNA processing, Nuclear extracts, 3' Processing Complex
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
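A full factorial design, the simplest DoE building block, can be generated in a few lines. The factors and levels below are hypothetical placeholders, not the ones used in the study (which relied on software-guided optimal experiment combinations rather than exhaustive factorials):

```python
from itertools import product

# Hypothetical factors/levels for a transient-expression screen;
# the study's actual factors are described in the article itself.
factors = {
    "promoter":        ["35S", "nos"],
    "incubation_temp": [22, 25, 28],       # deg C
    "leaf_age":        ["young", "old"],
}

# Full factorial design: one run for every combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))   # 2 * 3 * 2 = 12 runs
```

Software-guided designs (e.g. D-optimal subsets with step-wise augmentation, as the abstract describes) keep the run count manageable when the full factorial grows too large.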
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
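An oscillation test of this kind is usually reduced via the standard compound-pendulum relation: the period of small oscillations about a pivot gives the moment of inertia about that pivot, and the parallel-axis theorem shifts it to the center of mass. A minimal sketch under that assumption (the authors' exact apparatus equations are not given in the abstract; the numbers are made up):

```python
import math

def moment_of_inertia_cm(mass, period, d, g=9.81):
    """Moment of inertia about the center of mass from a pendulum
    oscillation test (textbook compound-pendulum relation, not
    necessarily the authors' exact apparatus equations).

    mass   -- prosthesis mass (kg)
    period -- measured oscillation period (s)
    d      -- pivot-axis to center-of-mass distance (m)
    """
    # T = 2*pi*sqrt(I_pivot / (m*g*d))  =>  I_pivot = m*g*d*T^2 / (4*pi^2)
    i_pivot = mass * g * d * period**2 / (4 * math.pi**2)
    return i_pivot - mass * d**2        # parallel-axis theorem

# Example with made-up values for a below-knee prosthesis:
print(round(moment_of_inertia_cm(mass=1.2, period=1.1, d=0.25), 4))
```

The center-of-mass location itself would come from the reaction board measurement; here it enters only as the input `d`.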
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
Using Continuous Data Tracking Technology to Study Exercise Adherence in Pulmonary Rehabilitation
Authors: Amanda K. Rizk, Rima Wardini, Emilie Chan-Thim, Barbara Trutschnigg, Amélie Forget, Véronique Pepin.
Institutions: Concordia University, Concordia University, Hôpital du Sacré-Coeur de Montréal.
Pulmonary rehabilitation (PR) is an important component in the management of respiratory diseases. The effectiveness of PR depends upon adherence to exercise training recommendations. The study of exercise adherence is thus a key step towards the optimization of PR programs. To date, mostly indirect measures, such as rates of participation, completion, and attendance, have been used to determine adherence to PR. The purpose of the present protocol is to describe how continuous data tracking technology can be used to measure adherence to a prescribed aerobic training intensity on a second-by-second basis. In our investigations, adherence has been defined as the percent time spent within a specified target heart rate range. As such, using a combination of hardware and software, heart rate is measured, tracked, and recorded during cycling, second by second, for each participant and each exercise session. Using statistical software, the data are subsequently extracted and analyzed. The same protocol can be applied to determine adherence to other measures of exercise intensity, such as time spent at a specified wattage, level, or speed on the cycle ergometer. Furthermore, the hardware and software are also available to measure adherence to other modes of training, such as the treadmill, elliptical, stepper, and arm ergometer. The present protocol, therefore, is broadly applicable to the direct measurement of adherence to aerobic exercise.
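The adherence definition above (percent time within a target heart rate range) reduces to a simple calculation over the second-by-second recordings. A minimal sketch with made-up data and hypothetical names:

```python
def adherence(hr_samples, target_low, target_high):
    """Percent of second-by-second heart-rate samples (bpm) falling
    within the prescribed target range, inclusive."""
    in_zone = sum(target_low <= hr <= target_high for hr in hr_samples)
    return 100.0 * in_zone / len(hr_samples)

# One sample per second over a (toy) 10-second stretch of cycling,
# with a prescribed target zone of 120-135 bpm:
hr = [118, 121, 125, 130, 133, 129, 127, 140, 142, 126]
print(adherence(hr, 120, 135))   # 70.0 -> 7 of 10 samples in zone
```

The same function applies unchanged to wattage, level, or speed traces; only the samples and target bounds differ.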
Medicine, Issue 81, Data tracking, exercise, rehabilitation, adherence, patient compliance, health behavior, user-computer interface.
The Generation of Higher-order Laguerre-Gauss Optical Beams for High-precision Interferometry
Authors: Ludovico Carbone, Paul Fulda, Charlotte Bond, Frank Brueckner, Daniel Brown, Mengyao Wang, Deepali Lodhia, Rebecca Palmer, Andreas Freise.
Institutions: University of Birmingham.
Thermal noise in high-reflectivity mirrors is a major impediment for several types of high-precision interferometric experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. This is, for example, the case for future gravitational-wave observatories, whose sensitivity to gravitational-wave signals is expected to be limited in the most sensitive frequency band by atomic vibration of their mirror masses. One promising approach being pursued to overcome this limitation is to employ higher-order Laguerre-Gauss (LG) optical beams in place of the conventionally used fundamental mode. Owing to their more homogeneous light intensity distribution, these beams average more effectively over the thermally driven fluctuations of the mirror surface, which in turn reduces the uncertainty in the mirror position sensed by the laser light. We demonstrate a promising method to generate higher-order LG beams by shaping a fundamental Gaussian beam with the help of diffractive optical elements. We show that with the conventional sensing and control techniques known for stabilizing fundamental laser beams, higher-order LG modes can be purified and stabilized just as well, at a comparably high level. A set of diagnostic tools allows us to control and tailor the properties of the generated LG beams. This enabled us to produce an LG beam with the highest purity reported to date. The demonstrated compatibility of higher-order LG modes with standard interferometry techniques and with the use of standard spherical optics makes them an ideal candidate for application in a future generation of high-precision interferometry.
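The radial intensity of a higher-order LG mode can be sketched from the textbook expression, which is enough to see why these beams are more ring-like and spread the light more widely than the fundamental mode. This is a generic formula, not tied to the authors' optical setup:

```python
import math

def genlaguerre(p, a, x):
    """Generalized Laguerre polynomial L_p^a(x) via the standard
    three-term recurrence (pure stdlib, no scipy required)."""
    if p == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + a - x
    for k in range(2, p + 1):
        prev, cur = cur, ((2*k - 1 + a - x) * cur - (k - 1 + a) * prev) / k
    return cur

def lg_radial_intensity(p, l, r, w):
    """Unnormalized radial intensity of an LG_{p,l} beam of waist w:
    I(r) ~ x^|l| * [L_p^|l|(x)]^2 * exp(-x), with x = 2 r^2 / w^2."""
    x = 2.0 * r**2 / w**2
    return x**abs(l) * genlaguerre(p, abs(l), x)**2 * math.exp(-x)

# Higher-order modes with l != 0 vanish on the optical axis (dark center),
# e.g. an LG33-type mode:
print(lg_radial_intensity(3, 3, 0.0, 1.0))   # 0.0
```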
Physics, Issue 78, Optics, Astronomy, Astrophysics, Gravitational waves, Laser interferometry, Metrology, Thermal noise, Laguerre-Gauss modes, interferometry
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
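The core of the recasting recipe, multiplying cross-section by acceptance, efficiency, and luminosity to obtain an expected signal yield and comparing it with the observed 95% CL upper limit in a signal region, can be sketched as follows. All numbers and names are made up for illustration:

```python
def expected_signal(cross_section_pb, acceptance, efficiency, lumi_ifb):
    """Expected signal yield in one signal region:
    N = sigma * A * eps * L  (1 pb * 1 fb^-1 = 1000 events)."""
    return cross_section_pb * 1000.0 * acceptance * efficiency * lumi_ifb

def excluded(n_expected, n95_upper_limit):
    """A model point is excluded when its expected yield exceeds the
    observed 95% CL upper limit on signal events in that region."""
    return n_expected > n95_upper_limit

# Hypothetical model point: 0.05 pb cross-section, A*eps from the
# published acceptance/efficiency tables, 20 fb^-1 of data:
n = expected_signal(cross_section_pb=0.05, acceptance=0.25,
                    efficiency=0.5, lumi_ifb=20.0)
print(n, excluded(n, n95_upper_limit=30.0))   # 125.0 True
```

Conservative versus aggressive limits then differ in how A and ε are chosen (e.g. taking the least or most favorable simplified-model values) when scanning signal regions.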
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Assessing Neurodegenerative Phenotypes in Drosophila Dopaminergic Neurons by Climbing Assays and Whole Brain Immunostaining
Authors: Maria Cecilia Barone, Dirk Bohmann.
Institutions: University of Rochester Medical Center .
Drosophila melanogaster is a valuable model organism to study aging and pathological degenerative processes in the nervous system. The advantages of the fly as an experimental system include its genetic tractability, short life span, and the possibility to observe and quantitatively analyze complex behaviors. The expression of disease-linked genes in specific neuronal populations of the Drosophila brain can be used to model human neurodegenerative diseases such as Parkinson's and Alzheimer's5. Dopaminergic (DA) neurons are among the most vulnerable neuronal populations in the aging human brain. In Parkinson's disease (PD), the most common neurodegenerative movement disorder, the accelerated loss of DA neurons leads to a progressive and irreversible decline in locomotor function. In addition to age and exposure to environmental toxins, loss of DA neurons is exacerbated by specific mutations in the coding or promoter regions of several genes. The identification of such PD-associated alleles provides the experimental basis for the use of Drosophila as a model to study neurodegeneration of DA neurons in vivo. For example, the expression of the PD-linked human α-synuclein gene in Drosophila DA neurons recapitulates some features of the human disease, e.g. progressive loss of DA neurons and declining locomotor function2. Accordingly, this model has been successfully used to identify potential therapeutic targets in PD8. Here we describe two assays that have commonly been used to study age-dependent neurodegeneration of DA neurons in Drosophila: a climbing assay based on the startle-induced negative geotaxis response, and tyrosine hydroxylase immunostaining of whole adult brain mounts to monitor the number of DA neurons at different ages. In both cases, in vivo expression of UAS transgenes specifically in DA neurons can be achieved by using a tyrosine hydroxylase (TH) promoter-Gal4 driver line3,10.
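Climbing assays of this kind are commonly scored as the fraction of flies that cross a threshold height within a fixed time after the startle. A minimal, hypothetical scoring sketch (exact scoring conventions vary between labs, and this is not claimed to be the authors' formula):

```python
def climbing_index(n_crossed, n_total):
    """Fraction of flies climbing above the threshold line within the
    scoring window after a startle (one common negative-geotaxis score;
    conventions differ between labs)."""
    return n_crossed / n_total

# Toy comparison of young vs aged cohorts (made-up counts of flies
# crossing the line within, say, 10 s of tapping the vial):
young = climbing_index(18, 20)
aged = climbing_index(7, 20)
print(young, aged)   # 0.9 0.35
```

Tracking this index across ages, alongside TH-positive neuron counts from the immunostaining assay, relates the behavioral decline to DA neuron loss.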
Neuroscience, Issue 74, Genetics, Neurobiology, Molecular Biology, Cellular Biology, Biomedical Engineering, Medicine, Developmental Biology, Drosophila melanogaster, neurodegenerative diseases, negative geotaxis, tyrosine hydroxylase, dopaminergic neuron, α-synuclein, neurons, immunostaining, animal model
High Throughput Single-cell and Multiple-cell Micro-encapsulation
Authors: Todd P. Lagus, Jon F. Edd.
Institutions: Vanderbilt University.
Microfluidic encapsulation methods have been previously utilized to capture cells in picoliter-scale aqueous, monodisperse drops, providing confinement from a bulk fluid environment with applications in high throughput screening, cytometry, and mass spectrometry. We describe a method to not only encapsulate single cells, but to repeatedly capture a set number of cells (here we demonstrate one- and two-cell encapsulation) to study both isolation and the interactions between cells in groups of controlled sizes. By combining drop generation techniques with cell and particle ordering, we demonstrate controlled encapsulation of cell-sized particles for efficient, continuous encapsulation. Using an aqueous particle suspension and immiscible fluorocarbon oil, we generate aqueous drops in oil with a flow focusing nozzle. The aqueous flow rate is sufficiently high to create ordering of particles, which reach the nozzle at integer multiple frequencies of the drop generation frequency, encapsulating a controlled number of cells in each drop. For representative results, 9.9 μm polystyrene particles are used as cell surrogates. This study shows a single-particle encapsulation efficiency Pk=1 of 83.7% and a double-particle encapsulation efficiency Pk=2 of 79.5%, as compared with the respective Poisson efficiencies of 39.3% and 33.3%. The effect of consistent cell and particle concentration is demonstrated to be of major importance for efficient encapsulation, and dripping to jetting transitions are also addressed.
Introduction
Continuous-media aqueous cell suspensions share a common fluid environment, which allows cells to interact in parallel and also homogenizes the contributions of individual cells to measurements made on the media.
High-throughput encapsulation of cells into picoliter-scale drops confines the samples to protect drops from cross-contamination, enable a measure of cellular diversity within samples, prevent dilution of reagents and expressed biomarkers, and amplify signals from bioreactor products. Drops also provide the ability to re-merge drops into larger aqueous samples or with other drops for intercellular signaling studies.1,2 The reduction in dilution implies stronger detection signals for higher accuracy measurements as well as the ability to reduce potentially costly sample and reagent volumes.3 Encapsulation of cells in drops has been utilized to improve detection of protein expression,4 antibodies,5,6 enzymes,7 and metabolic activity8 for high throughput screening, and could be used to improve high throughput cytometry.9 Additional studies present applications in bio-electrospraying of cell containing drops for mass spectrometry10 and targeted surface cell coatings.11 Some applications, however, have been limited by the lack of ability to control the number of cells encapsulated in drops. Here we present a method of ordered encapsulation12 which increases the demonstrated encapsulation efficiencies for one and two cells and may be extrapolated for encapsulation of a larger number of cells. To achieve monodisperse drop generation, microfluidic "flow focusing" enables the creation of controllable-size drops of one fluid (an aqueous cell mixture) within another (a continuous oil phase) by using a nozzle at which the streams converge.13 For a given nozzle geometry, the drop generation frequency f and drop size can be altered by adjusting oil and aqueous flow rates Qoil and Qaq. As the flow rates increase, the flows may transition from drop generation to unstable jetting of aqueous fluid from the nozzle.14 When the aqueous solution contains suspended particles, particles become encapsulated and isolated from one another at the nozzle. 
For drop generation using a randomly distributed aqueous cell suspension, the average fraction of drops Dk containing k cells is dictated by Poisson statistics, where Dk = λ^k exp(-λ)/k! and λ is the average number of cells per drop. The fraction of cells which end up in the "correctly" encapsulated drops is calculated using Pk = (k × Dk)/Σ(k' × Dk'). The subtle difference between the two metrics is that Dk relates to the utilization of aqueous fluid and the amount of drop sorting that must be completed following encapsulation, whereas Pk relates to the utilization of the cell sample. As an example, one could use a dilute cell suspension (low λ) to encapsulate drops where most drops containing cells would contain just one cell. While the efficiency metric Pk would be high, the majority of drops would be empty (low Dk), thus requiring a sorting mechanism to remove empty drops and also reducing throughput.15 Combining drop generation with inertial ordering provides the ability to encapsulate drops with more predictable numbers of cells per drop and higher throughputs than random encapsulation. Inertial focusing was first discovered by Segre and Silberberg16 and refers to the tendency of finite-sized particles to migrate to lateral equilibrium positions in channel flow. Inertial ordering refers to the tendency of the particles and cells to passively organize into equally spaced, staggered, constant-velocity trains. Both focusing and ordering require sufficiently high flow rates (high Reynolds number) and particle sizes (high particle Reynolds number).17,18 Here, the Reynolds number Re = uDh/ν and the particle Reynolds number Rep = Re(a/Dh)², where u is a characteristic flow velocity, Dh [= 2wh/(w + h)] is the hydraulic diameter, ν is the kinematic viscosity, a is the particle diameter, w is the channel width, and h is the channel height. Empirically, the length required to achieve fully ordered trains decreases as Re and Rep increase.
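The Poisson benchmark defined above can be computed directly, which makes clear why random encapsulation performs poorly and why ordered encapsulation is attractive. A minimal sketch of the two metrics:

```python
import math

def d_k(k, lam):
    """Poisson fraction of drops containing exactly k cells:
    D_k = lam**k * exp(-lam) / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

def p_k(k, lam, kmax=100):
    """Fraction of *cells* landing in drops with exactly k cells:
    P_k = k*D_k / sum_k'(k'*D_k'); the denominator equals lam."""
    denom = sum(kp * d_k(kp, lam) for kp in range(1, kmax + 1))
    return k * d_k(k, lam) / denom

lam = 1.0   # on average one cell per drop
print(round(d_k(1, lam), 3))   # 0.368 -> only ~37% of drops hold one cell
print(round(p_k(1, lam), 3))   # 0.368 -> ~37% of cells are singly encapsulated
```

Against this baseline, the >80% single-cell occupancy reported for ordered encapsulation is roughly a two-fold improvement over the best random case.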
Note that the high Re and Rep requirements (for this study on the order of 5 and 0.5, respectively) may conflict with the need to keep aqueous flow rates low to avoid jetting at the drop generation nozzle. Additionally, high flow rates lead to higher shear stresses on cells, which are not addressed in this protocol. The previous ordered encapsulation study demonstrated that over 90% of singly encapsulated HL60 cells under similar flow conditions to those in this study maintained cell membrane integrity.12 However, the effect of the magnitude and time scales of shear stresses will need to be carefully considered when extrapolating to different cell types and flow parameters. The overlapping of the cell ordering, drop generation, and cell viability aqueous flow rate constraints provides an ideal operational regime for controlled encapsulation of single and multiple cells. Because very few studies address inter-particle train spacing,19,20 determining the spacing is most easily done empirically and will depend on channel geometry, flow rate, particle size, and particle concentration. Nonetheless, the equal lateral spacing between trains implies that cells arrive at predictable, consistent time intervals. When drop generation occurs at the same rate at which ordered cells arrive at the nozzle, the cells become encapsulated within the drop in a controlled manner. This technique has been utilized to encapsulate single cells with throughputs on the order of 15 kHz,12 a significant improvement over previous studies reporting encapsulation rates on the order of 60-160 Hz.4,15 In the controlled encapsulation work, over 80% of drops contained one and only one cell, a significant efficiency improvement over Poisson (random) statistics, which predicts less than 40% efficiency on average.12 In previous controlled encapsulation work,12 the average number of particles per drop λ was tuned to provide single-cell encapsulation. 
We hypothesize that through tuning of flow rates, we can efficiently encapsulate any number of cells per drop when λ is equal or close to the number of desired cells per drop. While single-cell encapsulation is valuable in determining individual cell responses from stimuli, multiple-cell encapsulation provides information relating to the interaction of controlled numbers and types of cells. Here we present a protocol, representative results using polystyrene microspheres, and discussion for controlled encapsulation of multiple cells using a passive inertial ordering channel and drop generation nozzle.
Bioengineering, Issue 64, Drop-based microfluidics, inertial microfluidics, ordering, focusing, cell encapsulation, single-cell biology, cell signaling
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg, University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. Because ITS2 research mainly focused on the highly variable ITS2 sequence, the marker was long confined to low-level phylogenetics. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11, accurately reannotated10. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum-energy-based fold12 (direct fold) results in a correct, four-helix conformation. If this is not the case, the structure is predicted by homology modeling13, in which an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be folded correctly by the direct fold. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures; it also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection, and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies a first phylogenetic analysis to a few mouse clicks, while also providing tools and data for comprehensive large-scale analyses.
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
Quantitative Imaging of Lineage-specific Toll-like Receptor-mediated Signaling in Monocytes and Dendritic Cells from Small Samples of Human Blood
Authors: Feng Qian, Ruth R. Montgomery.
Institutions: Yale University School of Medicine .
Individual variations in immune status determine responses to infection and contribute to disease severity and outcome. Aging is associated with an increased susceptibility to viral and bacterial infections and decreased responsiveness to vaccines, with a well-documented decline in humoral as well as cell-mediated immune responses1,2. We have recently assessed the effects of aging on Toll-like receptors (TLRs), key components of the innate immune system that detect microbial infection and trigger antimicrobial host defense responses3. In a large cohort of healthy human donors, we showed that peripheral blood monocytes from the elderly have decreased expression and function of certain TLRs4, and found similarly reduced TLR levels and signaling responses in dendritic cells (DCs), antigen-presenting cells that are pivotal in the linkage between innate and adaptive immunity5. We have shown dysregulation of TLR3 in macrophages and lower production of IFN by DCs from elderly donors in response to infection with West Nile virus6,7. Paramount to our understanding of immunosenescence, and to therapeutic intervention, is a detailed understanding of which specific cell types respond and of the mechanism(s) of signal transduction. Traditional studies of immune responses through imaging of primary cells and surveying of cell markers by FACS or immunoblot have advanced our understanding significantly; however, these studies are generally limited technically by the small sample volume available from patients and the inability to conduct complex laboratory techniques on multiple human samples. ImageStream combines quantitative flow cytometry with simultaneous high-resolution digital imaging, and thus facilitates investigation of multiple cell populations contemporaneously for efficient capture of patient susceptibility.
Here we demonstrate the use of ImageStream in DCs to assess TLR7/8 activation-mediated increases in phosphorylation and nuclear translocation of a key transcription factor, NF-κB, which initiates transcription of numerous genes that are critical for immune responses8. Using this technology, we have also recently demonstrated a previously unrecognized alteration of TLR5 signaling and the NF-κB pathway in monocytes from older donors that may contribute to altered immune responsiveness in aging9.
Immunology, Issue 62, monocyte, dendritic cells, Toll-like receptors, fluorescent imaging, signaling, FACS, aging
Production of Tissue Microarrays, Immunohistochemistry Staining and Digitalization Within the Human Protein Atlas
Authors: Caroline Kampf, IngMarie Olsson, Urban Ryberg, Evelina Sjöstedt, Fredrik Pontén.
Institutions: Uppsala University .
The tissue microarray (TMA) technology provides the means for high-throughput analysis of multiple tissues and cells. The technique is used within the Human Protein Atlas project for global analysis of protein expression patterns in normal human tissues, cancer and cell lines. Here we present the assembly of 1 mm cores, retrieved from microscopically selected representative tissues, into a single recipient TMA block. The number and size of cores in a TMA block can be varied from approximately forty 2 mm cores to hundreds of 0.6 mm cores. The advantage of using TMA technology is that large amount of data can rapidly be obtained using a single immunostaining protocol to avoid experimental variability. Importantly, only limited amount of scarce tissue is needed, which allows for the analysis of large patient cohorts 1 2. Approximately 250 consecutive sections (4 μm thick) can be cut from a TMA block and used for immunohistochemical staining to determine specific protein expression patterns for 250 different antibodies. In the Human Protein Atlas project, antibodies are generated towards all human proteins and used to acquire corresponding protein profiles in both normal human tissues from 144 individuals and cancer tissues from 216 different patients, representing the 20 most common forms of human cancer. Immunohistochemically stained TMA sections on glass slides are scanned to create high-resolution images from which pathologists can interpret and annotate the outcome of immunohistochemistry. Images together with corresponding pathology-based annotation data are made publically available for the research community through the Human Protein Atlas portal (www.proteinatlas.org) (Figure 1) 3 4. The Human Protein Atlas provides a map showing the distribution and relative abundance of proteins in the human body. 
The current version contains over 11 million images with protein expression data for 12,238 unique proteins, corresponding to more than 61% of all proteins encoded by the human genome.
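The throughput figures above (hundreds of small cores per block, roughly 250 sections per block at 4 μm) can be sanity-checked with simple arithmetic. A minimal sketch; the block geometry below (35 x 20 mm recipient area, 1 mm usable depth) is an assumed illustrative value, not taken from the protocol:

```python
def cores_per_block(block_w_mm, block_h_mm, core_d_mm, gap_mm):
    """Upper bound on cores arranged on a square grid in a recipient block
    (computed in micrometers to avoid floating-point surprises)."""
    pitch_um = int((core_d_mm + gap_mm) * 1000)   # center-to-center spacing
    return (int(block_w_mm * 1000) // pitch_um) * (int(block_h_mm * 1000) // pitch_um)

def sections_per_block(usable_depth_mm, section_um):
    """Number of sections of a given thickness from a block's usable depth."""
    return int(usable_depth_mm * 1000) // section_um

# Assumed illustrative geometry: a 35 x 20 mm recipient area, 1 mm usable depth.
small_cores = cores_per_block(35, 20, 0.6, 0.2)   # hundreds of 0.6 mm cores
large_cores = cores_per_block(35, 20, 2.0, 0.5)   # far fewer 2 mm cores
sections = sections_per_block(1.0, 4)             # ~250 sections at 4 um
```

The grid-packing bound is deliberately crude; commercial arrayers and block shrinkage reduce the usable count in practice.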
Genetics, Issue 63, Immunology, Molecular Biology, tissue microarray, immunohistochemistry, slide scanning, the Human Protein Atlas, protein profiles
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score that indicates how accurate the predictions are likely to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. This structural information can be supplied by users based on experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was ranked among the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Experimental Manipulation of Body Size to Estimate Morphological Scaling Relationships in Drosophila
Authors: R. Craig Stillwell, Ian Dworkin, Alexander W. Shingleton, W. Anthony Frankino.
Institutions: University of Houston, Michigan State University.
The scaling of body parts is a central feature of animal morphology1-7. Within species, morphological traits need to be correctly proportioned to the body for the organism to function; larger individuals typically have larger body parts and smaller individuals generally have smaller body parts, such that overall body shape is maintained across a range of adult body sizes. The requirement for correct proportions means that individuals within species usually exhibit low variation in relative trait size. In contrast, relative trait size can vary dramatically among species and is a primary mechanism by which morphological diversity is produced. Over a century of comparative work has established these intra- and interspecific patterns3,4. Perhaps the most widely used approach to describe this variation is to calculate the scaling relationship between the size of two morphological traits using the allometric equation y = bx^α, where x and y are the size of the two traits, such as organ and body size8,9. This equation describes the within-group (e.g., species, population) scaling relationship between two traits as both vary in size. Log-transformation of this equation produces a simple linear equation, log(y) = log(b) + α·log(x), and log-log plots of the size of different traits among individuals of the same species typically reveal linear scaling with an intercept of log(b) and a slope of α, called the 'allometric coefficient'9,10. Morphological variation among groups is described by differences in scaling relationship intercepts or slopes for a given trait pair. Consequently, variation in the parameters of the allometric equation (b and α) elegantly describes the shape variation captured in the relationship between organ and body size within and among biological groups (see 11,12).
Not all traits scale linearly with each other or with body size (e.g., 13,14). Hence, morphological scaling relationships are most informative when the data are taken from the full range of trait sizes. Here we describe how simple experimental manipulation of diet can be used to produce the full range of body size in insects. This permits an estimation of the full scaling relationship for any given pair of traits, allowing a complete description of how shape covaries with size and a robust comparison of scaling relationship parameters among biological groups. Although we focus on Drosophila, our methodology should be applicable to nearly any fully metamorphic insect.
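The log-transformation described above reduces estimation of the allometric parameters to ordinary least squares on log-log data. A minimal sketch with synthetic trait values (the numbers are illustrative, not from the study):

```python
import math
import random

def fit_allometry(x, y):
    """Fit log(y) = log(b) + alpha*log(x) by ordinary least squares.
    Returns (b, alpha) of the allometric equation y = b * x**alpha."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    alpha = (sum((a - mx) * (c - my) for a, c in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    b = math.exp(my - alpha * mx)
    return b, alpha

# Synthetic example: body sizes spanning a broad range (as the diet
# manipulation provides), with a trait scaling as y = 0.5 * x**1.2.
random.seed(1)
body = [random.uniform(1, 10) for _ in range(50)]
trait = [0.5 * x ** 1.2 for x in body]
b, alpha = fit_allometry(body, trait)
```

With noiseless data the regression recovers b and α exactly; real data would scatter around the line, which is why sampling the full body-size range matters for stable slope estimates.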
Developmental Biology, Issue 56, Drosophila, allometry, morphology, body size, scaling, insect
Bioassays for Monitoring Insecticide Resistance
Authors: Audra L.E. Miller, Kelly Tindall, B. Rogers Leonard.
Institutions: University of Missouri, Delta Research Center, Louisiana State University Agricultural Center.
Pest resistance to pesticides is an increasing problem because pesticides are an integral part of high-yielding production agriculture. When few products are labeled for an individual pest within a particular crop system, chemical control options are limited. Therefore, the same product(s) are used repeatedly and continual selection pressure is placed on the target pest. There are both financial and environmental costs associated with the development of resistant populations. The cost of pesticide resistance has been estimated at approximately $1.5 billion annually in the United States. This paper describes protocols currently used to monitor arthropod (specifically insect) populations for the development of resistance. The adult vial test is used to measure the toxicity of contact insecticides, and a modification of this test is used for plant-systemic insecticides. In these bioassays, insects are exposed to technical grade insecticide and responses (mortality) are recorded at a specific post-exposure interval. The mortality data are subjected to log-dose probit analysis to generate an estimate of the lethal concentration producing mortality in 50% of the target population (LC50) and a series of confidence limits (CLs) as estimates of data variability. When these data are collected for a range of insecticide-susceptible populations, the LC50 values can be used as baseline data for future monitoring. After populations have been exposed to products, the results can be compared to a previously determined LC50 using the same methodology.
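The log-dose probit step above can be sketched as follows: mortality proportions are converted to probits (the inverse normal CDF), regressed against log10(dose), and the LC50 is the dose at probit zero (50% mortality). A minimal illustration with made-up bioassay data; production analyses use maximum-likelihood probit fitting and also report the confidence limits mentioned above:

```python
import math
from statistics import NormalDist

def lc50(doses, mortality):
    """Estimate LC50 from doses and mortality proportions (strictly 0-1)
    by linear regression of probit(mortality) on log10(dose)."""
    nd = NormalDist()
    x = [math.log10(d) for d in doses]
    y = [nd.inv_cdf(p) for p in mortality]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (c - my) for a, c in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    # probit = 0 at 50% mortality, so log10(LC50) = -intercept / slope
    return 10 ** (-intercept / slope)

# Made-up adult-vial-test data: dose (ug/vial) vs. proportion dead.
doses = [0.1, 0.3, 1.0, 3.0, 10.0]
dead = [0.05, 0.20, 0.50, 0.80, 0.95]
estimate = lc50(doses, dead)
```

Comparing such an estimate against a susceptible-population baseline LC50 is how a resistance ratio would be computed.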
Microbiology, Issue 46, Resistance monitoring, Insecticide Resistance, Pesticide Resistance, glass-vial bioassay
Quantifying Agonist Activity at G Protein-coupled Receptors
Authors: Frederick J. Ehlert, Hinako Suga, Michael T. Griffin.
Institutions: University of California, Irvine, University of California, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors. Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation over time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2). Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of Kobs and the relative efficacy of an agonist1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M^-1. Our method of analyzing agonist concentration-response curves3,4 consists of global nonlinear regression using the operational model5. We describe a procedure using the software application Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ).
The estimate of τKobs for one agonist, divided by that of another, is a relative measure of Kb (RAi)6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys3. Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
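A minimal numerical illustration of the relative-activity idea (a simplification, not the authors' Prism global-fitting procedure): under the operational model, at low agonist concentrations the response is approximately Em·τ·Kobs·[A], so the ratio of initial slopes for two agonists estimates RAi = (τKobs)_A / (τKobs)_B. All parameter values below are invented:

```python
def operational_response(A, Em, tau, Kobs):
    """Operational-model response to agonist concentration A, written with
    the affinity constant: E = Em*tau*Kobs*A / (1 + Kobs*A*(1 + tau))."""
    return Em * tau * Kobs * A / (1.0 + Kobs * A * (1.0 + tau))

def tau_kobs_from_initial_slope(A_low, E_low, Em):
    """As A -> 0, E ~= Em * tau * Kobs * A, so tau*Kobs ~= E / (Em * A)."""
    return E_low / (Em * A_low)

Em = 100.0
# Two hypothetical agonists with invented parameters.
agonist_A = dict(tau=5.0, Kobs=1e7)   # efficacy 5, affinity 1e7 M^-1
agonist_B = dict(tau=2.0, Kobs=4e6)

A_low = 1e-11  # M; far below half-maximal activation for both agonists
RAi = (tau_kobs_from_initial_slope(A_low, operational_response(A_low, Em, **agonist_A), Em)
       / tau_kobs_from_initial_slope(A_low, operational_response(A_low, Em, **agonist_B), Em))
# True ratio: (5 * 1e7) / (2 * 4e6) = 6.25
```

In practice τKobs is extracted by fitting full concentration-response curves rather than a single low-concentration point, which is why the protocol uses global nonlinear regression.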
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
Imaging Protein-protein Interactions in vivo
Authors: Tom Seegar, William Barton.
Institutions: Virginia Commonwealth University.
Protein-protein interactions are a hallmark of all essential cellular processes. However, many of these interactions are transient or energetically weak, preventing their identification and analysis through traditional biochemical methods such as co-immunoprecipitation. In this regard, the genetically encodable fluorescent proteins (GFP, RFP, etc.), with their overlapping fluorescence spectra, have revolutionized our ability to monitor weak interactions in vivo using Förster resonance energy transfer (FRET)1-3. Here, we detail our use of a FRET-based proximity assay for monitoring receptor-receptor interactions on the endothelial cell surface.
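FRET works as a proximity assay because transfer efficiency falls off with the sixth power of donor-acceptor distance, E = 1/(1 + (r/R0)^6), where R0 is the Förster distance of the fluorophore pair. A minimal sketch (the R0 value is an illustrative assumption, in the typical few-nanometer range for fluorescent protein pairs):

```python
def fret_efficiency(r_nm, R0_nm):
    """Forster transfer efficiency at donor-acceptor separation r."""
    return 1.0 / (1.0 + (r_nm / R0_nm) ** 6)

R0 = 5.0  # nm; illustrative Forster distance for a donor/acceptor pair
# Efficiency is 50% at r = R0 and collapses to ~1.5% at r = 2*R0,
# which is why a FRET signal implies near-molecular contact.
half = fret_efficiency(R0, R0)
far = fret_efficiency(2 * R0, R0)
```

This steepness is what lets the assay distinguish genuinely interacting receptors from receptors that are merely co-localized in the same membrane region.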
Cellular Biology, Issue 44, Förster resonance energy transfer (FRET), confocal microscopy, angiogenesis, fluorescent proteins, protein interactions, receptors
Digital Microfluidics for Automated Proteomic Processing
Authors: Mais J. Jebrail, Vivienne N. Luk, Steve C. C. Shih, Ryan Fobel, Alphonsus H. C. Ng, Hao Yang, Sergio L. S. Freire, Aaron R. Wheeler.
Institutions: University of Toronto, Donnelly Centre for Cellular and Biomolecular Research, University of Toronto.
Clinical proteomics has emerged as an important new discipline, promising the discovery of biomarkers that will be useful for early diagnosis and prognosis of disease. While clinical proteomic methods vary widely, a common characteristic is the need for (i) extraction of proteins from extremely heterogeneous fluids (i.e. serum, whole blood, etc.) and (ii) extensive biochemical processing prior to analysis. Here, we report a new digital microfluidics (DMF)-based method integrating several processing steps used in clinical proteomics, including protein extraction, resolubilization, reduction, alkylation and enzymatic digestion. Digital microfluidics is a microscale fluid-handling technique in which nanoliter- to microliter-sized droplets are manipulated on an open surface. Droplets are positioned on top of an array of electrodes coated by a dielectric layer; when an electrical potential is applied to the droplet, charges accumulate on either side of the dielectric. These charges serve as electrostatic handles that can be used to control droplet position, and by biasing a sequence of electrodes in series, droplets can be made to dispense, move, merge, mix, and split on the surface. DMF is therefore a natural fit for carrying out rapid, sequential, multistep, miniaturized automated biochemical assays. This represents a significant advance over conventional methods (relying on manual pipetting or robots) and has the potential to be a useful new tool in clinical proteomics. Mais J. Jebrail, Vivienne N. Luk, and Steve C. C. Shih contributed equally to this work. Sergio L. S. Freire's current address is the University of the Sciences in Philadelphia, 600 South 43rd Street, Philadelphia, PA 19104.
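The droplet-transport principle above (biasing a sequence of electrodes so a droplet steps from pad to pad) can be illustrated with a toy simulation; the 1-D electrode layout and adjacency rule are invented for illustration only:

```python
def route_droplet(start, electrode_sequence):
    """Toy DMF model: a droplet on a 1-D electrode array moves onto an
    energized pad only if that pad is adjacent to its current position
    (otherwise the field gradient never reaches the droplet edge)."""
    pos = start
    path = [pos]
    for energized in electrode_sequence:
        if abs(energized - pos) == 1:
            pos = energized
        path.append(pos)
    return path

# Energizing pads 1, 2, 3, 4 in series marches a droplet from pad 0 to pad 4.
march = route_droplet(0, [1, 2, 3, 4])
# Energizing a non-adjacent pad first leaves the droplet stranded.
stuck = route_droplet(0, [3, 2, 1])
```

Real DMF controllers add timing, droplet sensing, and split/merge operations, but the same "activate the next pad in the sequence" logic underlies all of them.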
Bioengineering, Issue 33, digital microfluidics, protein processing, protein extraction, protein precipitation, biochemical assays, reduction, alkylation, digestion, automation, feedback
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
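Abstract-to-video matching of this kind is typically done by comparing term vectors. A minimal bag-of-words cosine-similarity sketch; JoVE's actual matching algorithm is not public, and the texts below are invented:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a lowercase-tokenized text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

abstract = "tissue microarray construction for biomarker analysis"
videos = {
    "ngTMA protocol": "next generation tissue microarray ngTMA construction",
    "patch clamp": "whole cell patch clamp recording in neurons",
}
ranked = sorted(videos,
                key=lambda k: cosine(vectorize(abstract), vectorize(videos[k])),
                reverse=True)
```

At production scale the same idea is applied with weighted terms (e.g. TF-IDF) and an index, and the top 10-30 scoring videos become the related-methods list.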

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.