JoVE Visualize
A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.
PUBLISHED: 01-01-2014
This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which helps reduce the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while dictionary notation [1] has been incorporated for computing the matrix inverse, thereby saving unnecessary calculations. These algorithms are classroom oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one can largely avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
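The abstract does not spell out the algorithm itself, so the following is only a minimal sketch of how an arbitrary pivot-selection rule enters a Gaussian-elimination determinant computation; the default rule shown (partial pivoting) and the example matrix are illustrative assumptions, not the authors' space-efficient method.

```python
# Generic Gaussian-elimination determinant with a caller-supplied pivot rule.
# Illustrative only; this is not the space-efficient algorithm of the abstract.

def determinant(matrix, choose_pivot=None):
    a = [row[:] for row in matrix]      # work on a copy
    n = len(a)
    det = 1.0
    for k in range(n):
        # Default rule: partial pivoting (largest magnitude in column k).
        if choose_pivot is None:
            p = max(range(k, n), key=lambda i: abs(a[i][k]))
        else:
            p = choose_pivot(a, k)      # any row index >= k with a nonzero entry
        if a[p][k] == 0:
            return 0.0                  # singular matrix
        if p != k:
            a[k], a[p] = a[p], a[k]     # a row swap flips the determinant's sign
            det = -det
        det *= a[k][k]
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
    return det

print(determinant([[2.0, 1.0], [5.0, 3.0]]))   # expected 1.0
```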
Authors: Itamar Cohen, Yifat Geffen, Guy Ravid, Tommer Ravid.
Published: 11-06-2014
Protein degradation by the ubiquitin-proteasome system (UPS) is a major regulatory mechanism for protein homeostasis in all eukaryotes. The standard approach to determining intracellular protein degradation relies on biochemical assays for following the kinetics of protein decline. Such methods are often laborious and time consuming and therefore not amenable to experiments aimed at assessing multiple substrates and degradation conditions. As an alternative, cell growth-based assays have been developed; in their conventional format, however, these are end-point assays that cannot quantitatively determine relative changes in protein levels. Here we describe a method that faithfully determines changes in protein degradation rates by coupling them to yeast cell-growth kinetics. The method is based on an established selection system where uracil auxotrophy of URA3-deleted yeast cells is rescued by an exogenously expressed reporter protein, comprising a fusion between the essential URA3 gene and a degradation determinant (degron). The reporter protein is designed so that its synthesis rate is constant whilst its degradation rate is determined by the degron. As cell growth in uracil-deficient medium is proportional to the relative levels of Ura3, growth kinetics are entirely dependent on the reporter protein degradation. This method accurately measures changes in intracellular protein degradation kinetics. It was applied to: (a) assessing the relative contribution of known ubiquitin-conjugating factors to proteolysis; (b) structure-function analyses of E2 conjugating enzymes; and (c) identification and characterization of novel degrons. Application of the degron-URA3-based system transcends the protein degradation field, as it can also be adapted to monitoring changes in protein levels associated with functions of other cellular pathways.
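As a rough numerical illustration of the logic described above (constant reporter synthesis, degron-controlled degradation, growth proportional to Ura3 level), the sketch below computes hypothetical steady-state reporter levels for a few assumed degradation rates; all rate constants and the linear growth scaling are assumptions for illustration, not values from the study.

```python
# Toy steady-state calculation for the degron-URA3 reporter logic described above.
# Assumption (for illustration only): constant synthesis rate k_syn and first-order
# degradation k_deg give a steady-state reporter level of k_syn / k_deg, and relative
# growth in uracil-deficient medium scales with that level. All rates are hypothetical.

k_syn = 10.0                                                    # synthesis rate (a.u./min)
degrons = {"stable": 0.01, "moderate": 0.05, "strong": 0.20}    # k_deg in 1/min

reference = k_syn / degrons["stable"]
for name, k_deg in degrons.items():
    steady_state = k_syn / k_deg              # [Ura3]_ss = k_syn / k_deg
    print(f"{name:9s} degron: Ura3_ss = {steady_state:7.1f}, "
          f"relative growth ~ {steady_state / reference:.2f}")
```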
25 Related JoVE Articles!
Using Coculture to Detect Chemically Mediated Interspecies Interactions
Authors: Elizabeth Anne Shank.
Institutions: University of North Carolina at Chapel Hill .
In nature, bacteria rarely exist in isolation; they are instead surrounded by a diverse array of other microorganisms that alter the local environment by secreting metabolites. These metabolites have the potential to modulate the physiology and differentiation of their microbial neighbors and are likely important factors in the establishment and maintenance of complex microbial communities. We have developed a fluorescence-based coculture screen to identify such chemically mediated microbial interactions. The screen involves combining a fluorescent transcriptional reporter strain with environmental microbes on solid media and allowing the colonies to grow in coculture. The fluorescent transcriptional reporter is designed so that the chosen bacterial strain fluoresces when it is expressing a particular phenotype of interest (e.g. biofilm formation, sporulation, virulence factor production, etc.). Screening is performed under growth conditions where this phenotype is not expressed (and therefore the reporter strain is typically nonfluorescent). When an environmental microbe secretes a metabolite that activates this phenotype, it diffuses through the agar and activates the fluorescent reporter construct. This allows inducing-metabolite-producing microbes to be detected: they are the nonfluorescent colonies most proximal to the fluorescent colonies. Thus, this screen allows the identification of environmental microbes that produce diffusible metabolites that activate a particular physiological response in a reporter strain. This publication discusses how to: a) select appropriate coculture screening conditions, b) prepare the reporter and environmental microbes for screening, c) perform the coculture screen, d) isolate putative inducing organisms, and e) confirm their activity in a secondary screen. We developed this method to screen for soil organisms that activate biofilm matrix production in Bacillus subtilis; however, we also discuss considerations for applying this approach to other genetically tractable bacteria.
Microbiology, Issue 80, High-Throughput Screening Assays, Genes, Reporter, Microbial Interactions, Soil Microbiology, Coculture, microbial interactions, screen, fluorescent transcriptional reporters, Bacillus subtilis
Using Flatbed Scanners to Collect High-resolution Time-lapsed Images of the Arabidopsis Root Gravitropic Response
Authors: Halie C Smith, Devon J Niewohner, Grant D Dewey, Autumn M Longo, Tracy L Guy, Bradley R Higgins, Sarah B Daehling, Sarah C. Genrich, Christopher D Wentworth, Tessa L Durham Brooks.
Institutions: Doane College, Doane College.
Research efforts in biology increasingly require use of methodologies that enable high-volume collection of high-resolution data. A challenge laboratories can face is the development and attainment of these methods. Observation of phenotypes in a process of interest is a typical objective of research labs studying gene function and this is often achieved through image capture. A particular process that is amenable to observation using imaging approaches is the corrective growth of a seedling root that has been displaced from alignment with the gravity vector. Imaging platforms used to measure the root gravitropic response can be expensive, relatively low in throughput, and/or labor intensive. These issues have been addressed by developing a high-throughput image capture method using inexpensive, yet high-resolution, flatbed scanners. Using this method, images can be captured every few minutes at 4,800 dpi. The current setup enables collection of 216 individual responses per day. The image data collected is of ample quality for image analysis applications.
Basic Protocol, Issue 83, root gravitropism, Arabidopsis, high-throughput phenotyping, flatbed scanners, image analysis, undergraduate research
Lesion Explorer: A Video-guided, Standardized Protocol for Accurate and Reliable MRI-derived Volumetrics in Alzheimer's Disease and Normal Elderly
Authors: Joel Ramirez, Christopher J.M. Scott, Alicia A. McNeely, Courtney Berezuk, Fuqiang Gao, Gregory M. Szilagyi, Sandra E. Black.
Institutions: Sunnybrook Health Sciences Centre, University of Toronto.
Obtaining in vivo human brain tissue volumetrics from MRI is often complicated by various technical and biological issues. These challenges are exacerbated when significant brain atrophy and age-related white matter changes (e.g. leukoaraiosis) are present. Lesion Explorer (LE) is an accurate and reliable neuroimaging pipeline specifically developed to address such issues commonly observed on MRI of Alzheimer's disease and normal elderly. The pipeline is a complex set of semi-automatic procedures which has been previously validated in a series of internal and external reliability tests1,2. However, LE's accuracy and reliability are highly dependent on properly trained manual operators to execute commands, identify distinct anatomical landmarks, and manually edit/verify various computer-generated segmentation outputs. LE can be divided into 3 main components, each requiring a set of commands and manual operations: 1) Brain-Sizer, 2) SABRE, and 3) Lesion-Seg. Brain-Sizer's manual operations involve editing of the automatic skull-stripped total intracranial vault (TIV) extraction mask, designation of ventricular cerebrospinal fluid (vCSF), and removal of subtentorial structures. The SABRE component requires checking of image alignment along the anterior and posterior commissure (ACPC) plane, and identification of several anatomical landmarks required for regional parcellation. Finally, the Lesion-Seg component involves manual checking of the automatic lesion segmentation of subcortical hyperintensities (SH) for false positive errors. While on-site training of the LE pipeline is preferable, readily available visual teaching tools with interactive training images are a viable alternative. Developed to ensure a high degree of accuracy and reliability, the following is a step-by-step, video-guided, standardized protocol for LE's manual procedures.
Medicine, Issue 86, Brain, Vascular Diseases, Magnetic Resonance Imaging (MRI), Neuroimaging, Alzheimer Disease, Aging, Neuroanatomy, brain extraction, ventricles, white matter hyperintensities, cerebrovascular disease, Alzheimer disease
Oscillation and Reaction Board Techniques for Estimating Inertial Properties of a Below-knee Prosthesis
Authors: Jeremy D. Smith, Abbie E. Ferris, Gary D. Heise, Richard N. Hinrichs, Philip E. Martin.
Institutions: University of Northern Colorado, Arizona State University, Iowa State University.
The purpose of this study was two-fold: 1) demonstrate a technique that can be used to directly estimate the inertial properties of a below-knee prosthesis, and 2) contrast the effects of the proposed technique and that of using intact limb inertial properties on joint kinetic estimates during walking in unilateral, transtibial amputees. An oscillation and reaction board system was validated and shown to be reliable when measuring inertial properties of known geometrical solids. When direct measurements of inertial properties of the prosthesis were used in inverse dynamics modeling of the lower extremity compared with inertial estimates based on an intact shank and foot, joint kinetics at the hip and knee were significantly lower during the swing phase of walking. Differences in joint kinetics during stance, however, were smaller than those observed during swing. Therefore, researchers focusing on the swing phase of walking should consider the impact of prosthesis inertia property estimates on study outcomes. For stance, either one of the two inertial models investigated in our study would likely lead to similar outcomes with an inverse dynamics assessment.
Bioengineering, Issue 87, prosthesis inertia, amputee locomotion, below-knee prosthesis, transtibial amputee
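The abstract does not reproduce the equations, but oscillation techniques of this kind typically rest on the standard compound-pendulum relation between oscillation period and moment of inertia, combined with the parallel-axis theorem. The sketch below uses that generic relation with hypothetical prosthesis values and should not be read as the authors' exact procedure.

```python
# Standard compound-pendulum relations commonly used in oscillation techniques for
# estimating moments of inertia; all numbers below are hypothetical, not from the study.
import math

def moment_of_inertia(mass_kg, period_s, pivot_to_com_m, g=9.81):
    """Return (I about the pivot, I about the center of mass) from the oscillation period."""
    i_pivot = mass_kg * g * pivot_to_com_m * period_s**2 / (4.0 * math.pi**2)
    i_com = i_pivot - mass_kg * pivot_to_com_m**2     # parallel-axis theorem
    return i_pivot, i_com

# Example: a 1.4 kg prosthesis, center of mass 0.18 m from the pivot, mean period 0.95 s.
i_pivot, i_com = moment_of_inertia(1.4, 0.95, 0.18)
print(f"I_pivot = {i_pivot:.4f} kg m^2, I_com = {i_com:.4f} kg m^2")
```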
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. It is important to understand this behavioral plasticity for programs aimed toward assisting insects that are beneficial for agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from a binary yes/no to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Isolation and Quantification of Botulinum Neurotoxin From Complex Matrices Using the BoTest Matrix Assays
Authors: F. Mark Dunning, Timothy M. Piazza, Füsûn N. Zeytin, Ward C. Tucker.
Institutions: BioSentinel Inc., Madison, WI.
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing, while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: the first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation, with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes, and include a discussion of critical parameters for assay success.
Neuroscience, Issue 85, Botulinum, food testing, detection, quantification, complex matrices, BoTest Matrix, Clostridium, potency testing
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
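As a minimal illustration of the DoE workflow sketched above (choose factors, generate a design, fit a model), the following code builds a two-level full-factorial design and fits a main-effects model by least squares. The factor names, coded levels, and response values are hypothetical, and the actual study used software-guided optimal designs with step-wise augmentation rather than a plain full factorial.

```python
# Minimal design-of-experiments sketch: two-level full-factorial design plus a
# main-effects fit by least squares. Factors and responses are hypothetical.
import itertools
import numpy as np

factors = ["promoter_strength", "plant_age", "incubation_temp"]   # coded -1 / +1
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), dtype=float)

# Hypothetical measured expression levels, one per design point (e.g. mg antibody/kg).
response = np.array([12.0, 15.5, 13.1, 17.0, 18.2, 22.8, 19.0, 24.5])

X = np.column_stack([np.ones(len(design)), design])     # intercept + main effects
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

print("intercept:", round(coeffs[0], 2))
for name, effect in zip(factors, coeffs[1:]):
    print(f"main effect of {name}: {round(effect, 2)}")
```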
High-throughput Image Analysis of Tumor Spheroids: A User-friendly Software Application to Measure the Size of Spheroids Automatically and Accurately
Authors: Wenjin Chen, Chung Wong, Evan Vosburgh, Arnold J. Levine, David J. Foran, Eugenia Y. Xu.
Institutions: Raymond and Beverly Sackler Foundation, New Jersey, Rutgers University, Rutgers University, Institute for Advanced Study, New Jersey.
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application – SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images, providing a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary “Manual Initialize” and “Hand Draw” tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model for drug screens in industry and academia.
Cancer Biology, Issue 89, computer programming, high-throughput, image analysis, tumor spheroids, 3D, software application, cancer therapy, drug screen, neuroendocrine tumor cell line, BON-1, cancer research
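SpheroidSizer reports major and minor axial lengths and a volume for each spheroid; the abstract does not give the exact volume formula, so the sketch below uses a common prolate-spheroid approximation, V = (π/6)·major·minor², with hypothetical measurements. The software's own calculation may differ.

```python
# Volume estimate from the major/minor axial lengths that SpheroidSizer reports.
# Uses a common prolate-spheroid approximation, V = (pi/6) * major * minor^2;
# the software's exact formula may differ. Measurements below are hypothetical.
import math

def spheroid_volume(major_um, minor_um):
    return math.pi / 6.0 * major_um * minor_um**2   # volume in cubic micrometers

# Hypothetical measurement: a 620 um x 540 um tumor spheroid.
print(f"{spheroid_volume(620.0, 540.0):.3e} um^3")
```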
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
A Novel Method for Localizing Reporter Fluorescent Beads Near the Cell Culture Surface for Traction Force Microscopy
Authors: Samantha G. Knoll, M. Yakut Ali, M. Taher A. Saif.
Institutions: University of Illinois at Urbana-Champaign.
Polyacrylamide (PA) gels have long been used as a platform to study cell traction forces due to ease of fabrication and the ability to tune their elastic properties. When the substrate is coated with an extracellular matrix protein, cells adhere to the gel and apply forces, causing the gel to deform. The deformation depends on the cell traction and the elastic properties of the gel. If the deformation field of the surface is known, surface traction can be calculated using elasticity theory. Gel deformation is commonly measured by embedding fluorescent marker beads uniformly into the gel. The probes are displaced as the gel deforms. The probes near the surface of the gel are tracked. The displacements reported by these probes are treated as surface displacements, and their depths from the surface are ignored. This assumption introduces error into traction force evaluations. For precise measurement of cell forces, it is critical for the location of the beads to be known. We have developed a technique that utilizes simple chemistry to confine fluorescent marker beads, 0.1 and 1 µm in diameter, in PA gels, within 1.6 μm of the surface. We coat a coverslip with poly-D-lysine (PDL) and fluorescent beads. PA gel solution is then sandwiched between the coverslip and an adherent surface. The fluorescent beads transfer to the gel solution during curing. After polymerization, the PA gel contains fluorescent beads on a plane close to the gel surface.
Bioengineering, Issue 91, cell mechanics, polyacrylamide (PA) gel, traction force microscopy, fluorescent beads, poly-D-lysine (PDL), cell culture surface
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Models and Methods to Evaluate Transport of Drug Delivery Systems Across Cellular Barriers
Authors: Rasa Ghaffarian, Silvia Muro.
Institutions: University of Maryland, University of Maryland.
Sub-micrometer carriers (nanocarriers; NCs) enhance efficacy of drugs by improving solubility, stability, circulation time, targeting, and release. Additionally, traversing cellular barriers in the body is crucial for both oral delivery of therapeutic NCs into the circulation and transport from the blood into tissues, where intervention is needed. NC transport across cellular barriers is achieved by: (i) the paracellular route, via transient disruption of the junctions that interlock adjacent cells, or (ii) the transcellular route, where materials are internalized by endocytosis, transported across the cell body, and secreted at the opposite cell surface (transcytosis). Delivery across cellular barriers can be facilitated by coupling therapeutics or their carriers with targeting agents that bind specifically to cell-surface markers involved in transport. Here, we provide methods to measure the extent and mechanism of NC transport across a model cell barrier, which consists of a monolayer of gastrointestinal (GI) epithelial cells grown on a porous membrane located in a transwell insert. Formation of a permeability barrier is confirmed by measuring transepithelial electrical resistance (TEER), transepithelial transport of a control substance, and immunostaining of tight junctions. As an example, ~200 nm polymer NCs are used, which carry a therapeutic cargo and are coated with an antibody that targets a cell-surface determinant. The antibody or therapeutic cargo is labeled with 125I for radioisotope tracing and labeled NCs are added to the upper chamber over the cell monolayer for varying periods of time. NCs associated with the cells and/or transported to the underlying chamber can be detected. Measurement of free 125I allows subtraction of the degraded fraction. The paracellular route is assessed by determining potential changes caused by NC transport to the barrier parameters described above. Transcellular transport is determined by addressing the effect of modulating endocytosis and transcytosis pathways.
Bioengineering, Issue 80, Antigens, Enzymes, Biological Therapy, bioengineering (general), Pharmaceutical Preparations, Macromolecular Substances, Therapeutics, Digestive System and Oral Physiological Phenomena, Biological Phenomena, Cell Physiological Phenomena, drug delivery systems, targeted nanocarriers, transcellular transport, epithelial cells, tight junctions, transepithelial electrical resistance, endocytosis, transcytosis, radioisotope tracing, immunostaining
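One quantity commonly derived from transwell transport data of this kind is the apparent permeability coefficient. The sketch below shows that standard arithmetic, P_app = (dQ/dt)/(A·C0), with hypothetical numbers; it is generic transwell bookkeeping rather than the radioisotope-tracing specifics of this protocol.

```python
# Standard transwell permeability arithmetic (generic, not specific to this protocol):
# apparent permeability P_app = (dQ/dt) / (A * C0), where dQ/dt is the rate of
# appearance of the tracer in the receiver chamber, A is the insert area, and C0
# is the initial donor concentration. All values below are hypothetical.

dq_dt = 0.8e-9          # mol/s appearing in the basolateral (receiver) chamber
area_cm2 = 1.12         # growth area of a typical 12-well transwell insert, cm^2
c0 = 1.0e-6             # initial apical (donor) concentration, mol/mL

p_app = dq_dt / (area_cm2 * c0)    # (mol/s) / (cm^2 * mol/cm^3) = cm/s
print(f"P_app = {p_app:.2e} cm/s")
```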
OLIgo Mass Profiling (OLIMP) of Extracellular Polysaccharides
Authors: Markus Günl, Sascha Gille, Markus Pauly.
Institutions: University of California, Berkeley, University of California, Berkeley.
The direct contact of cells to the environment is mediated in many organisms by an extracellular matrix. One common aspect of extracellular matrices is that they contain complex sugar moieties in the form of glycoproteins, proteoglycans, and/or polysaccharides. Examples include the extracellular matrix of human and animal cells, consisting mainly of fibrillar proteins and proteoglycans, the polysaccharide-based cell walls of plants and fungi, and the proteoglycan/glycolipid-based cell walls of bacteria. All these glycostructures play vital roles in cell-to-cell and cell-to-environment communication and signalling. An extraordinarily complex example of an extracellular matrix is present in the walls of higher plant cells. Their wall is made almost entirely of sugars, up to 75% dry weight, and consists of the most abundant biopolymers present on this planet. Therefore, research is being conducted on how best to utilize these materials as a carbon-neutral renewable resource to replace petrochemicals derived from fossil fuel. The main challenge for fuel conversion remains the recalcitrance of walls to enzymatic or chemical degradation due to the unique glycostructures present in this unique biocomposite. Here, we present a method for the rapid and sensitive analysis of plant cell wall glycostructures. This method, OLIgo Mass Profiling (OLIMP), is based on the enzymatic release of oligosaccharides from wall materials using specific glycosylhydrolases and subsequent analysis of the solubilized oligosaccharide mixtures by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/MS)1 (Figure 1). OLIMP requires walls of only 5,000 cells for a complete analysis, can be performed on the tissue itself2, and is amenable to high-throughput analyses3. While the absolute amount of the solubilized oligosaccharides cannot be determined by OLIMP, the relative abundance of the various oligosaccharide ions can be delineated from the mass spectra, giving insights into the substitution pattern of the native polysaccharide present in the wall. OLIMP can be used to analyze a wide variety of wall polymers, limited only by the availability of specific enzymes4. For example, for the analysis of polymers present in the plant cell wall, enzymes are available to analyse the hemicellulose xyloglucan, using a xyloglucanase5, 11, 12, 13, xylan, using an endo-β-(1-4)-xylanase6,7, or pectic polysaccharides, using a combination of a polygalacturonase and a methylesterase8. Furthermore, using the same principles of OLIMP, glycosylhydrolase and even glycosyltransferase activities can be monitored and determined9.
Plant Biology, Issue 40, Extracellular matrix, cell walls, polysaccharides, glycosylhydrolase, MALDI-TOF mass spectrometry
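Because OLIMP yields relative rather than absolute abundances, a typical post-processing step is to normalize each assigned oligosaccharide ion intensity by the summed intensity of all assigned ions. The sketch below shows that normalization; the m/z and intensity values are hypothetical placeholders, not data from the article.

```python
# Relative-abundance normalization of assigned MALDI-TOF oligosaccharide ions, as
# typically done for OLIMP-style profiling. Peak list below is hypothetical.

peaks = {               # hypothetical m/z of assigned oligosaccharide ions : intensity
    1085.4: 1200.0,
    1247.5: 3400.0,
    1393.6: 2100.0,
    1555.6: 900.0,
}

total = sum(peaks.values())
for mz, intensity in sorted(peaks.items()):
    print(f"m/z {mz:7.1f}: {100.0 * intensity / total:5.1f} % of assigned ion current")
```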
Mapping Cortical Dynamics Using Simultaneous MEG/EEG and Anatomically-constrained Minimum-norm Estimates: an Auditory Attention Example
Authors: Adrian K.C. Lee, Eric Larson, Ross K. Maddox.
Institutions: University of Washington.
Magneto- and electroencephalography (MEG/EEG) are neuroimaging techniques that provide a high temporal resolution particularly suitable to investigate the cortical networks involved in dynamical perceptual and cognitive tasks, such as attending to different sounds in a cocktail party. Many past studies have employed data recorded at the sensor level only, i.e., the magnetic fields or the electric potentials recorded outside and on the scalp, and have usually focused on activity that is time-locked to the stimulus presentation. This type of event-related field / potential analysis is particularly useful when there are only a small number of distinct dipolar patterns that can be isolated and identified in space and time. Alternatively, by utilizing anatomical information, these distinct field patterns can be localized as current sources on the cortex. However, for a more sustained response that may not be time-locked to a specific stimulus (e.g., in preparation for listening to one of the two simultaneously presented spoken digits based on the cued auditory feature) or may be distributed across multiple spatial locations unknown a priori, the recruitment of a distributed cortical network may not be adequately captured by using a limited number of focal sources. Here, we describe a procedure that employs individual anatomical MRI data to establish a relationship between the sensor information and the dipole activation on the cortex through the use of minimum-norm estimates (MNE). This inverse imaging approach provides us a tool for distributed source analysis. For illustrative purposes, we will describe all procedures using FreeSurfer and MNE software, both freely available. We will summarize the MRI sequences and analysis steps required to produce a forward model that enables us to relate the expected field pattern caused by the dipoles distributed on the cortex onto the M/EEG sensors. Next, we will step through the necessary processes that facilitate us in denoising the sensor data from environmental and physiological contaminants. We will then outline the procedure for combining and mapping MEG/EEG sensor data onto the cortical space, thereby producing a family of time-series of cortical dipole activation on the brain surface (or "brain movies") related to each experimental condition. Finally, we will highlight a few statistical techniques that enable us to make scientific inference across a subject population (i.e., perform group-level analysis) based on a common cortical coordinate space.
Neuroscience, Issue 68, Magnetoencephalography, MEG, Electroencephalography, EEG, audition, attention, inverse imaging
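For readers who use the Python port of the MNE tools mentioned above, the following minimal sketch shows the distributed minimum-norm step (forward model and noise covariance to inverse operator to cortical source time courses). File names are placeholders, preprocessing and forward modeling are assumed to be done already, and parameter choices such as loose orientation and the SNR-based regularization are generic defaults rather than the authors' settings.

```python
# Minimal sketch of a distributed (minimum-norm) source estimate using MNE-Python.
# File names are placeholders; filtering, SSP/ICA, epoching, and the forward model
# (BEM + source space from FreeSurfer anatomy) are assumed to exist already.
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

evoked = mne.read_evokeds("subject01-ave.fif", condition=0)   # averaged response
noise_cov = mne.read_cov("subject01-cov.fif")                 # baseline/empty-room covariance
fwd = mne.read_forward_solution("subject01-fwd.fif")          # forward model

inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
stc = apply_inverse(evoked, inv, lambda2=1.0 / 9.0, method="MNE")  # lambda2 = 1/SNR^2, SNR ~ 3

stc.save("subject01_mne")   # cortical "brain movie" of dipole amplitudes over time
```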
Trajectory Data Analyses for Pedestrian Space-time Activity Study
Authors: Feng Qi, Fei Du.
Institutions: Kean University, University of Wisconsin-Madison.
It is well recognized that human movement in the spatial and temporal dimensions has a direct influence on disease transmission1-3. An infectious disease typically spreads via contact between infected and susceptible individuals in their overlapped activity spaces. Therefore, daily mobility-activity information can be used as an indicator to measure exposures to risk factors of infection. However, a major difficulty, and thus the reason for the paucity of studies of infectious disease transmission at the micro scale, arises from the lack of detailed individual mobility data. Previously, in transportation and tourism research, detailed space-time activity data often relied on the time-space diary technique, which requires subjects to actively record their activities in time and space. This is highly demanding for the participants, and their degree of collaboration greatly affects the quality of the data4. Modern technologies such as GPS and mobile communications have made possible the automatic collection of trajectory data. The data collected, however, are not ideal for modeling human space-time activities, limited by the accuracies of existing devices. There is also no readily available tool for efficient processing of the data for human behavior study. We present here a suite of methods and an integrated ArcGIS desktop-based visual interface for the pre-processing and spatiotemporal analysis of trajectory data. We provide examples of how such processing may be used to model human space-time activities, especially with error-rich pedestrian trajectory data, that could be useful in public health studies such as infectious disease transmission modeling. The procedure presented includes pre-processing, trajectory segmentation, activity space characterization, density estimation and visualization, and a few other exploratory analysis methods. Pre-processing is the cleaning of noisy raw trajectory data. We introduce an interactive visual pre-processing interface as well as an automatic module. Trajectory segmentation5 involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as the activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping6 and density volume rendering7. We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
Environmental Sciences, Issue 72, Computer Science, Behavior, Infectious Diseases, Geography, Cartography, Data Display, Disease Outbreaks, cartography, human behavior, Trajectory data, space-time activity, GPS, GIS, ArcGIS, spatiotemporal analysis, visualization, segmentation, density surface, density volume, exploratory data analysis, modelling
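As a small illustration of the activity-space characterization mentioned above, the sketch below computes an activity radius for a pedestrian track as the mean distance of GPS fixes from the track centroid, using a local equirectangular projection. The coordinates are hypothetical, and the published workflow performs this and related analyses inside ArcGIS rather than in standalone code.

```python
# Toy activity-radius calculation for a pedestrian GPS track: mean distance of fixes
# from the track centroid. Coordinates are hypothetical lon/lat pairs; a local
# equirectangular approximation to meters is adequate for small spatial extents.
import math

track = [(-74.229, 40.677), (-74.227, 40.678), (-74.225, 40.676), (-74.228, 40.675)]

def to_meters(lon, lat, lon0, lat0):
    dx = math.radians(lon - lon0) * 6371000.0 * math.cos(math.radians(lat0))
    dy = math.radians(lat - lat0) * 6371000.0
    return dx, dy

lon0 = sum(p[0] for p in track) / len(track)
lat0 = sum(p[1] for p in track) / len(track)
distances = [math.hypot(*to_meters(lon, lat, lon0, lat0)) for lon, lat in track]
print(f"activity radius ~ {sum(distances) / len(distances):.1f} m")
```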
Direct Imaging of ER Calcium with Targeted-Esterase Induced Dye Loading (TED)
Authors: Samira Samtleben, Juliane Jaepel, Caroline Fecher, Thomas Andreska, Markus Rehberg, Robert Blum.
Institutions: University of Wuerzburg, Max Planck Institute of Neurobiology, Martinsried, Ludwig-Maximilians University of Munich.
Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca2+ indicators have become popular. Here we demonstrate TED (targeted-esterase induced dye loading), a method to improve the release of Ca2+ indicator dyes in the ER lumen of different cell types. To date, TED has been used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of a high carboxylesterase activity to the ER lumen using vector constructs that express carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca2+ indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM. The esterase activity in the ER cleaves off hydrophobic side chains from the AM form of the Ca2+ indicator, and a hydrophilic fluorescent dye/Ca2+ complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed at an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescence intensity over time is determined by calculation of ΔF/F0.
Cellular Biology, Issue 75, Neurobiology, Neuroscience, Molecular Biology, Biochemistry, Biomedical Engineering, Bioengineering, Virology, Medicine, Anatomy, Physiology, Surgery, Endoplasmic Reticulum, ER, Calcium Signaling, calcium store, calcium imaging, calcium indicator, metabotropic signaling, Ca2+, neurons, cells, mouse, animal model, cell culture, targeted esterase induced dye loading, imaging
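The final quantification step described above, ΔF/F0, reduces to a few lines of array arithmetic. In the sketch below the baseline F0 is taken as the mean of the pre-stimulation frames; the region-of-interest intensity values are hypothetical.

```python
# dF/F0 for a region-of-interest trace, with F0 taken as the mean of baseline frames.
# Frame values are hypothetical mean ROI intensities.
import numpy as np

roi_trace = np.array([100.0, 102.0, 99.0, 101.0,       # baseline frames
                      60.0, 55.0, 70.0, 85.0, 95.0])   # ER Ca2+ release and refilling
f0 = roi_trace[:4].mean()
dff = (roi_trace - f0) / f0
print(np.round(dff, 3))   # negative values = store release; recovery toward 0 = refilling
```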
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on the average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
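The first analysis step described above, estimating the local orientation of breast tissue patterns with Gabor filters, can be prototyped with a generic filter bank as in the sketch below. The test image, filter frequency, and orientation step are assumptions for illustration, and the published method additionally uses phase portraits, fractal dimension, and an angular-spread measure that are not shown here.

```python
# Rough illustration of local-orientation estimation with a Gabor filter bank using
# scikit-image. Generic sketch only, not the authors' implementation.
import numpy as np
from skimage import data, filters

image = data.camera().astype(float)         # stand-in for a mammographic region of interest
thetas = np.deg2rad(np.arange(0, 180, 15))  # orientation bank in 15 degree steps

responses = []
for theta in thetas:
    real, imag = filters.gabor(image, frequency=0.1, theta=theta)
    responses.append(np.hypot(real, imag))  # magnitude of the filter response

stack = np.stack(responses)                          # (n_orientations, H, W)
orientation_map = thetas[np.argmax(stack, axis=0)]   # dominant orientation per pixel
print(orientation_map.shape, orientation_map.min(), orientation_map.max())
```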
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
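The core arithmetic behind recasting simplified-model limits is the expected signal yield N = σ·A·ε·L per signal region, compared against the observed upper limit on signal events in that region. The sketch below shows that comparison with hypothetical acceptances, efficiencies, and limits, not actual ATLAS or CMS numbers.

```python
# Basic recasting arithmetic: a model point is excluded if its expected yield,
# N = sigma * acceptance * efficiency * luminosity, exceeds the observed 95% CL
# upper limit on signal events in any signal region. All numbers are hypothetical.

luminosity_fb = 20.0          # integrated luminosity in fb^-1
signal_regions = [
    # (name, acceptance, efficiency, observed 95% CL upper limit on signal events)
    ("SR-2jet", 0.12, 0.70, 25.0),
    ("SR-4jet", 0.05, 0.65, 8.0),
]

def is_excluded(cross_section_fb):
    for name, acc, eff, n_up in signal_regions:
        expected = cross_section_fb * acc * eff * luminosity_fb
        if expected > n_up:
            return True, name, expected
    return False, None, None

for sigma in (1.0, 5.0, 20.0):   # candidate cross sections in fb
    print(sigma, "fb ->", is_excluded(sigma))
```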
Process of Making Three-dimensional Microstructures using Vaporization of a Sacrificial Component
Authors: Du T. Nguyen, Y. T. Leho, Aaron P. Esser-Kahn.
Institutions: University of California, Irvine, University of California, Irvine.
Vascular structures in natural systems are able to provide high mass transport through high surface areas and optimized structure. Few synthetic material fabrication techniques are able to mimic the complexity of these structures while maintaining scalability. The Vaporization of a Sacrificial Component (VaSC) process is able to do so. This process uses sacrificial fibers as a template to form hollow, cylindrical microchannels embedded within a matrix. Tin (II) oxalate (SnOx) is embedded within poly(lactic) acid (PLA) fibers which facilitates the use of this process. The SnOx catalyzes the depolymerization of the PLA fibers at lower temperatures. The lactic acid monomers are gaseous at these temperatures and can be removed from the embedded matrix at temperatures that do not damage the matrix. Here we show a method for aligning these fibers using micromachined plates and a tensioning device to create complex patterns of three-dimensionally arrayed microchannels. The process allows the exploration of virtually any arrangement of fiber topologies and structures.
Physics, Issue 81, Biomedical Engineering, Chemical Engineering, Silicone Elastomers, Micro-Electrical-Mechanical Systems, Biomimetic Materials, chemical processing (general), materials (general), heat exchangers (aerospace applications), mass transfer, Massive microfabrication, high surface area structures, 3-dimensional micro exchange devices, biomimetics
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM, a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Modeling and Imaging 3-Dimensional Collective Cell Invasion
Authors: Rebecca W. Scott, Diane Crighton, Michael F. Olson.
Institutions: University of Strathclyde , The Beatson Institute for Cancer Research.
A defining characteristic of cancer malignancy is invasion and metastasis 1. In some cancers (e.g. glioma 2), local invasion into surrounding healthy tissue is the root cause of disease and death. For other cancers (e.g. breast, lung, etc.), it is the process of metastasis, in which tumor cells move from a primary tumor mass, colonize distal sites and ultimately contribute to organ failure, that eventually leads to morbidity and mortality 3. It has been estimated that invasion and metastasis are responsible for 90% of cancer deaths 4. As a result, there has been intense interest in identifying the molecular processes and critical protein mediators of invasion and metastasis for the purposes of improving diagnosis and treatment 5. A challenge for cancer scientists is to develop invasion assays that sufficiently resemble the in vivo situation to enable accurate disease modeling 6. Two-dimensional cell motility assays are only informative about one aspect of invasion and do not take into account extracellular matrix (ECM) protein remodeling which is also a critical element. Recently, research has refined our understanding of tumor cell invasion and revealed that individual cells may move by elongated or rounded modes 7. In addition, there has been greater appreciation of the contribution of collective invasion, in which cells invade in strands, sheets and clusters, particularly in highly differentiated tumors that maintain epithelial characteristics, to the spread of cancer 8. We present a refined method 9 for examining the contributions of candidate proteins to collective invasion 10. In particular, by engineering separate pools of cells to express different fluorescent proteins, it is possible to molecularly dissect the activities and proteins required in leading cells versus those required in following cells. The use of RNAi provides the molecular tool to experimentally disassemble the processes involved in individual cell invasion as well as in different positions of collective invasion. In this procedure, mixtures of fluorescently-labeled cells are plated on the bottom of a Transwell insert previously filled with Matrigel ECM protein, then allowed to invade "upwards" through the filter and into the Matrigel. Reconstruction of z-series image stacks, obtained by confocal imaging, into three-dimensional representations allows for visualization of collectively invading strands and analysis of the representation of fluorescently-labeled cells in leading versus following positions.
Medicine, Issue 58, cancer, cell invasion, imaging, retroviral labeling, RNAi, 3D, Matrix, Matrigel, ECM
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Authors: Viktor Martyanov, Robert H. Gross.
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8. The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well. Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor. Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site, which also includes a "Sample Search" button that allows the user to perform a trial run. SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
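As a toy version of the over-representation idea underlying SCOPE's scoring, the sketch below asks how surprising an observed motif count is in a coregulated gene set given a background occurrence probability, using a binomial tail. The gene counts and background probability are hypothetical, and SCOPE's actual component algorithms (BEAM, PRISM, SPACER) score motifs in a more sophisticated way.

```python
# Toy over-representation score for a candidate motif: the binomial tail probability
# of seeing at least the observed number of motif-containing promoters in the set,
# given a background per-promoter occurrence probability. Values are hypothetical.
from scipy.stats import binom

n_genes_in_set = 40        # coregulated genes analyzed
n_with_motif = 22          # genes in the set containing the candidate motif
p_background = 0.15        # chance of >= 1 occurrence in a random promoter (assumed)

p_value = binom.sf(n_with_motif - 1, n_genes_in_set, p_background)  # P(X >= observed)
print(f"over-representation p-value ~ {p_value:.2e}")
```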
Electrophoretic Separation of Proteins
Authors: Bulbul Chakavarti, Deb Chakavarti.
Institutions: Keck Graduate Institute of Applied Life Sciences.
Electrophoresis is used to separate complex mixtures of proteins (e.g., from cells, subcellular fractions, column fractions, or immunoprecipitates), to investigate subunit compositions, and to verify homogeneity of protein samples. It can also serve to purify proteins for use in further applications. In polyacrylamide gel electrophoresis, proteins migrate in response to an electrical field through pores in a polyacrylamide gel matrix; pore size decreases with increasing acrylamide concentration. The combination of pore size and protein charge, size, and shape determines the migration rate of the protein. In this unit, the standard Laemmli method is described for discontinuous gel electrophoresis under denaturing conditions, i.e., in the presence of sodium dodecyl sulfate (SDS).
Basic Protocols, Issue 16, Current Protocols Wiley, Electrophoresis, Biochemistry, Protein Separation, Polyacrylamide Gel Electrophoresis, PAGE
MALDI Sample Preparation: the Ultra Thin Layer Method
Authors: David Fenyo, Qingjun Wang, Jeffrey A. DeGrasse, Julio C. Padovan, Martine Cadene, Brian T. Chait.
Institutions: Rockefeller University.
This video demonstrates the preparation of an ultra-thin matrix/analyte layer for analyzing peptides and proteins by Matrix-Assisted Laser Desorption Ionization Mass Spectrometry (MALDI-MS)1,2. The ultra-thin layer method involves the production of a substrate layer of matrix crystals (alpha-cyano-4-hydroxycinnamic acid) on the sample plate, which serves as a seeding ground for subsequent crystallization of a matrix/analyte mixture. Advantages of the ultra-thin layer method over other sample deposition approaches (e.g. dried droplet) are that it provides (i) greater tolerance to impurities such as salts and detergents, (ii) better resolution, and (iii) higher spatial uniformity. This method is especially useful for the accurate mass determination of proteins. The protocol was initially developed and optimized for the analysis of membrane proteins and used to successfully analyze ion channels, metabolite transporters, and receptors containing between 2 and 12 transmembrane domains2. Since the original publication, it has also been shown to be equally useful for the analysis of soluble proteins. Indeed, we have used it for a large number of proteins having a wide range of properties, including those with molecular masses as high as 380 kDa3. It is currently our method of choice for the molecular mass analysis of all proteins. The described procedure consistently produces high-quality spectra, and it is sensitive, robust, and easy to implement.
Cellular Biology, Issue 3, mass-spectrometry, ultra-thin layer, MALDI, MS, proteins

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In both situations, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.