JoVE Visualize
Pubmed Article
A combinatorial model of malware diffusion via Bluetooth connections.
PLoS ONE
PUBLISHED: 02-18-2013
We outline here the mathematical expression of a diffusion model for cellphone malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed-form (more complex but efficiently computable) expressions.
Authors: William R. Brant, Siegbert Schmid, Guodong Du, Helen E. A. Brand, Wei Kong Pang, Vanessa K. Peterson, Zaiping Guo, Neeraj Sharma.
Published: 11-10-2014
ABSTRACT
Li-ion batteries are widely used in portable electronic devices and are considered promising candidates for higher-energy applications such as electric vehicles.1,2 However, many challenges, such as energy density and battery lifetimes, need to be overcome before this particular battery technology can be widely implemented in such applications.3 To address these challenges, we outline a method that uses in situ neutron powder diffraction (NPD) to probe the crystal structure of electrodes undergoing electrochemical cycling (charge/discharge) in a battery. NPD data help determine the underlying structural mechanism responsible for a range of electrode properties, and this information can direct the development of better electrodes and batteries. We briefly review six types of battery designs custom-made for NPD experiments and detail the method to construct the ‘roll-over’ cell that we have successfully used on the high-intensity NPD instrument, WOMBAT, at the Australian Nuclear Science and Technology Organisation (ANSTO). The design considerations and materials used for cell construction are discussed in conjunction with aspects of the actual in situ NPD experiment, and initial directions are presented on how to analyze such complex in situ data.
Simultaneous Scalp Electroencephalography (EEG), Electromyography (EMG), and Whole-body Segmental Inertial Recording for Multi-modal Neural Decoding
Authors: Thomas C. Bulea, Atilla Kilicarslan, Recep Ozdemir, William H. Paloski, Jose L. Contreras-Vidal.
Institutions: National Institutes of Health, University of Houston.
Recent studies support the involvement of supraspinal networks in the control of bipedal human walking. Part of this evidence encompasses studies, including our previous work, demonstrating that gait kinematics and limb coordination during treadmill walking can be inferred from the scalp electroencephalogram (EEG) with reasonably high decoding accuracies. These results provide impetus for the development of non-invasive brain-machine interface (BMI) systems for use in restoration and/or augmentation of gait, a primary goal of rehabilitation research. To date, studies examining EEG decoding of activity during gait have been limited to treadmill walking in a controlled environment. However, to be practically viable, a BMI system must be applicable to everyday locomotor tasks such as over-ground walking and turning. Here, we present a novel protocol for non-invasive collection of brain activity (EEG), muscle activity (electromyography (EMG)), and whole-body kinematic data (head, torso, and limb trajectories) during both treadmill and over-ground walking tasks. By collecting these data in an uncontrolled environment, insight can be gained regarding the feasibility of decoding unconstrained gait and surface EMG from scalp EEG.
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Biomedical Engineering, Molecular Biology, Electroencephalography, EEG, Electromyography, EMG, electroencephalograph, gait, brain-computer interface, brain machine interface, neural decoding, over-ground walking, robotic gait, brain, imaging, clinical techniques
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Authors: Nikki M. Curthoys, Michael J. Mlodzianoski, Dahan Kim, Samuel T. Hess.
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
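To illustrate the localization step at the heart of FPALM-type imaging, the sketch below renders one simulated molecule with a Gaussian point-spread function and recovers its sub-pixel position from an intensity-weighted centroid — a simplified stand-in for the Gaussian fitting used in practice, with purely hypothetical numbers. The localization precision scales roughly as the PSF width divided by the square root of the number of detected photons, which is how ~10-30 nm precision emerges from a ~250 nm spot.

```python
import numpy as np

def localize_spot(spot):
    """Intensity-weighted centroid of a single-molecule image
    (a simplified stand-in for the Gaussian fitting used in practice)."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# Render one molecule at a sub-pixel position with a Gaussian PSF
true_y, true_x, psf_sigma = 7.3, 6.6, 1.5     # hypothetical values, pixels
ys, xs = np.indices((15, 15))
spot = np.exp(-((ys - true_y) ** 2 + (xs - true_x) ** 2) / (2 * psf_sigma ** 2))

y_hat, x_hat = localize_spot(spot)            # recovers ~(7.3, 6.6)
```

Repeating this over thousands of sparse activation frames, and plotting each recovered coordinate, is what assembles the final super-resolved image.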
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
The Xenopus Oocyte Cut-open Vaseline Gap Voltage-clamp Technique With Fluorometry
Authors: Michael W. Rudokas, Zoltan Varga, Angela R. Schubert, Alexandra B. Asaro, Jonathan R. Silva.
Institutions: Washington University in St. Louis.
The cut-open oocyte Vaseline gap (COVG) voltage clamp technique allows for analysis of electrophysiological and kinetic properties of heterologous ion channels in oocytes. Recordings from the cut-open setup are particularly useful for resolving low magnitude gating currents, rapid ionic current activation, and deactivation. The main benefits over the two-electrode voltage clamp (TEVC) technique include increased clamp speed, improved signal-to-noise ratio, and the ability to modulate the intracellular and extracellular milieu. Here, we employ the human cardiac sodium channel (hNaV1.5), expressed in Xenopus oocytes, to demonstrate the cut-open setup and protocol as well as modifications that are required to add voltage clamp fluorometry capability. The properties of fast activating ion channels, such as hNaV1.5, cannot be fully resolved near room temperature using TEVC, in which the entirety of the oocyte membrane is clamped, making voltage control difficult. However, in the cut-open technique, isolation of only a small portion of the cell membrane allows for the rapid clamping required to accurately record fast kinetics while preventing channel run-down associated with patch clamp techniques. In conjunction with the COVG technique, ion channel kinetics and electrophysiological properties can be further assayed by using voltage clamp fluorometry, where protein motion is tracked via cysteine conjugation of extracellularly applied fluorophores, insertion of genetically encoded fluorescent proteins, or the incorporation of unnatural amino acids into the region of interest1. This additional data yields kinetic information about voltage-dependent conformational rearrangements of the protein via changes in the microenvironment surrounding the fluorescent molecule.
Developmental Biology, Issue 85, Voltage clamp, Cut-open, Oocyte, Voltage Clamp Fluorometry, Sodium Channels, Ionic Currents, Xenopus laevis
Easy Measurement of Diffusion Coefficients of EGFP-tagged Plasma Membrane Proteins Using k-Space Image Correlation Spectroscopy
Authors: Eva C. Arnspang, Jennifer S. Koffman, Saw Marlar, Paul W. Wiseman, Lene N. Nejsum.
Institutions: Aarhus University, McGill University.
Lateral diffusion and compartmentalization of plasma membrane proteins are tightly regulated in cells; studying these processes will therefore reveal new insights into plasma membrane protein function and regulation. Recently, k-Space Image Correlation Spectroscopy (kICS)1 was developed to enable routine measurements of diffusion coefficients directly from images of fluorescently tagged plasma membrane proteins, while avoiding systematic biases introduced by probe photophysics. Although the theoretical basis for the analysis is complex, the method can be implemented by nonexperts using freely available code to measure diffusion coefficients of proteins. kICS calculates a time correlation function from a fluorescence microscopy image stack after Fourier transformation of each image to reciprocal (k-) space. Subsequently, circular averaging, a natural logarithm transform, and linear fits to the correlation function yield the diffusion coefficient. This paper provides a step-by-step guide to the image analysis and measurement of diffusion coefficients via kICS. First, a high frame rate image sequence of a fluorescently labeled plasma membrane protein is acquired using a fluorescence microscope. Then, a region of interest (ROI) avoiding intracellular organelles, moving vesicles, or protruding membrane regions is selected. The ROI stack is imported into the freely available code, and several defined parameters (see Method section) are set for kICS analysis. The program then generates a "slope of slopes" plot from the k-space time correlation functions, and the diffusion coefficient is calculated from the slope of the plot. Below is a step-by-step kICS procedure to measure the diffusion coefficient of a membrane protein, using the renal water channel aquaporin-3 tagged with EGFP as a canonical example.
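The kICS pipeline can be sketched end-to-end on simulated data. The sketch below is a minimal illustration of the idea, not the published analysis code: it simulates freely diffusing particles, Fourier transforms each frame, forms the k-space time correlation, and extracts the diffusion coefficient from the "slope of slopes" plot. A direct linear fit over all k-vectors in a band stands in for explicit circular averaging, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulate an image stack of freely diffusing fluorescent particles ---
n_frames, size, n_part = 200, 64, 200
D_true, psf_sigma = 0.25, 1.5                 # px^2/frame, px (hypothetical)
pos = rng.uniform(0, size, (n_part, 2))
yy, xx = np.indices((size, size))
stack = np.empty((n_frames, size, size))
for t in range(n_frames):
    stack[t] = np.exp(-((yy - pos[:, 0, None, None]) ** 2 +
                        (xx - pos[:, 1, None, None]) ** 2)
                      / (2 * psf_sigma ** 2)).sum(axis=0)
    pos = (pos + rng.normal(0, np.sqrt(2 * D_true), pos.shape)) % size

# --- kICS analysis ---
F = np.fft.fft2(stack)                        # each frame to reciprocal (k-) space
f = 2 * np.pi * np.fft.fftfreq(size)          # spatial frequency, rad/px
k2 = f[:, None] ** 2 + f[None, :] ** 2        # |k|^2 on the FFT grid
band = (k2 > 0.3) & (k2 < 1.5)                # drop slow low-k and noisy high-k modes

phi0 = (F * F.conj()).real.mean(axis=0)       # zero-lag correlation
taus = np.arange(1, 6)
slopes = []
for tau in taus:
    phi = (F[:-tau] * F[tau:].conj()).real.mean(axis=0)
    ratio = phi[band] / phi0[band]
    ok = ratio > 0
    # For free diffusion ln[phi(k,tau)/phi(k,0)] = -D * tau * |k|^2;
    # fitting over all k-vectors in the band replaces circular averaging.
    slopes.append(np.polyfit(k2[band][ok], np.log(ratio[ok]), 1)[0])

D_est = -np.polyfit(taus, slopes, 1)[0]       # "slope of slopes" gives D
```

On this noise-free simulation the estimate lands close to the input diffusion coefficient; with real camera noise, the noise floor adds a k-dependent offset that the published method is designed to handle.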
Biophysics, Issue 87, Amino Acids, Peptides and Proteins, Computer Programming and Software, Diffusion coefficient, Aquaporin-3, k-Space Image Correlation Spectroscopy, Analysis
Generation of Myospheres From hESCs by Epigenetic Reprogramming
Authors: Sonia Albini, Pier Lorenzo Puri.
Institutions: Sanford-Burnham Institute for Medical Research, IRCCS Fondazione Santa Lucia.
Generation of a homogeneous and abundant population of skeletal muscle cells from human embryonic stem cells (hESCs) is a requirement for cell-based therapies and for a "disease in a dish" model of human neuromuscular diseases. Major hurdles, such as the low abundance and heterogeneity of the population of interest, as well as a lack of protocols for the formation of three-dimensional contractile structures, have limited the applications of stem cells for neuromuscular disorders. We have designed a protocol that overcomes these limits by ectopic introduction of defined factors in hESCs - the muscle determination factor MyoD and the SWI/SNF chromatin remodeling complex component BAF60C - that are able to reprogram hESCs into skeletal muscle cells. Here we describe the protocol established to generate hESC-derived myoblasts and promote their clustering into three-dimensional miniaturized structures (myospheres) that functionally mimic miniaturized skeletal muscles7.
Bioengineering, Issue 88, Tissues, Cells, Embryonic Structures, Musculoskeletal System, Musculoskeletal Diseases, hESC, epigenetics, Skeletal Myogenesis, Myosphere, Chromatin, Lentivirus, Infection
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Authors: Daniel T. Claiborne, Jessica L. Prince, Eric Hunter.
Institutions: Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection, and if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene, isolated at acute time points from subtype C-infected Zambians. This method uses restriction enzyme-based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination-based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 acutely infected, subtype C-positive Zambians, and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. 
Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
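Threshold-based (semi-)automated segmentation typically starts from a global intensity threshold. As a minimal, self-contained illustration — not the authors' pipeline — the sketch below implements Otsu's method, which picks the threshold that maximizes the between-class variance of the intensity histogram, and applies it to a synthetic two-phase image:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global threshold maximizing the between-class variance (Otsu)."""
    counts, edges = np.histogram(img, bins=nbins)
    p = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * centers)               # class-0 cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)                 # best upper bin for class 0
    return edges[k + 1]

# Synthetic two-phase slice: dim background with one bright structure
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (128, 128))
img[32:96, 32:96] = rng.normal(180, 10, (64, 64))
t = otsu_threshold(img)
binary = img > t                # binary mask, e.g. for surface rendering
```

A global threshold like this is only a starting point: low signal-to-noise or crowded features — the data set characteristics enumerated above — are exactly what pushes an analysis toward manual tracing or custom algorithms instead.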
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
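Source reconstruction of this kind solves an underdetermined inverse problem: many cortical sources, few sensors. A minimal Tikhonov-regularized minimum-norm sketch with a random toy leadfield is shown below — purely illustrative, since a real pipeline derives the leadfield from the individual or age-matched head model:

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=1e-2):
    """Minimum-norm inverse: argmin_x ||y - L x||^2 + lam ||x||^2,
    solved as x = L^T (L L^T + lam I)^{-1} y."""
    n_sensors = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_sensors)
    return L.T @ np.linalg.solve(gram, y)

# Toy example: 8 sensors, 20 candidate cortical sources (random leadfield)
rng = np.random.default_rng(0)
L = rng.normal(size=(8, 20))
x_true = np.zeros(20)
x_true[3] = 1.0                       # one active source
y = L @ x_true                        # simulated sensor measurement
x_hat = minimum_norm_estimate(L, y, lam=1e-6)
```

With many more sources than sensors the estimate cannot recover `x_true` uniquely; it returns the smallest-norm source distribution consistent with the data, which is why accurate head models matter so much for localization.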
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Analysis of Tubular Membrane Networks in Cardiac Myocytes from Atria and Ventricles
Authors: Eva Wagner, Sören Brandenburg, Tobias Kohl, Stephan E. Lehnart.
Institutions: Heart Research Center Goettingen, University Medical Center Goettingen, German Center for Cardiovascular Research (DZHK) partner site Goettingen, University of Maryland School of Medicine.
In cardiac myocytes a complex network of membrane tubules - the transverse-axial tubule system (TATS) - controls deep intracellular signaling functions. While the outer surface membrane and associated TATS membrane components appear to be continuous, there are substantial differences in lipid and protein content. In ventricular myocytes (VMs), certain TATS components are highly abundant contributing to rectilinear tubule networks and regular branching 3D architectures. It is thought that peripheral TATS components propagate action potentials from the cell surface to thousands of remote intracellular sarcoendoplasmic reticulum (SER) membrane contact domains, thereby activating intracellular Ca2+ release units (CRUs). In contrast to VMs, the organization and functional role of TATS membranes in atrial myocytes (AMs) is significantly different and much less understood. Taken together, quantitative structural characterization of TATS membrane networks in healthy and diseased myocytes is an essential prerequisite towards better understanding of functional plasticity and pathophysiological reorganization. Here, we present a strategic combination of protocols for direct quantitative analysis of TATS membrane networks in living VMs and AMs. For this, we accompany primary cell isolations of mouse VMs and/or AMs with critical quality control steps and direct membrane staining protocols for fluorescence imaging of TATS membranes. Using an optimized workflow for confocal or superresolution TATS image processing, binarized and skeletonized data are generated for quantitative analysis of the TATS network and its components. Unlike previously published indirect regional aggregate image analysis strategies, our protocols enable direct characterization of specific components and derive complex physiological properties of TATS membrane networks in living myocytes with high throughput and open access software tools. 
In summary, the combined protocol strategy can be readily applied for quantitative TATS network studies during physiological myocyte adaptation or disease changes, comparison of different cardiac or skeletal muscle cell types, phenotyping of transgenic models, and pharmacological or therapeutic interventions.
Bioengineering, Issue 92, cardiac myocyte, atria, ventricle, heart, primary cell isolation, fluorescence microscopy, membrane tubule, transverse-axial tubule system, image analysis, image processing, T-tubule, collagenase
Combining Magnetic Sorting of Mother Cells and Fluctuation Tests to Analyze Genome Instability During Mitotic Cell Aging in Saccharomyces cerevisiae
Authors: Melissa N. Patterson, Patrick H. Maxwell.
Institutions: Rensselaer Polytechnic Institute.
Saccharomyces cerevisiae has been an excellent model system for examining mechanisms and consequences of genome instability. Information gained from this yeast model is relevant to many organisms, including humans, since DNA repair and DNA damage response factors are well conserved across diverse species. However, S. cerevisiae has not yet been used to fully address whether the rate of accumulating mutations changes with increasing replicative (mitotic) age due to technical constraints. For instance, measurements of yeast replicative lifespan through micromanipulation involve very small populations of cells, which prohibit detection of rare mutations. Genetic methods to enrich for mother cells in populations by inducing death of daughter cells have been developed, but population sizes are still limited by the frequency with which random mutations that compromise the selection systems occur. The current protocol takes advantage of magnetic sorting of surface-labeled yeast mother cells to obtain large enough populations of aging mother cells to quantify rare mutations through phenotypic selections. Mutation rates, measured through fluctuation tests, and mutation frequencies are first established for young cells and used to predict the frequency of mutations in mother cells of various replicative ages. Mutation frequencies are then determined for sorted mother cells, and the age of the mother cells is determined using flow cytometry by staining with a fluorescent reagent that detects bud scars formed on their cell surfaces during cell division. Comparison of predicted mutation frequencies based on the number of cell divisions to the frequencies experimentally observed for mother cells of a given replicative age can then identify whether there are age-related changes in the rate of accumulating mutations. 
Variations of this basic protocol provide the means to investigate the influence of alterations in specific gene functions or specific environmental conditions on mutation accumulation to address mechanisms underlying genome instability during replicative aging.
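As a concrete illustration of the fluctuation-test arithmetic, the sketch below applies the classic Luria-Delbrück p0 method: the fraction of parallel cultures with zero mutants estimates exp(-m), where m is the expected number of mutations per culture, and dividing m by the number of cells gives the mutation rate. The colony counts and culture size are hypothetical:

```python
import math

def p0_mutation_rate(mutant_counts, cells_per_culture):
    """Luria-Delbruck p0 method: the fraction of parallel cultures with
    no mutants estimates exp(-m), where m is the expected number of
    mutations per culture; dividing m by the final cell number gives an
    estimate of the per-cell mutation rate."""
    p0 = sum(1 for c in mutant_counts if c == 0) / len(mutant_counts)
    m = -math.log(p0)                 # expected mutations per culture
    return m / cells_per_culture

# Hypothetical mutant colony counts from 10 parallel cultures
counts = [0, 0, 0, 1, 3, 0, 0, 12, 0, 0]
rate = p0_mutation_rate(counts, cells_per_culture=1e8)
```

The rate measured this way in young cells is what feeds the prediction step above: multiplying it by the number of divisions a sorted mother cell has undergone predicts her expected mutation frequency, and a measured excess over that prediction indicates an age-related change.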
Microbiology, Issue 92, Aging, mutations, genome instability, Saccharomyces cerevisiae, fluctuation test, magnetic sorting, mother cell, replicative aging
DTI of the Visual Pathway - White Matter Tracts and Cerebral Lesions
Authors: Ardian Hana, Andreas Husch, Vimal Raj Nitish Gunness, Christophe Berthold, Anisa Hana, Georges Dooms, Hans Boecher Schwarz, Frank Hertel.
Institutions: Centre Hospitalier de Luxembourg, University of Applied Sciences Trier, Erasmus Universiteit Rotterdam.
DTI is a technique that identifies white matter tracts (WMT) non-invasively in healthy and non-healthy patients using diffusion measurements. Like the visual pathways (VP), WMT are not visible on classical MRI or intra-operatively under the microscope. DTI helps neurosurgeons avoid destruction of the VP while removing lesions adjacent to these WMT. We performed DTI on fifty patients before and after surgery between March 2012 and January 2014. For navigation we used a 3D T1-weighted sequence. Additionally, we performed T2-weighted and DTI sequences. The parameters used were FOV: 200 x 200 mm, slice thickness: 2 mm, and acquisition matrix: 96 x 96, yielding nearly isotropic voxels of 2 x 2 x 2 mm. Axial MRI was carried out using 32 gradient directions and one b0 image. We used Echo-Planar Imaging (EPI) and ASSET parallel imaging with an acceleration factor of 2 and a b-value of 800 s/mm². The scanning time was less than 9 min. The DTI data obtained were processed using an FDA-approved surgical navigation system program that uses a straightforward fiber-tracking approach known as fiber assignment by continuous tracking (FACT). This is based on the propagation of lines between regions of interest (ROI) defined by a physician. A maximum angle of 50°, an FA start value of 0.10, and an ADC stop value of 0.20 mm²/s were the parameters used for tractography. There are some limitations to this technique. The limited acquisition time frame enforces trade-offs in image quality. Another important point not to be neglected is brain shift during surgery; for the latter, intra-operative MRI might be helpful. Furthermore, the risk of false positive or false negative tracts needs to be taken into account, which might compromise the final results.
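The FACT approach lends itself to a compact sketch: propagate a streamline by repeatedly stepping along the principal eigenvector of the nearest voxel, and stop when anisotropy falls below the FA threshold or the turn between steps exceeds the maximum angle. The FA 0.10 and 50° values below mirror the tractography parameters quoted in the abstract; the 2D direction field is a toy stand-in for real tensor data, not the navigation software's implementation:

```python
import numpy as np

def fact_track(dirs, fa, seed, step=0.5, fa_stop=0.10,
               max_angle_deg=50.0, max_steps=500):
    """Minimal FACT-style streamline: follow the principal eigenvector of
    the nearest voxel, stopping on low FA or an excessive turning angle."""
    pos = np.asarray(seed, dtype=float)
    prev = None
    path = [pos.copy()]
    for _ in range(max_steps):
        i, j = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= i < dirs.shape[0] and 0 <= j < dirs.shape[1]):
            break                          # left the volume
        if fa[i, j] < fa_stop:
            break                          # anisotropy too low to trust
        d = dirs[i, j].copy()
        if prev is not None:
            if np.dot(d, prev) < 0:
                d = -d                     # eigenvectors carry no sign
            angle = np.degrees(np.arccos(np.clip(np.dot(d, prev), -1, 1)))
            if angle > max_angle_deg:
                break                      # curvature threshold exceeded
        pos = pos + step * d
        prev = d
        path.append(pos.copy())
    return np.array(path)

# Toy 20x20 slice: a straight fiber bundle with uniform FA of 0.5
dirs = np.tile(np.array([0.0, 1.0]), (20, 20, 1))  # all voxels point along +j
fa = np.full((20, 20), 0.5)
path = fact_track(dirs, fa, seed=(10.0, 2.0))      # tracks straight across
```

The stopping rules are exactly where the clinical caveats above enter: noisy tensors can end a streamline early (false negative) or let it jump into a neighboring bundle (false positive).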
Medicine, Issue 90, Neurosurgery, brain, visual pathway, white matter tracts, visual cortex, optic chiasm, glioblastoma, meningioma, metastasis
From Fast Fluorescence Imaging to Molecular Diffusion Law on Live Cell Membranes in a Commercial Microscope
Authors: Carmine Di Rienzo, Enrico Gratton, Fabio Beltram, Francesco Cardarelli.
Institutions: Scuola Normale Superiore, Istituto Italiano di Tecnologia, University of California, Irvine.
It has become increasingly evident that the spatial distribution and the motion of membrane components like lipids and proteins are key factors in the regulation of many cellular functions. However, due to the fast dynamics and the tiny structures involved, a very high spatio-temporal resolution is required to capture the real behavior of molecules. Here we present the experimental protocol for studying the dynamics of fluorescently labeled plasma-membrane proteins and lipids in live cells with high spatiotemporal resolution. Notably, this approach does not need to track each molecule; instead, it calculates population behavior using all molecules in a given region of the membrane. The starting point is fast imaging of a given region on the membrane. Afterwards, a complete spatio-temporal autocorrelation function is calculated by correlating acquired images at increasing time delays, for example every 2, 3, ..., n repetitions. It can be demonstrated that the width of the peak of the spatial autocorrelation function increases at increasing time delays as a function of particle movement due to diffusion. Therefore, fitting the series of autocorrelation functions enables extraction of the actual protein mean square displacement from imaging (iMSD), here presented in the form of apparent diffusivity vs. average displacement. This yields a quantitative view of the average dynamics of single molecules with nanometer accuracy. By using a GFP-tagged variant of the transferrin receptor (TfR) and an ATTO488-labeled 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphoethanolamine (PPE), it is possible to observe the spatiotemporal regulation of protein and lipid diffusion in µm-sized membrane regions in the micro-to-millisecond time range.
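The width-fitting step can be illustrated on synthetic correlation functions. For free diffusion the spatial autocorrelation stays Gaussian and its variance grows as sigma^2(tau) = sigma0^2 + 4*D*tau, so fitting the width of each correlation curve and then fitting those widths against the time delay recovers D. All parameter values below are hypothetical, and real data would of course add noise to each curve:

```python
import numpy as np

# Synthetic circularly averaged correlation functions for free 2D diffusion:
# G(rho, tau) ~ exp(-rho^2 / sigma^2(tau)),  sigma^2(tau) = sigma0^2 + 4*D*tau
D_true, sigma0_sq = 0.5, 0.04        # um^2/s, um^2 (hypothetical values)
taus = np.array([0.005, 0.010, 0.015, 0.020, 0.025])   # time delays, s
rho = np.linspace(0.02, 0.6, 40)                        # spatial lag, um

variances = []
for tau in taus:
    s2 = sigma0_sq + 4 * D_true * tau
    G = np.exp(-rho ** 2 / s2) / s2  # amplitude drops as the peak spreads
    # ln G is linear in rho^2 with slope -1/sigma^2(tau)
    slope = np.polyfit(rho ** 2, np.log(G), 1)[0]
    variances.append(-1.0 / slope)

# iMSD(tau) = sigma^2(tau) - sigma0^2; the linear fit has slope 4*D
coef = np.polyfit(taus, np.array(variances), 1)
D_est, sigma0_est = coef[0] / 4.0, coef[1]
```

Deviations of sigma^2(tau) from this straight line are the informative part in practice: sub-linear growth reports transient confinement, which is how the apparent diffusivity vs. displacement plots mentioned above reveal membrane heterogeneity.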
Bioengineering, Issue 92, fluorescence, protein dynamics, lipid dynamics, membrane heterogeneity, transient confinement, single molecule, GFP
Improving the Success Rate of Protein Crystallization by Random Microseed Matrix Screening
Authors: Marisa Till, Alice Robson, Matthew J. Byrne, Asha V. Nair, Stefan A. Kolek, Patrick D. Shaw Stewart, Paul R. Race.
Institutions: University of Bristol, Douglas Instruments.
Random microseed matrix screening (rMMS) is a protein crystallization technique in which seed crystals are added to random screens. By increasing the likelihood that crystals will grow in the metastable zone of a protein's phase diagram, extra crystallization leads are often obtained, the quality of crystals produced may be increased, and a good supply of crystals for data collection and soaking experiments is provided. Here we describe a general method for rMMS that may be applied to either sitting drop or hanging drop vapor diffusion experiments, established either by hand or using liquid handling robotics, in 96-well or 24-well tray format.
Structural Biology, Issue 78, Crystallography, X-Ray, Biochemical Phenomena, Molecular Structure, Molecular Conformation, protein crystallization, seeding, protein structure
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
50476
Play Button
Microfluidic Co-culture of Epithelial Cells and Bacteria for Investigating Soluble Signal-mediated Interactions
Authors: Jeongyun Kim, Manjunath Hegde, Arul Jayaraman.
Institutions: Texas A&M University, Texas A&M University.
The human gastrointestinal (GI) tract is a unique environment in which intestinal epithelial cells and non-pathogenic (commensal) bacteria coexist. It has been proposed that the microenvironment that the pathogen encounters in the commensal layer is important in determining the extent of colonization. Current culture methods for investigating pathogen colonization are not well suited to investigating this hypothesis, as they do not enable co-culture of bacteria and epithelial cells in a manner that mimics the GI tract microenvironment. Here we describe a microfluidic co-culture model that enables independent culture of eukaryotic cells and bacteria and allows testing of the effect of the commensal microenvironment on pathogen colonization. The co-culture model is demonstrated by developing a commensal Escherichia coli biofilm among HeLa cells, followed by introduction of enterohemorrhagic E. coli (EHEC) into the commensal island, in a sequence that mimics the sequence of events in GI tract infection.
Microbiology, Issue 38, Host pathogen interactions, probiotics, inter-kingdom signaling
1749
Play Button
Determination of Mammalian Cell Counts, Cell Size and Cell Health Using the Moxi Z Mini Automated Cell Counter
Authors: Gregory M. Dittami, Manju Sethi, Richard D. Rabbitt, H. Edward Ayliffe.
Institutions: Orflo Technologies, University of Utah .
Particle and cell counting is used for a variety of applications including routine cell culture, hematological analysis, and industrial controls1-5. A critical breakthrough in cell/particle counting technologies was the development of the Coulter technique by Wallace Coulter over 50 years ago. The technique involves the application of an electric field across a micron-sized aperture and hydrodynamically focusing single particles through the aperture. The resulting occlusion of the aperture by the particles yields a measurable change in electric impedance that can be directly and precisely correlated to cell size/volume. The recognition of the approach as the benchmark in cell/particle counting stems from the extraordinary precision and accuracy of its particle sizing and counts, particularly as compared to manual and imaging-based technologies (accuracies on the order of 98% for Coulter counters versus 75-80% for manual and vision-based systems). This can be attributed to the fact that, unlike imaging-based approaches to cell counting, the Coulter technique makes a true three-dimensional (3-D) measurement of cells/particles, which dramatically reduces count interference from debris and clustering by calculating precise volumetric information about the cells/particles. Overall this provides a means for enumerating and sizing cells that is more accurate, less tedious, less time-consuming, and less subjective than other counting techniques6. Despite the prominence of the Coulter technique in cell counting, the cost and size of traditional instruments have prohibited its widespread use in routine biological studies. Although a less expensive Coulter-based instrument has been produced, it has limitations as compared to its more expensive counterparts in the correction for "coincidence events" in which two or more cells pass through the aperture and are measured simultaneously.
Another limitation of existing Coulter technologies is the lack of metrics on the overall health of cell samples. Consequently, additional techniques must often be used in conjunction with Coulter counting to assess cell viability. This extends experimental setup time and cost, since the traditional methods of viability assessment require cell staining and/or use of expensive and cumbersome equipment such as a flow cytometer. The Moxi Z mini automated cell counter, described here, is an ultra-small benchtop instrument that combines the accuracy of the Coulter Principle with a thin-film sensor technology to enable precise sizing and counting of particles ranging from 3-25 microns, depending on the cell counting cassette used. The Type M cassette can be used to count particles with average diameters of 4-25 microns (dynamic range 2-34 microns), and the Type S cassette can be used to count particles with an average diameter of 3-20 microns (dynamic range 2-26 microns). Since the system uses a volumetric measurement method, the 4-25 microns corresponds to a cell volume range of 34-8,180 fL and the 3-20 microns corresponds to a cell volume range of 14-4,200 fL, which is relevant when non-spherical particles are being measured. To perform mammalian cell counts using the Moxi Z, the cells to be counted are first diluted with ORFLO or similar diluent. A cell counting cassette is inserted into the instrument, and the sample is loaded into the port of the cassette. Thousands of cells are pulled, single file, through a "Cell Sensing Zone" (CSZ) in the thin-film membrane over 8-15 seconds. Following the run, the instrument uses proprietary curve-fitting in conjunction with a proprietary software algorithm to provide coincidence event correction along with an assessment of overall culture health by determining the ratio of the number of cells in the population of interest to the total number of particles.
The total particle counts include shrunken and broken-down dead cells, as well as other debris and contaminants. The results are presented in histogram format with an automatic curve fit, with gates that can be adjusted manually as needed. Ultimately, the Moxi Z enables counting with a precision and accuracy comparable to a Coulter Z2, the current gold standard, while providing additional culture health information. Furthermore, it achieves these results in less time, with a smaller footprint, with significantly easier operation and maintenance, and at a fraction of the cost of comparable technologies.
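The diameter-to-volume correspondence quoted above follows from treating each particle as an equivalent sphere, with 1 µm³ = 1 fL. A minimal sketch of that conversion, assuming spherical particles:

```python
import math

def sphere_volume_fl(diameter_um):
    """Equivalent spherical volume in femtoliters for a particle of the
    given diameter in microns (1 um^3 == 1 fL, so no unit factor needed)."""
    return (math.pi / 6.0) * diameter_um ** 3

# Endpoints of the two cassette diameter ranges quoted above:
for d in (4, 25, 3, 20):
    print(f"{d} um -> {sphere_volume_fl(d):.0f} fL")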
Cellular Biology, Issue 64, Molecular Biology, cell counting, coulter counting, cell culture health assessment, particle sizing, mammalian cells, Moxi Z
3842
Play Button
Patient-specific Modeling of the Heart: Estimation of Ventricular Fiber Orientations
Authors: Fijoy Vadakkumpadan, Hermenegild Arevalo, Natalia A. Trayanova.
Institutions: Johns Hopkins University.
Patient-specific simulations of heart (dys)function aimed at personalizing cardiac therapy are hampered by the absence of in vivo imaging technology for clinically acquiring myocardial fiber orientations. The objective of this project was to develop a methodology to estimate cardiac fiber orientations from in vivo images of patient heart geometries. An accurate representation of ventricular geometry and fiber orientations was reconstructed, respectively, from high-resolution ex vivo structural magnetic resonance (MR) and diffusion tensor (DT) MR images of a normal human heart, referred to as the atlas. Ventricular geometry of a patient heart was extracted, via semiautomatic segmentation, from an in vivo computed tomography (CT) image. Using image transformation algorithms, the atlas ventricular geometry was deformed to match that of the patient. Finally, the deformation field was applied to the atlas fiber orientations to obtain an estimate of patient fiber orientations. The accuracy of the fiber estimates was assessed using six normal and three failing canine hearts. The mean absolute difference between inclination angles of acquired and estimated fiber orientations was 15.4°. Computational simulations of ventricular activation maps and pseudo-ECGs in sinus rhythm and ventricular tachycardia indicated that there are no significant differences between estimated and acquired fiber orientations at a clinically observable level. The new insights obtained from the project will pave the way for the development of patient-specific models of the heart that can aid physicians in personalized diagnosis and decisions regarding electrophysiological interventions.
Bioengineering, Issue 71, Biomedical Engineering, Medicine, Anatomy, Physiology, Cardiology, Myocytes, Cardiac, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, MRI, Diffusion Magnetic Resonance Imaging, Cardiac Electrophysiology, computerized simulation (general), mathematical modeling (systems analysis), Cardiomyocyte, biomedical image processing, patient-specific modeling, Electrophysiology, simulation
50125
Play Button
Metabolic Labeling of Newly Transcribed RNA for High Resolution Gene Expression Profiling of RNA Synthesis, Processing and Decay in Cell Culture
Authors: Bernd Rädle, Andrzej J. Rutkowski, Zsolt Ruzsics, Caroline C. Friedel, Ulrich H. Koszinowski, Lars Dölken.
Institutions: Max von Pettenkofer Institute, University of Cambridge, Ludwig-Maximilians-University Munich.
The development of whole-transcriptome microarrays and next-generation sequencing has revolutionized our understanding of the complexity of cellular gene expression. Along with a better understanding of the involved molecular mechanisms, precise measurements of the underlying kinetics have become increasingly important. Here, these powerful methodologies face major limitations due to intrinsic properties of the template samples they study, i.e. total cellular RNA. In many cases changes in total cellular RNA occur either too slowly or too quickly to represent the underlying molecular events and their kinetics with sufficient resolution. In addition, the contribution of alterations in RNA synthesis, processing, and decay are not readily differentiated. We recently developed high-resolution gene expression profiling to overcome these limitations. Our approach is based on metabolic labeling of newly transcribed RNA with 4-thiouridine (thus also referred to as 4sU-tagging) followed by rigorous purification of newly transcribed RNA using thiol-specific biotinylation and streptavidin-coated magnetic beads. It is applicable to a broad range of organisms including vertebrates, Drosophila, and yeast. We successfully applied 4sU-tagging to study real-time kinetics of transcription factor activities, provide precise measurements of RNA half-lives, and obtain novel insights into the kinetics of RNA processing. Finally, computational modeling can be employed to generate an integrated, comprehensive analysis of the underlying molecular mechanisms.
Genetics, Issue 78, Cellular Biology, Molecular Biology, Microbiology, Biochemistry, Eukaryota, Investigative Techniques, Biological Phenomena, Gene expression profiling, RNA synthesis, RNA processing, RNA decay, 4-thiouridine, 4sU-tagging, microarray analysis, RNA-seq, RNA, DNA, PCR, sequencing
50195
Play Button
Simultaneous EEG Monitoring During Transcranial Direct Current Stimulation
Authors: Pedro Schestatsky, Leon Morales-Quezada, Felipe Fregni.
Institutions: Universidade Federal do Rio Grande do Sul, Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), Harvard Medical School, De Montfort University.
Transcranial direct current stimulation (tDCS) is a technique that delivers weak electric currents through the scalp. This constant electric current induces shifts in neuronal membrane excitability, resulting in secondary changes in cortical activity. Although tDCS has most of its neuromodulatory effects on the underlying cortex, tDCS effects can also be observed in distant neural networks. Therefore, concomitant EEG monitoring of the effects of tDCS can provide valuable information on the mechanisms of tDCS. In addition, EEG findings can be an important surrogate marker for the effects of tDCS and thus can be used to optimize its parameters. This combined EEG-tDCS system can also be used for preventive treatment of neurological conditions characterized by abnormal peaks of cortical excitability, such as seizures. Such a system would be the basis of a non-invasive closed-loop device. In this article, we present a novel device that is capable of utilizing tDCS and EEG simultaneously. For that, we describe in a step-by-step fashion the main procedures of the application of this device using schematic figures, tables and video demonstrations. Additionally, we provide a literature review on clinical uses of tDCS and its cortical effects measured by EEG techniques.
Behavior, Issue 76, Medicine, Neuroscience, Neurobiology, Anatomy, Physiology, Biomedical Engineering, Psychology, electroencephalography, electroencephalogram, EEG, transcranial direct current stimulation, tDCS, noninvasive brain stimulation, neuromodulation, closed-loop system, brain, imaging, clinical techniques
50426
Play Button
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify tract metrics as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis at an individual subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
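The FA metric compared voxelwise above has a standard closed form in terms of the three eigenvalues of the diffusion tensor: FA = sqrt(3/2) · ‖λ − λ̄‖ / ‖λ‖. A minimal sketch (the example eigenvalues are hypothetical, in units of 10⁻³ mm²/s):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from the three diffusion-tensor eigenvalues.
    FA = sqrt(3/2) * sqrt(sum((li - mean)^2)) / sqrt(sum(li^2)); ranges
    from 0 (isotropic diffusion) to 1 (diffusion along a single axis)."""
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den > 0 else 0.0

# Isotropic diffusion (e.g. free water) -> FA = 0
print(fractional_anisotropy(1.0, 1.0, 1.0))
# Strongly directional diffusion (coherent WM tract) -> FA near 1
print(fractional_anisotropy(1.7, 0.3, 0.2))
```

In a voxelwise analysis this function would be evaluated per voxel on the eigenvalues produced by tensor fitting, yielding the FA maps that are then compared between patients and matched controls.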
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
50427
Play Button
Reconstitution of a Kv Channel into Lipid Membranes for Structural and Functional Studies
Authors: Sungsoo Lee, Hui Zheng, Liang Shi, Qiu-Xing Jiang.
Institutions: University of Texas Southwestern Medical Center at Dallas.
To study the lipid-protein interaction in a reductionistic fashion, it is necessary to incorporate the membrane proteins into membranes of well-defined lipid composition. We are studying the lipid-dependent gating effects in a prototype voltage-gated potassium (Kv) channel, and have worked out detailed procedures to reconstitute the channels into different membrane systems. Our reconstitution procedures take into consideration both detergent-induced fusion of vesicles and the fusion of protein/detergent micelles with the lipid/detergent mixed micelles, as well as the importance of reaching an equilibrium distribution of lipids among the protein/detergent/lipid and the detergent/lipid mixed micelles. Our data suggest that the insertion of the channels in the lipid vesicles is relatively random in orientation, and the reconstitution efficiency is so high that no detectable protein aggregates were seen in fractionation experiments. We have utilized the reconstituted channels to determine the conformational states of the channels in different lipids, record electrical activities of a small number of channels incorporated in planar lipid bilayers, screen for conformation-specific ligands from a phage-displayed peptide library, and support the growth of 2D crystals of the channels in membranes. The reconstitution procedures described here may be adapted for studying other membrane proteins in lipid bilayers, especially for the investigation of the lipid effects on the eukaryotic voltage-gated ion channels.
Molecular Biology, Issue 77, Biochemistry, Genetics, Cellular Biology, Structural Biology, Biophysics, Membrane Lipids, Phospholipids, Carrier Proteins, Membrane Proteins, Micelles, Molecular Motor Proteins, life sciences, biochemistry, Amino Acids, Peptides, and Proteins, lipid-protein interaction, channel reconstitution, lipid-dependent gating, voltage-gated ion channel, conformation-specific ligands, lipids
50436
Play Button
Measuring Diffusion Coefficients via Two-photon Fluorescence Recovery After Photobleaching
Authors: Kelley D. Sullivan, Edward B. Brown.
Institutions: University of Rochester, University of Rochester.
Multi-photon fluorescence recovery after photobleaching (MP-FRAP) is a microscopy technique used to measure the diffusion coefficient (or analogous transport parameters) of macromolecules, and can be applied to both in vitro and in vivo biological systems. MP-FRAP is performed by photobleaching a region of interest within a fluorescent sample using an intense laser flash, then attenuating the beam and monitoring the fluorescence as still-fluorescent molecules from outside the region of interest diffuse in to replace the photobleached molecules. We will begin our demonstration by aligning the laser beam through the Pockels cell (laser modulator) and along the optical path through the laser scan box and objective lens to the sample. For simplicity, we will use a sample of aqueous fluorescent dye. We will then determine the proper experimental parameters for our sample, including monitor and bleaching powers, bleach duration, bin widths (for photon counting), and fluorescence recovery time. Next, we will describe the procedure for taking recovery curves, a process that can be largely automated via LabVIEW (National Instruments, Austin, TX) for enhanced throughput. Finally, the diffusion coefficient is determined by fitting the recovery data to the appropriate mathematical model using a least-squares fitting algorithm, readily programmable using software such as MATLAB (The Mathworks, Natick, MA).
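The final fitting step can be sketched in miniature. The true MP-FRAP recovery model is considerably more involved, so the single-exponential recovery F(t) = F∞(1 − exp(−t/τ)), the hypothetical beam radius `w`, and the simplified relation D = w²/(4τ) below are illustrative assumptions used only to show the least-squares idea:

```python
import math

def fit_recovery_tau(times, fluorescence, tau_grid):
    """Grid-search least-squares fit of a simple exponential recovery model
    F(t) = F_inf * (1 - exp(-t / tau)) to a bleach-recovery curve.
    Returns the tau from tau_grid minimizing the sum of squared residuals.
    (A crude stand-in for fitting the full MP-FRAP diffusion model.)"""
    f_inf = fluorescence[-1]  # assume the curve has reached its plateau
    best_tau, best_err = None, float("inf")
    for tau in tau_grid:
        err = sum((f - f_inf * (1.0 - math.exp(-t / tau))) ** 2
                  for t, f in zip(times, fluorescence))
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

# Synthetic, noise-free recovery curve generated with tau = 0.5 s
ts = [0.05 * i for i in range(1, 200)]
fs = [1.0 - math.exp(-t / 0.5) for t in ts]
tau = fit_recovery_tau(ts, fs, [0.1 * k for k in range(1, 50)])

w = 0.5                   # hypothetical focal radius in microns
D = w ** 2 / (4.0 * tau)  # simplified D = w^2 / (4 * tau_D)
```

On this synthetic curve the grid search recovers τ = 0.5 s; real data would be fit to the appropriate MP-FRAP model with a proper nonlinear least-squares optimizer, as described above.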
Cellular Biology, Issue 36, Diffusion, fluorescence recovery after photobleaching, MP-FRAP, FPR, multi-photon
1636
Play Button
Preparation of Artificial Bilayers for Electrophysiology Experiments
Authors: Ruchi Kapoor, Jung H. Kim, Helgi Ingolfson, Olaf Sparre Andersen.
Institutions: Weill Cornell Medical College of Cornell University.
Planar lipid bilayers, also called artificial lipid bilayers, allow you to study ion-conducting channels in a well-defined environment. These bilayers can be used for many different studies, such as the characterization of membrane-active peptides, the reconstitution of ion channels, or investigations of how changes in lipid bilayer properties alter the function of bilayer-spanning channels. Here, we show how to form a planar bilayer and how to isolate small patches from the bilayer; in a second video we also demonstrate a procedure for using gramicidin channels to determine changes in lipid bilayer elastic properties. We also demonstrate the individual steps needed to prepare the bilayer chamber and electrodes, and how to test that the bilayer is suitable for single-channel measurements.
Cellular Biology, Issue 20, Springer Protocols, Artificial Bilayers, Bilayer Patch Experiments, Lipid Bilayers, Bilayer Punch Electrodes, Electrophysiology
1033
Play Button
Protocols for Oral Infection of Lepidopteran Larvae with Baculovirus
Authors: Wendy Sparks, Huarong Li, Bryony Bonning.
Institutions: Iowa State University.
Baculoviruses are widely used both as protein expression vectors and as insect pest control agents. This video shows how lepidopteran larvae can be infected with polyhedra by droplet feeding and diet plug-based bioassays. This accompanying Springer Protocols section provides an overview of the baculovirus lifecycle and use of baculoviruses as insecticidal agents, including discussion of the pros and cons for use of baculoviruses as insecticides, and progress made in genetic enhancement of baculoviruses for improved insecticidal efficacy.
Plant Biology, Issue 19, Springer Protocols, Baculovirus insecticides, recombinant baculovirus, insect pest management
888
Play Button
Interview: HIV-1 Proviral DNA Excision Using an Evolved Recombinase
Authors: Joachim Hauber.
Institutions: Heinrich-Pette-Institute for Experimental Virology and Immunology, University of Hamburg.
HIV-1 integrates into the host chromosome of infected cells and persists as a provirus flanked by long terminal repeats. Current treatment strategies primarily target virus enzymes or virus-cell fusion, suppressing the viral life cycle without eradicating the infection. Since the integrated provirus is not targeted by these approaches, new resistant strains of HIV-1 may emerge. Here, we report that the engineered recombinase Tre (see Molecular Evolution of the Tre Recombinase, Buchholz, F., Max Planck Institute for Cell Biology and Genetics, Dresden) efficiently excises integrated HIV-1 proviral DNA from the genome of infected cells. We produced loxLTR-containing viral pseudotypes and infected HeLa cells to examine whether Tre recombinase can excise the provirus from the genome of HIV-1-infected human cells. A virus particle-releasing cell line was cloned and transfected with a plasmid expressing Tre or with a parental control vector. Recombinase activity and virus production were monitored. All assays demonstrated the efficient deletion of the provirus from infected cells without visible cytotoxic effects. These results serve as proof of principle that it is possible to evolve a recombinase to specifically target an HIV-1 LTR and that this recombinase is capable of excising the HIV-1 provirus from the genome of HIV-1-infected human cells. Before an engineered recombinase could enter the therapeutic arena, however, significant obstacles need to be overcome. Among the most critical issues that we face are an efficient and safe delivery to targeted cells and the absence of side effects.
Medicine, Issue 16, HIV, Cell Biology, Recombinase, provirus, HeLa Cells
793
Play Button
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Authors: John Marshall, Koji Morikawa, Nicholas Manoukis, Charles Taylor.
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease
227
Play Button
Wolbachia Bacterial Infection in Drosophila
Authors: Horacio Frydman.
Institutions: Boston University.
Developmental Biology, Issue 2, Drosophila, infection, fly
158
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation to the abstract.