The aim of de novo protein design is to find amino acid sequences that will fold into a desired three-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms, and protein-ligand binding sites. All the predictions are tagged with a confidence score that indicates how accurate the predictions are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distances and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by users on the basis of experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
A Microscopic Phenotypic Assay for the Quantification of Intracellular Mycobacteria Adapted for High-throughput/High-content Screening
Institutions: Université de Lille.
Despite the availability of therapy and a vaccine, tuberculosis (TB) remains one of the most deadly and widespread bacterial infections in the world. For several decades, the emergence of multi- and extensively drug-resistant strains has posed a serious threat to the control of tuberculosis. Therefore, it is essential to identify new targets and pathways critical for the causative agent of tuberculosis, Mycobacterium tuberculosis (Mtb), and to search for novel chemicals that could become TB drugs. One approach is to set up methods suitable for genetic and chemical screens of large-scale libraries, enabling the search for a needle in a haystack. To this end, we developed a phenotypic assay relying on the detection of fluorescently labeled Mtb within fluorescently labeled host cells using automated confocal microscopy. This in vitro assay allows an image-based quantification of the colonization of the host by Mtb and was optimized for the 384-well microplate format, which is suitable for screens of siRNA, chemical compound, or Mtb mutant libraries. The images are then processed for multiparametric analysis, which provides readouts reflecting the pathogenesis of Mtb within host cells.
Infection, Issue 83, Mycobacterium tuberculosis, High-content/High-throughput screening, chemogenomics, Drug Discovery, siRNA library, automated confocal microscopy, image-based analysis
A Protocol for Phage Display and Affinity Selection Using Recombinant Protein Baits
Institutions: University of Kentucky.
Using recombinant phage as a scaffold to present various protein portions encoded by a directionally cloned cDNA library to immobilized bait molecules is an efficient means to discover interactions. The technique has largely been used to discover protein-protein interactions, but the bait molecule to be challenged need not be restricted to proteins. The protocol presented here has been optimized to allow a modest number of baits to be screened in replicates to maximize the identification of independent clones presenting the same protein. This permits greater confidence that the interacting proteins identified are legitimate interactors of the bait molecule. Monitoring the phage titer after each affinity selection round provides information on how the affinity selection is progressing, as well as on the efficacy of negative controls. One means of titering the phage, and how and what to prepare in advance to allow this process to proceed as efficiently as possible, is presented. Attributes of amplicons retrieved following isolation of independent plaques are highlighted that can be used to ascertain how well the affinity selection has progressed. Troubleshooting techniques to minimize false positives or to bypass persistently recovered phage are explained. Means of reducing viral contamination flare-ups are discussed.
Biochemistry, Issue 84, Affinity selection, Phage display, protein-protein interaction
Protein-protein Interactions Visualized by Bimolecular Fluorescence Complementation in Tobacco Protoplasts and Leaves
Institutions: Ludwig-Maximilians-Universität, München.
Many proteins interact transiently with other proteins or are integrated into multi-protein complexes to perform their biological function. Bimolecular fluorescence complementation (BiFC) is an in vivo method to monitor such interactions in plant cells. In the presented protocol, the investigated candidate proteins are fused to complementary halves of fluorescent proteins, and the respective constructs are introduced into plant cells via Agrobacterium-mediated transformation. Subsequently, the proteins are transiently expressed in tobacco leaves and the restored fluorescent signals can be detected with a confocal laser scanning microscope in the intact cells. This allows not only visualization of the interaction itself, but also determination of the subcellular localization of the protein complexes. For this purpose, marker genes containing a fluorescent tag can be coexpressed along with the BiFC constructs, thus visualizing cellular structures such as the endoplasmic reticulum, mitochondria, the Golgi apparatus, or the plasma membrane. The fluorescent signal can be monitored either directly in epidermal leaf cells or in single protoplasts, which can be easily isolated from the transformed tobacco leaves. BiFC is ideally suited to study protein-protein interactions in their natural surroundings within the living cell. However, it has to be considered that the expression has to be driven by strong promoters and that the interaction partners are modified by fusion of the relatively large fluorescence tags, which might interfere with the interaction mechanism. Nevertheless, BiFC is an excellent complementary approach to other commonly applied methods for investigating protein-protein interactions, such as coimmunoprecipitation, in vitro pull-down assays, or yeast two-hybrid experiments.
Plant Biology, Issue 85, Tetratricopeptide repeat domain, chaperone, chloroplasts, endoplasmic reticulum, HSP90, Toc complex, Sec translocon, BiFC
High-resolution, High-speed, Three-dimensional Video Imaging with Digital Fringe Projection Techniques
Institutions: Iowa State University.
Digital fringe projection (DFP) techniques provide dense 3D measurements of dynamically changing surfaces. Like the human eyes and brain, DFP uses triangulation between matching points in two views of the same scene at different angles to compute depth. However, unlike a stereo-based method, DFP uses a digital video projector to replace one of the cameras1. The projector rapidly projects a known sinusoidal pattern onto the subject, and the surface of the subject distorts these patterns in the camera's field of view. Three distorted patterns (fringe images) from the camera can be used to compute the depth using triangulation.
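The phase recovery behind this three-image computation can be illustrated with the standard three-step phase-shifting formula. This is a simplified sketch, not the authors' implementation: a real DFP pipeline additionally performs phase unwrapping and a calibration step that converts phase to depth.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase from three fringe images with 120-degree phase shifts.

    For I_k = A + B*cos(phi + delta_k), delta_k in {-120, 0, +120} degrees:
        phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)
    """
    I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I1, I2, I3))
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic fringe intensities over a known phase ramp
phi_true = np.linspace(-np.pi + 0.1, np.pi - 0.1, 100)
shifts = (-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0)
I1, I2, I3 = (0.5 + 0.5 * np.cos(phi_true + s) for s in shifts)

phi = wrapped_phase(I1, I2, I3)
print(np.allclose(phi, phi_true))  # True: the phase is recovered exactly
```

The recovered phase is wrapped to (-π, π]; unwrapping and triangulation against the calibrated projector-camera geometry then yield depth per pixel.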
Unlike other 3D measurement methods, DFP techniques lead to systems that tend to be faster, lower in equipment cost, more flexible, and easier to develop. DFP systems can also achieve the same measurement resolution as the camera. For this reason, DFP and other digital structured light techniques have recently been the focus of intense research (as summarized in1-5). Taking advantage of DFP, the graphics processing unit, and optimized algorithms, we have developed a system capable of 30 Hz 3D video data acquisition, reconstruction, and display for over 300,000 measurement points per frame6,7. Binary defocusing DFP methods can achieve even greater speeds8. Diverse applications can benefit from DFP techniques. Our collaborators have used our systems for facial function analysis9, facial animation10, cardiac mechanics studies11, and fluid surface measurements, but many other potential applications exist. This video will teach the fundamentals of DFP techniques and illustrate the design and operation of a binary defocusing DFP system.
Physics, Issue 82, Structured light, Fringe projection, 3D imaging, 3D scanning, 3D video, binary defocusing, phase-shifting
Structure of HIV-1 Capsid Assemblies by Cryo-electron Microscopy and Iterative Helical Real-space Reconstruction
Institutions: University of Pittsburgh School of Medicine.
Cryo-electron microscopy (cryo-EM), combined with image processing, is an increasingly powerful tool for structure determination of macromolecular protein complexes and assemblies. In fact, single particle electron microscopy1 and two-dimensional (2D) electron crystallography2 have become relatively routine methodologies, and a large number of structures have been solved using these methods. At the same time, image processing and three-dimensional (3D) reconstruction of helical objects have developed rapidly, especially the iterative helical real-space reconstruction (IHRSR) method3, which uses single particle analysis tools in conjunction with helical symmetry. Many biological entities function in filamentous or helical forms, including actin filaments4, amyloid fibers6, tobacco mosaic viruses7, and bacterial flagella8. Because a 3D density map of a helical entity can be attained from a single projection image, in contrast to the many images required for 3D reconstruction of a non-helical object, the IHRSR method makes structural analysis of such flexible and disordered helical assemblies attainable.
In this video article, we provide detailed protocols for obtaining a 3D density map of a helical protein assembly (HIV-1 capsid9 is our example), including protocols for cryo-EM specimen preparation, low dose data collection by cryo-EM, indexing of helical diffraction patterns, and image processing and 3D reconstruction using IHRSR. Compared to other techniques, cryo-EM offers optimal specimen preservation under near native conditions. Samples are embedded in a thin layer of vitreous ice, by rapid freezing, and imaged in electron microscopes at liquid nitrogen temperature, under low dose conditions to minimize radiation damage. Sample images are obtained under near native conditions at the expense of low signal and low contrast in the recorded micrographs. Fortunately, the process of helical reconstruction has largely been automated, with the exception of indexing the helical diffraction pattern. Here, we describe an approach to index helical structures and determine helical symmetries (helical parameters) from digitized micrographs, an essential step for 3D helical reconstruction. Briefly, we obtain an initial 3D density map by applying the IHRSR method. This initial map is then iteratively refined by introducing constraints for the alignment parameters of each segment, thus controlling their degrees of freedom. Further improvement is achieved by correcting for the contrast transfer function (CTF) of the electron microscope (amplitude and phase correction) and by optimizing the helical symmetry of the assembly.
Immunology, Issue 54, cryo-electron microscopy, helical indexing, helical real-space reconstruction, tubular assemblies, HIV-1 capsid
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
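The localization step that achieves the ~10-30 nm precision can be illustrated with a minimal sketch: a single emitter's diffraction-limited spot is reduced to a sub-pixel position estimate. FPALM analysis software typically uses Gaussian fitting; the intensity-weighted centroid below is a simplified stand-in on noise-free synthetic data.

```python
import numpy as np

def localize_centroid(img):
    """Estimate a single emitter's sub-pixel position as the
    intensity-weighted centroid of its diffraction-limited spot.
    (A sketch; production FPALM analysis fits a 2D Gaussian.)"""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic Gaussian PSF spot at a known sub-pixel position
ys, xs = np.indices((15, 15))
y0, x0, sigma = 7.3, 6.8, 2.0
spot = np.exp(-((ys - y0) ** 2 + (xs - x0) ** 2) / (2.0 * sigma ** 2))

y_hat, x_hat = localize_centroid(spot)
print(round(y_hat, 1), round(x_hat, 1))  # close to (7.3, 6.8)
```

The achievable precision in practice scales roughly with the PSF width divided by the square root of the number of detected photons, which is why bright, well-separated single-molecule events are essential.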
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
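The software-guided setup of experiment combinations at the heart of a DoE workflow can be sketched with a minimal full-factorial enumeration. The factor names and levels below are hypothetical placeholders, not the factors or values used in the study (which also employed fractional and augmented designs rather than exhaustive enumeration).

```python
from itertools import product

# Hypothetical factors for a transient-expression experiment
factors = {
    "promoter": ["35S", "nos"],
    "incubation_temp_C": [22, 25],
    "harvest_day": [3, 5, 7],
}

# Full-factorial design: one run per combination of factor levels
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(runs))  # 12 runs (2 * 2 * 3)
```

For many factors the full factorial grows combinatorially, which is why DoE software selects an optimal subset of runs and augments the design step-wise, as described above.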
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Averaging of Viral Envelope Glycoprotein Spikes from Electron Cryotomography Reconstructions using Jsubtomo
Institutions: University of Oxford.
Enveloped viruses utilize membrane glycoproteins on their surface to mediate entry into host cells. Three-dimensional structural analysis of these glycoprotein 'spikes' is often technically challenging but important for understanding viral pathogenesis and in drug design. Here, a protocol is presented for viral spike structure determination through computational averaging of electron cryo-tomography data. Electron cryo-tomography is a technique in electron microscopy used to derive three-dimensional tomographic volume reconstructions, or tomograms, of pleomorphic biological specimens such as membrane viruses in a near-native, frozen-hydrated state. These tomograms reveal structures of interest in three dimensions, albeit at low resolution. Computational averaging of sub-volumes, or sub-tomograms, is necessary to obtain higher resolution detail of repeating structural motifs, such as viral glycoprotein spikes. A detailed computational approach for aligning and averaging sub-tomograms using the Jsubtomo software package is outlined. This approach enables visualization of the structure of viral glycoprotein spikes to a resolution in the range of 20-40 Å and study of higher order spike-to-spike interactions on the virion membrane. Typical results are presented for Bunyamwera virus, an enveloped virus from the family Bunyaviridae. This family is a structurally diverse group of pathogens posing a threat to human and animal health.
Immunology, Issue 92, electron cryo-microscopy, cryo-electron microscopy, electron cryo-tomography, cryo-electron tomography, glycoprotein spike, enveloped virus, membrane virus, structure, subtomogram, averaging
Scalable Nanohelices for Predictive Studies and Enhanced 3D Visualization
Institutions: University of California Merced, University of California Merced.
Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO2) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for mechanical energy harvesting purposes.
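The parametric helix equations used to carve these shapes can be sketched as follows. This is an illustrative reimplementation, not the study's AWK or C++ code, and the dimensions are arbitrary: atoms of the bulk model lying within some tube radius of this curve would be kept to form the nanospring.

```python
import numpy as np

def helix_points(radius, pitch, turns, n_points=1000):
    """Points along a helix defined parametrically:
       x = r*cos(t), y = r*sin(t), z = pitch * t / (2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi * turns, n_points)
    return np.column_stack((radius * np.cos(t),
                            radius * np.sin(t),
                            pitch * t / (2.0 * np.pi)))

# Illustrative dimensions (angstroms): a 3-turn spring centerline
pts = helix_points(radius=20.0, pitch=15.0, turns=3)
print(pts.shape)  # (1000, 3)
```

Every point sits at distance `radius` from the z-axis, and the total height equals `pitch * turns`; sweeping `pitch` regenerates the family of pitch values mentioned above.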
Physics, Issue 93, Helical atomistic models; open-source coding; graphical user interface; visualization software; molecular dynamics simulations; graphical processing unit accelerated simulations.
Optimized Negative Staining: a High-throughput Protocol for Examining Small and Asymmetric Protein Structure by Electron Microscopy
Institutions: The Molecular Foundry.
Structural determination of proteins is rather challenging for proteins with molecular masses between 40 - 200 kDa. Considering that more than half of natural proteins have a molecular mass between 40 - 200 kDa1,2, a robust and high-throughput method with a nanometer resolution capability is needed. Negative staining (NS) electron microscopy (EM) is an easy, rapid, and qualitative approach which has frequently been used in research laboratories to examine protein structure and protein-protein interactions. Unfortunately, conventional NS protocols often generate structural artifacts on proteins, especially with lipoproteins, which frequently present rouleaux artifacts. By using images of lipoproteins from cryo-electron microscopy (cryo-EM) as a standard, the key parameters in NS specimen preparation conditions were recently screened and reported as the optimized NS protocol (OpNS), a modified conventional NS protocol3. Artifacts like rouleaux can be greatly limited by OpNS, which additionally provides high contrast along with reasonably high-resolution (near 1 nm) images of small and asymmetric proteins. These high-resolution, high-contrast images are even favorable for 3D reconstruction of an individual protein (a single object, no averaging), such as a 160 kDa antibody, through the method of electron tomography4,5. Moreover, OpNS can be a high-throughput tool to examine hundreds of samples of small proteins. For example, the previously published mechanism of the 53 kDa cholesteryl ester transfer protein (CETP) involved the screening and imaging of hundreds of samples6. Considering that cryo-EM rarely succeeds in imaging proteins less than 200 kDa, and that no cryo-EM study has yet involved screening over one hundred sample conditions, it is fair to call OpNS a high-throughput method for studying small proteins. Hopefully the OpNS protocol presented here can be a useful tool to push the boundaries of EM and accelerate EM studies of small protein structure, dynamics and mechanisms.
Environmental Sciences, Issue 90, small and asymmetric protein structure, electron microscopy, optimized negative staining
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
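The semi-automated end of this spectrum can be illustrated with the simplest possible segmentation primitive: a global intensity threshold applied to a 3D volume. The synthetic volume below is a stand-in for real microscopy data, which is far noisier and usually requires the more elaborate approaches described above.

```python
import numpy as np

# Synthetic 3D volume: dim background plus one bright spherical "organelle"
rng = np.random.default_rng(1)
vol = rng.normal(loc=0.1, scale=0.02, size=(64, 64, 64))

z, y, x = np.ogrid[:64, :64, :64]
sphere = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 10 ** 2
vol[sphere] += 0.8  # feature voxels become ~0.9

# Global threshold between the two intensity modes extracts the feature
threshold = 0.5
mask = vol > threshold

# Segmented voxel count vs. the continuous sphere volume 4/3*pi*r^3
print(mask.sum(), int(4.0 / 3.0 * np.pi * 10 ** 3))
```

Once a binary mask exists, it feeds directly into surface rendering and quantitative analysis; when intensity modes overlap or features touch, thresholding fails and manual tracing or custom algorithms take over, which is exactly the triage decision described above.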
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Synthesis of an Intein-mediated Artificial Protein Hydrogel
Institutions: Texas A&M University, College Station, Texas A&M University, College Station.
We present the synthesis of a highly stable protein hydrogel mediated by a split-intein-catalyzed protein trans-splicing reaction. The building blocks of this hydrogel are two protein block-copolymers each containing a subunit of a trimeric protein that serves as a crosslinker and one half of a split intein. A highly hydrophilic random coil is inserted into one of the block-copolymers for water retention. Mixing of the two protein block copolymers triggers an intein trans-splicing reaction, yielding a polypeptide unit with crosslinkers at either end that rapidly self-assembles into a hydrogel. This hydrogel is very stable under both acidic and basic conditions, at temperatures up to 50 °C, and in organic solvents. The hydrogel rapidly reforms after shear-induced rupture. Incorporation of a "docking station peptide" into the hydrogel building block enables convenient incorporation of "docking protein"-tagged target proteins. The hydrogel is compatible with tissue culture growth media, supports the diffusion of 20 kDa molecules, and enables the immobilization of bioactive globular proteins. The application of the intein-mediated protein hydrogel as an organic-solvent-compatible biocatalyst was demonstrated by encapsulating the horseradish peroxidase enzyme and corroborating its activity.
Bioengineering, Issue 83, split-intein, self-assembly, shear-thinning, enzyme, immobilization, organic synthesis
Visualization of Recombinant DNA and Protein Complexes Using Atomic Force Microscopy
Institutions: Seattle University, Seattle University.
Atomic force microscopy (AFM) allows for the visualizing of individual proteins, DNA molecules, protein-protein complexes, and DNA-protein complexes. On the end of the microscope's cantilever is a nano-scale probe, which traverses image areas ranging from nanometers to micrometers, measuring the elevation of macromolecules resting on the substrate surface at any given point. Electrostatic forces cause proteins, lipids, and nucleic acids to loosely attach to the substrate in random orientations and permit imaging. The generated data resemble a topographical map, where the macromolecules resolve as three-dimensional particles of discrete sizes (Figure 1
. Tapping mode AFM involves the repeated oscillation of the cantilever, which permits imaging of relatively soft biomaterials such as DNA and proteins. One of the notable benefits of AFM over other nanoscale microscopy techniques is its relative adaptability to visualize individual proteins and macromolecular complexes in aqueous buffers, including near-physiologic buffered conditions, in real-time, and without staining or coating the sample to be imaged.
The method presented here describes the imaging of DNA and an immunoadsorbed transcription factor (i.e. the glucocorticoid receptor, GR) in buffered solution (Figure 2). Immunoadsorbed proteins and protein complexes can be separated from the immunoadsorbing antibody-bead pellet by competition with the antibody epitope and then imaged (Figure 2A). This allows for biochemical manipulation of the biomolecules of interest prior to imaging. Once purified, DNA and proteins can be mixed and the resultant interacting complex can be imaged as well. Binding of DNA to mica requires a divalent cation 3, such as Ni2+, which can be added to sample buffers yet maintain protein activity. Using a similar approach, AFM has been utilized to visualize individual enzymes, including RNA polymerase 4 and a repair enzyme 5, bound to individual DNA strands. These experiments provide significant insight into the protein-protein and DNA-protein biophysical interactions taking place at the molecular level. Imaging individual macromolecular particles with AFM can be useful for determining particle homogeneity and for identifying the physical arrangement of constituent components of the imaged particles. While the present method was developed for visualization of GR-chaperone protein complexes 1,2 and the DNA strands to which the GR can bind, it can be applied broadly to imaging DNA and protein samples from a variety of sources.
Bioengineering, Issue 53, atomic force microscopy, glucocorticoid receptor, protein-protein interaction, DNA-protein interaction, scanning probe microscopy, immunoadsorption
Large-scale Gene Knockdown in C. elegans Using dsRNA Feeding Libraries to Generate Robust Loss-of-function Phenotypes
Institutions: University of Massachusetts, Amherst, University of Massachusetts, Amherst, University of Massachusetts, Amherst.
RNA interference by feeding worms bacteria expressing dsRNAs has been a useful tool to assess gene function in C. elegans. While this strategy works well when a small number of genes are targeted for knockdown, large scale feeding screens show variable knockdown efficiencies, which limits their utility. We have deconstructed previously published RNAi knockdown protocols and found that the primary source of the reduced knockdown can be attributed to the loss of dsRNA-encoding plasmids from the bacteria fed to the animals. Based on these observations, we have developed a dsRNA feeding protocol that greatly reduces or eliminates plasmid loss to achieve efficient, high-throughput knockdown. We demonstrate that this protocol produces robust, reproducible knockdown of C. elegans genes in multiple tissue types, including neurons, and permits efficient knockdown in large scale screens. This protocol uses a commercially available dsRNA feeding library and describes all steps needed to duplicate the library and perform dsRNA screens. The protocol does not require the use of any sophisticated equipment, and can therefore be performed by any C. elegans laboratory.
Developmental Biology, Issue 79, Caenorhabditis elegans (C. elegans), Gene Knockdown Techniques, C. elegans, dsRNA interference, gene knockdown, large scale feeding screen
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Institutions: Virginia Commonwealth University, Virginia Commonwealth University Reanimation Engineering Science (VCURES) Center, Virginia Commonwealth University, Virginia Commonwealth University, Virginia Commonwealth University.
In this paper we present an automated system, based mainly on computed tomography (CT) images, consisting of two main components: midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, an estimate of the ideal midline is first obtained from the symmetry of the skull and anatomical features in the brain CT scan. The ventricles are then segmented from the CT scan and used as a guide for identification of the actual midline through shape matching. These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, features related to ICP, such as texture information and blood amount, are extracted from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. Machine learning techniques, including feature selection and classification with Support Vector Machines (SVMs), are employed to build the prediction model using RapidMiner. Evaluation of the predictions shows the potential usefulness of the model. The estimated midline shift and predicted ICP levels may be used as a fast pre-screening step to help physicians decide whether to recommend invasive ICP monitoring.
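The classification step of such a pipeline can be sketched with a from-scratch linear SVM trained by sub-gradient descent. This is only a minimal stand-in for the SVM models the authors build in RapidMiner: the two feature values below are synthetic placeholders for normalized CT-derived features (e.g. a texture score and a blood amount), not clinical data.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent for a linear SVM.
    X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(n), n):   # random pass over the data
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # shrink weights (regularization), then correct if margin violated
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Synthetic two-feature example: higher values labelled "elevated ICP" (+1),
# lower values labelled "normal" (-1).
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.1, 0.2], [0.2, 0.1], [0.3, 0.2]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])
```

In practice the feature-selection step would prune uninformative CT and clinical features before training, and a kernel SVM could replace the linear one if the classes are not linearly separable.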
Medicine, Issue 74, Biomedical Engineering, Molecular Biology, Neurobiology, Biophysics, Physiology, Anatomy, Brain CT Image Processing, CT, Midline Shift, Intracranial Pressure Pre-screening, Gaussian Mixture Model, Shape Matching, Machine Learning, traumatic brain injury, TBI, imaging, clinical techniques
Analyzing and Building Nucleic Acid Structures with 3DNA
Institutions: Rutgers - The State University of New Jersey, Columbia University .
The 3DNA software package is a popular and versatile bioinformatics tool with capabilities to analyze, construct, and visualize three-dimensional nucleic acid structures. This article presents detailed protocols for a subset of new and popular features available in 3DNA, applicable to both individual structures and ensembles of related structures. Protocol 1 lists the set of instructions needed to download and install the software. This is followed, in Protocol 2, by the analysis of a nucleic acid structure, including the assignment of base pairs and the determination of rigid-body parameters that describe the structure and, in Protocol 3, by a description of the reconstruction of an atomic model of a structure from its rigid-body parameters. The most recent version of 3DNA, version 2.1, has new features for the analysis and manipulation of ensembles of structures, such as those deduced from nuclear magnetic resonance (NMR) measurements and molecular dynamics (MD) simulations; these features are presented in Protocols 4 and 5. In addition to the 3DNA stand-alone software package, the w3DNA web server, located at https://w3dna.rutgers.edu, provides a user-friendly interface to selected features of the software. Protocol 6 demonstrates a novel feature of the site for building models of long DNA molecules decorated with bound proteins at user-specified locations.
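The structural analysis in Protocol 2 reports the six rigid-body step parameters (Shift, Slide, Rise, Tilt, Roll, Twist) in plain-text tables. A small post-processing sketch like the one below can pull those values into Python for further statistics; the inline sample only mimics the table layout of a 3DNA output file, and exact column spacing may differ between 3DNA versions.

```python
# Sketch: extract base-pair step parameters from 3DNA-style output text.
SAMPLE_OUT = """\
Local base-pair step parameters
    step       Shift     Slide      Rise      Tilt      Roll     Twist
   1 GC/GC     -0.12     -0.32      3.30     -0.45      2.45     35.67
   2 CA/TG      0.45      0.12      3.38      1.20      5.10     34.20
"""

def parse_step_parameters(text):
    """Return a list of dicts, one per base-pair step."""
    lines = iter(text.splitlines())
    steps = []
    for line in lines:
        if line.strip() == "Local base-pair step parameters":
            header = next(lines).split()[1:]     # Shift .. Twist
            for row in lines:
                fields = row.split()
                if len(fields) != 2 + len(header):
                    break                        # end of the table
                values = [float(v) for v in fields[2:]]
                steps.append({"step": fields[1], **dict(zip(header, values))})
    return steps

for s in parse_step_parameters(SAMPLE_OUT):
    print(s["step"], s["Twist"])
```

For real runs, the text would come from reading the `.out` file written by 3DNA's analysis step rather than from an inline string.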
Genetics, Issue 74, Molecular Biology, Biochemistry, Bioengineering, Biophysics, Genomics, Chemical Biology, Quantitative Biology, conformational analysis, DNA, high-resolution structures, model building, molecular dynamics, nucleic acid structure, RNA, visualization, bioinformatics, three-dimensional, 3DNA, software
Predicting the Effectiveness of Population Replacement Strategy Using Mathematical Modeling
Institutions: University of California, Los Angeles.
Charles Taylor and John Marshall explain the utility of mathematical modeling for evaluating the effectiveness of population replacement strategy. Insight is given into how computational models can provide information on the population dynamics of mosquitoes and the spread of transposable elements through A. gambiae subspecies. The ethical considerations of releasing genetically modified mosquitoes into the wild are discussed.
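The kind of question such models address can be illustrated with a toy deterministic recursion; this is a hypothetical sketch, not the authors' model. A construct that gains a transmission advantage u from transposition but imposes a fitness cost s spreads through the population only when (1 + u)(1 - s) > 1.

```python
def construct_frequency(p0, u, s, generations):
    """Toy haploid recursion for a transposable-element construct.

    p0: starting frequency; u: transmission advantage from transposition;
    s: fitness cost. The construct's effective per-generation weight is
    (1 + u) * (1 - s); it spreads toward fixation only when that exceeds 1.
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        w_bar = p * (1 + u) * (1 - s) + (1 - p)   # population mean weight
        p = p * (1 + u) * (1 - s) / w_bar         # next-generation frequency
        trajectory.append(p)
    return trajectory

# Same transmission advantage, different fitness costs:
spreads = construct_frequency(0.01, u=0.10, s=0.05, generations=300)
fails = construct_frequency(0.01, u=0.10, s=0.15, generations=300)
print(round(spreads[-1], 3), round(fails[-1], 3))
```

Real models of this kind track mosquito population dynamics, multiple element copies, and stochastic effects, which is why simulation rather than a closed-form condition is typically needed.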
Cellular Biology, Issue 5, mosquito, malaria, population, replacement, modeling, infectious disease