Many behavior tests designed to assess learning and memory in rodents, particularly mice, rely on visual cues, food and/or water deprivation, or other aversive stimuli to motivate task acquisition. As animals age, sensory modalities deteriorate; for example, many strains of mice develop hearing deficits or cataracts. Changes in the sensory systems that guide mice during task acquisition are therefore potential confounds when interpreting learning changes in aging animals. Moreover, the use of aversive stimuli to motivate animals to learn tasks is potentially confounding when comparing mice with differential sensitivities to stress. To minimize these types of confounding effects, we have implemented a modified version of the Lashley III maze. This maze relies on route learning, whereby mice learn to navigate the maze via repeated exposure under low-stress conditions (e.g., during the dark phase, without food/water deprivation) until they navigate a path from the start location to a pseudo-home cage with 0 or 1 error(s) on two consecutive trials. We classify this as a low-stress behavior test because it does not rely on aversive stimuli to encourage exploration of the maze and learning of the task. The apparatus consists of a modular start box, a 4-arm maze body, and a goal box. At the end of the goal box is a pseudo-home cage that contains bedding similar to that found in the animal's home cage and is specific to each animal for the duration of maze testing. It has been demonstrated previously that this pseudo-home cage provides sufficient reward to motivate mice to learn to navigate the maze1. Here, we present the visualization of the Lashley III maze procedure in the context of evaluating age-related differences in learning and memory in mice, along with a comparison of learning behavior in two different background strains of mice.
We hope that other investigators interested in evaluating the effects of aging or stress vulnerability in mice will consider this maze an attractive alternative to behavioral tests that involve more stressful learning tasks and/or visual cues.
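The acquisition criterion described above (0 or 1 errors on two consecutive trials) reduces to a simple scan over per-trial error counts. The following is an illustrative sketch only, assuming the experimenter has already tallied errors per trial; the function name and example data are hypothetical and not part of the published protocol:

```python
def trials_to_criterion(errors_per_trial, max_errors=1, consecutive=2):
    """Return the 1-based trial on which the acquisition criterion is met:
    `consecutive` trials in a row with at most `max_errors` errors each.
    Returns None if the criterion is never reached."""
    run = 0
    for trial, errors in enumerate(errors_per_trial, start=1):
        run = run + 1 if errors <= max_errors else 0
        if run == consecutive:
            return trial
    return None

# Hypothetical error counts for one mouse over successive maze exposures
print(trials_to_criterion([7, 5, 4, 2, 1, 0, 1]))  # criterion met on trial 6
```

The same scan generalizes to stricter criteria (e.g., errorless trials, or three consecutive trials) by changing the two parameters.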
A Simple and Efficient Method to Detect Nuclear Factor Activation in Human Neutrophils by Flow Cytometry
Institutions: University of Alberta, Universidad Nacional Autónoma de México.
Neutrophils are the most abundant leukocytes in peripheral blood. These cells are the first to appear at sites of inflammation and infection, thus becoming the first line of defense against invading microorganisms. Neutrophils possess important antimicrobial functions such as phagocytosis, release of lytic enzymes, and production of reactive oxygen species. In addition to these important defense functions, neutrophils perform other tasks in response to infection such as production of proinflammatory cytokines and inhibition of apoptosis. Cytokines recruit other leukocytes that help clear the infection, and inhibition of apoptosis allows the neutrophil to live longer at the site of infection. These functions are regulated at the level of transcription. However, because neutrophils are short-lived cells, the study of transcriptionally regulated responses in these cells cannot be performed with conventional reporter gene methods since there are no efficient techniques for neutrophil transfection. Here, we present a simple and efficient method that allows detection and quantification of nuclear factors in isolated and immunolabeled nuclei by flow cytometry. We describe techniques to isolate pure neutrophils from human peripheral blood, stimulate these cells with anti-receptor antibodies, isolate and immunolabel nuclei, and analyze nuclei by flow cytometry. The method has been successfully used to detect NF-κB and Elk-1 nuclear factors in nuclei from neutrophils and other cell types. Thus, this method represents an option for analyzing activation of transcription factors in isolated nuclei from a variety of cell types.
Immunology, Issue 74, Biochemistry, Infection, Cellular Biology, Molecular Biology, Medicine, Neutrophils, Neutrophil, Monocyte, PMN, NF-κB, ERK, integrin, Signal Transduction, inflammation, flow cytometry, immunolabeling, nuclear factors, cytokines, cells, assay
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Fold specificity, Binding affinity, sequencing
A New Approach for the Comparative Analysis of Multiprotein Complexes Based on 15N Metabolic Labeling and Quantitative Mass Spectrometry
Institutions: University of Münster, Carnegie Institution for Science.
The introduced protocol provides a tool for the analysis of multiprotein complexes in the thylakoid membrane, revealing insights into complex composition under different conditions. In this protocol the approach is demonstrated by comparing the composition of the protein complex responsible for cyclic electron flow (CEF) in Chlamydomonas reinhardtii, isolated from genetically different strains. The procedure comprises the isolation of thylakoid membranes, followed by their separation into multiprotein complexes by sucrose density gradient centrifugation, SDS-PAGE, immunodetection, and comparative, quantitative mass spectrometry (MS) based on differential metabolic labeling (14N/15N) of the analyzed strains. Detergent-solubilized thylakoid membranes are loaded on sucrose density gradients at equal chlorophyll concentration. After ultracentrifugation, the gradients are separated into fractions, which are analyzed by mass spectrometry based on equal volume. This approach allows investigation of the composition of the gradient fractions and, moreover, analysis of the migration behavior of different proteins, with a particular focus on ANR1, CAS, and PGRL1. Furthermore, the method is validated by immunoblotting and by its agreement with previous studies (the identification and PSI-dependent migration of proteins previously described to be part of the CEF supercomplex, such as PGRL1, FNR, and cyt f). Notably, this approach is applicable to a broad range of questions and can be adopted, e.g., for comparative analyses of multiprotein complex composition isolated under distinct environmental conditions.
Microbiology, Issue 85, Sucrose density gradients, Chlamydomonas, multiprotein complexes, 15N metabolic labeling, thylakoids
Barnes Maze Testing Strategies with Small and Large Rodent Models
Institutions: University of Missouri, Food and Drug Administration.
Spatial learning and memory of laboratory rodents is often assessed via navigational ability in mazes, the most popular of which are the water and dry-land (Barnes) mazes. Improved performance over sessions or trials is thought to reflect learning and memory of the escape cage/platform location. Considered less stressful than water mazes, the Barnes maze is a relatively simple design: a circular platform top with several holes equally spaced around the perimeter edge. All but one of the holes are false-bottomed or blind-ending, while one leads to an escape cage. Mildly aversive stimuli (e.g., bright overhead lights) provide motivation to locate the escape cage. Latency to locate the escape cage can be measured during the session; however, additional endpoints typically require video recording. From those video recordings, automated tracking software can generate a variety of endpoints that are similar to those produced in water mazes (e.g., distance traveled, velocity/speed, time spent in the correct quadrant, time spent moving/resting, and confirmation of latency). Type of search strategy (i.e., random, serial, or direct) can be categorized as well. Barnes maze construction and testing methodologies can differ for small rodents, such as mice, and large rodents, such as rats. For example, while extra-maze cues are effective for rats, smaller wild rodents may require intra-maze cues with a visual barrier around the maze. Appropriate stimuli must be identified which motivate the rodent to locate the escape cage. Both Barnes and water mazes can be time consuming, as 4-7 test trials are typically required to detect improved learning and memory performance (e.g., shorter latencies or path lengths to locate the escape platform or cage) and/or differences between experimental groups. Even so, the Barnes maze is a widely employed behavioral assessment of spatial navigational abilities and their potential disruption by genetic or neurobehavioral manipulations or drug/toxicant exposure.
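Several of the video-derived endpoints named above (distance traveled, velocity) reduce to simple geometry on the tracked coordinates, independent of any particular tracking package. As a hedged illustration, assuming samples of the form (time, x, y); the function name and track data are hypothetical:

```python
import math

def path_metrics(samples):
    """Compute total distance traveled and mean velocity from tracking
    samples, where `samples` is a list of (t_seconds, x_cm, y_cm) tuples."""
    # Sum Euclidean step lengths between consecutive position samples
    distance = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:])
    )
    duration = samples[-1][0] - samples[0][0]
    return distance, distance / duration if duration > 0 else 0.0

# Hypothetical 2-second track: two straight segments of 50 cm and 60 cm
track = [(0.0, 0, 0), (1.0, 30, 40), (2.0, 30, 100)]
dist, vel = path_metrics(track)
print(dist, vel)  # 110.0 cm traveled, 55.0 cm/s mean velocity
```

Time in the correct quadrant follows the same pattern: classify each sample's (x, y) by quadrant and accumulate the sampling interval.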
Behavior, Issue 84, spatial navigation, rats, Peromyscus, mice, intra- and extra-maze cues, learning, memory, latency, search strategy, escape motivation
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
A Restriction Enzyme Based Cloning Method to Assess the In vitro Replication Capacity of HIV-1 Subtype C Gag-MJ4 Chimeric Viruses
Institutions: Emory University.
The protective effect of many HLA class I alleles on HIV-1 pathogenesis and disease progression is, in part, attributed to their ability to target conserved portions of the HIV-1 genome that escape with difficulty. Sequence changes attributed to cellular immune pressure arise across the genome during infection and, if found within conserved regions of the genome such as Gag, can affect the ability of the virus to replicate in vitro. Transmission of HLA-linked polymorphisms in Gag to HLA-mismatched recipients has been associated with reduced set point viral loads. We hypothesized that this may be due to a reduced replication capacity of the virus. Here we present a novel method for assessing the in vitro replication of HIV-1 as influenced by the gag gene isolated from acute time points from subtype C infected Zambians. This method uses restriction enzyme based cloning to insert the gag gene into a common subtype C HIV-1 proviral backbone, MJ4. This makes it more appropriate for the study of subtype C sequences than previous recombination based methods that have assessed the in vitro replication of chronically derived gag-pro sequences. Nevertheless, the protocol could be readily modified for studies of viruses from other subtypes. Moreover, this protocol details a robust and reproducible method for assessing the replication capacity of the Gag-MJ4 chimeric viruses on a CEM-based T cell line. This method was utilized for the study of Gag-MJ4 chimeric viruses derived from 149 subtype C acutely infected Zambians and has allowed for the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
A Technique to Screen American Beech for Resistance to the Beech Scale Insect (Cryptococcus fagisuga Lind.)
Institutions: US Forest Service.
Beech bark disease (BBD) results in high levels of initial mortality, leaving behind survivor trees that are greatly weakened and deformed. The disease is initiated by the feeding activities of the invasive beech scale insect, Cryptococcus fagisuga, which create entry points for infection by one of the Neonectria species of fungus. Without scale infestation, there is little opportunity for fungal infection. Using scale eggs to artificially infest healthy trees in heavily BBD-impacted stands demonstrated that these trees were resistant to the scale insect portion of the disease complex1. Here we present a protocol that we have developed, based on the artificial infestation technique by Houston2, which can be used to screen for scale-resistant trees in the field and in smaller potted seedlings and grafts. The identification of scale-resistant trees is an important component of management of BBD through tree improvement programs and silvicultural manipulation.
Environmental Sciences, Issue 87, Forestry, Insects, Disease Resistance, American beech, Fagus grandifolia, beech scale, Cryptococcus fagisuga, resistance, screen, bioassay
Design and Construction of an Urban Runoff Research Facility
Institutions: Texas A&M University, The Scotts Miracle-Gro Company.
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes, which has the potential to carry nutrients and sediments into local streams and lakes where they may contribute to eutrophication. A 1,000 m2 facility was constructed consisting of 24 individual 33.6 m2 field plots, each equipped for measuring total runoff volumes over time and for collecting runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to capture much of the natural variability and therefore provides a better simulation of urban landscape ecosystems.
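The coefficient of variability reported above is simply the sample standard deviation expressed as a percentage of the mean, computed across replicate plots. A minimal sketch; the runoff volumes shown are hypothetical, not data from the facility:

```python
import statistics

def coefficient_of_variability(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical runoff volumes (L) from six replicate plots in one trial
volumes = [120, 95, 150, 80, 130, 105]
print(round(coefficient_of_variability(volumes), 1))  # 22.3 (% CV)
```

Lower CV across plots (as reported for pH, EC, and Na) indicates better between-plot uniformity for that constituent.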
Environmental Sciences, Issue 90, urban runoff, landscapes, home lawns, turfgrass, St. Augustinegrass, carbon, nitrogen, phosphorus, sodium
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time, and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration.
Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release.
The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
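Release rates from core incubations are commonly derived as the slope of P mass in the overlying water versus time, normalized to sediment surface area. The following sketch illustrates that calculation under the assumption of an approximately linear release phase; the time series, core area, and function name are hypothetical, not values from the protocol:

```python
def p_release_rate(times_d, tp_mg, core_area_m2):
    """Estimate sediment P release (mg P m^-2 d^-1) as the least-squares
    slope of total P mass in the overlying water versus time, normalized
    to the sediment surface area of the core."""
    n = len(times_d)
    mean_t = sum(times_d) / n
    mean_p = sum(tp_mg) / n
    # Ordinary least-squares slope of TP mass (mg) against time (days)
    slope = sum((t - mean_t) * (p - mean_p) for t, p in zip(times_d, tp_mg)) \
        / sum((t - mean_t) ** 2 for t in times_d)
    return slope / core_area_m2

# Hypothetical anoxic incubation: TP mass (mg) in overlying water by day,
# for a core of ~7.6 cm inner diameter (area ~0.00456 m^2)
times = [0, 2, 4, 6, 8]
tp = [0.10, 0.18, 0.27, 0.34, 0.42]
print(round(p_release_rate(times, tp, core_area_m2=0.00456), 2))
```

Comparing rates measured before and after treatment (e.g., alum) on paired cores gives a direct estimate of treatment effectiveness.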
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Extraction of Venom and Venom Gland Microdissections from Spiders for Proteomic and Transcriptomic Analyses
Institutions: University of Massachusetts Lowell.
Venoms are chemically complex secretions typically comprising numerous proteins and peptides with varied physiological activities. Functional characterization of venom proteins has important biomedical applications, including the identification of drug leads or probes for cellular receptors. Spiders are the most species rich clade of venomous organisms, but the venoms of only a few species are well-understood, in part due to the difficulty associated with collecting minute quantities of venom from small animals. This paper presents a protocol for the collection of venom from spiders using electrical stimulation, demonstrating the procedure on the Western black widow (Latrodectus hesperus). The collected venom is useful for varied downstream analyses including direct protein identification via mass spectrometry, functional assays, and stimulation of venom gene expression for transcriptomic studies. This technique has the advantage over protocols that isolate venom from whole gland homogenates, which do not separate genuine venom components from cellular proteins that are not secreted as part of the venom. Representative results demonstrate the detection of known venom peptides from the collected sample using mass spectrometry. The venom collection procedure is followed by a protocol for dissecting spider venom glands, with results demonstrating that this leads to the characterization of venom-expressed proteins and peptides at the sequence level.
Genetics, Issue 93, spider, toxin, proteomics, transcriptomics, electrical stimulation, Latrodectus
Application of Two-spotted Spider Mite Tetranychus urticae for Plant-pest Interaction Studies
Institutions: The University of Western Ontario, Instituto de Ciencias de la Vid y el Vino, Ghent University, University of Amsterdam.
The two-spotted spider mite, Tetranychus urticae, is a ubiquitous polyphagous arthropod herbivore that feeds on a remarkably broad array of species, with more than 150 of economic value. It is a major pest of greenhouse crops, especially in Solanaceae (e.g., tomatoes, eggplants, peppers, cucumbers, zucchini) and greenhouse ornamentals (e.g., roses, chrysanthemum, carnations), annual field crops (such as maize, cotton, soybean, and sugar beet), and perennial cultures (alfalfa, strawberries, grapes, citruses, and plums)1,2. In addition to the extreme polyphagy that makes it an important agricultural pest, T. urticae has a tendency to develop resistance to a wide array of insecticides and acaricides that are used for its control3-7. T. urticae is also an excellent experimental organism, as it has a rapid life cycle (7 days at 27 °C) and can be easily maintained at high density in the laboratory. Methods to assay gene expression (including in situ hybridization and antibody staining) and to inactivate expression of spider mite endogenous genes using RNA interference have been developed8-10. Recently, the whole genome sequence of T. urticae has been reported, creating an opportunity to develop this pest herbivore as a model organism with genomic resources equivalent to those that already exist for some of its host plants (Arabidopsis thaliana and the tomato Solanum lycopersicum). Together, these model organisms could provide insights into the molecular bases of plant-pest interactions.
Here, an efficient method for the quick and easy collection of a large number of adult female mites, their application to an experimental plant host, and the assessment of plant damage due to spider mite feeding is described. The presented protocol enables fast and efficient collection of hundreds of individuals at any developmental stage (eggs, larvae, nymphs, adult males, and females) that can be used for subsequent experimental application.
Environmental Sciences, Issue 89, two-spotted spider mite, plant-herbivore interaction, Tetranychus urticae, Arabidopsis thaliana, plant damage analysis, herbivory, plant pests
A Procedure to Observe Context-induced Renewal of Pavlovian-conditioned Alcohol-seeking Behavior in Rats
Institutions: Concordia University.
Environmental contexts in which drugs of abuse are consumed can trigger craving, a subjective Pavlovian-conditioned response that can facilitate drug-seeking behavior and prompt relapse in abstinent drug users. We have developed a procedure to study the behavioral and neural processes that mediate the impact of context on alcohol-seeking behavior in rats. Following acclimation to the taste and pharmacological effects of 15% ethanol in the home cage, male Long-Evans rats receive Pavlovian discrimination training (PDT) in conditioning chambers. In each daily (Mon-Fri) PDT session, 16 trials each of two different 10 sec auditory conditioned stimuli occur. During one stimulus, the CS+, 0.2 ml of 15% ethanol is delivered into a fluid port for oral consumption. The second stimulus, the CS-, is not paired with ethanol. Across sessions, entries into the fluid port during the CS+ increase, whereas entries during the CS- stabilize at a lower level, indicating that a predictive association between the CS+ and ethanol is acquired. During PDT each chamber is equipped with a specific configuration of visual, olfactory and tactile contextual stimuli. Following PDT, extinction training is conducted in the same chamber that is now equipped with a different configuration of contextual stimuli. The CS+ and CS- are presented as before, but ethanol is withheld, which causes a gradual decline in port entries during the CS+. At test, rats are placed back into the PDT context and presented with the CS+ and CS- as before, but without ethanol. This manipulation triggers a robust and selective increase in the number of port entries made during the alcohol predictive CS+, with no change in responding during the CS-. This effect, referred to as context-induced renewal, illustrates the powerful capacity of contexts associated with alcohol consumption to stimulate alcohol-seeking behavior in response to Pavlovian alcohol cues.
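Acquisition and extinction in the procedure above are read out from port entries during the CS+ relative to the CS-. As an illustrative sketch only (the ratio, counts, and function name below are hypothetical and not part of the published procedure), a simple discrimination index summarizes this:

```python
def discrimination_ratio(cs_plus_entries, cs_minus_entries):
    """Normalized CS+ responding: 0.5 indicates no discrimination,
    values near 1.0 indicate most port entries occur during the CS+."""
    total = cs_plus_entries + cs_minus_entries
    return cs_plus_entries / total if total else 0.5

# Hypothetical session totals of fluid-port entries
print(discrimination_ratio(48, 12))  # late PDT: strong CS+ discrimination
print(discrimination_ratio(10, 8))   # late extinction: responding declines
```

A selective rise in CS+ entries at test, with this ratio well above 0.5, would correspond to the context-induced renewal effect described above.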
Behavior, Issue 91, Behavioral neuroscience, alcoholism, relapse, addiction, Pavlovian conditioning, ethanol, reinstatement, discrimination, conditioned approach
Synthetic Spider Silk Production on a Laboratory Scale
Institutions: University of the Pacific.
As society progresses and resources become scarcer, it is becoming increasingly important to cultivate new technologies that engineer next generation biomaterials with high performance properties. The development of these new structural materials must be rapid, cost-efficient and involve processing methodologies and products that are environmentally friendly and sustainable. Spiders spin a multitude of different fiber types with diverse mechanical properties, offering a rich source of next generation engineering materials for biomimicry that rival the best manmade and natural materials. Since the collection of large quantities of natural spider silk is impractical, synthetic silk production has the ability to provide scientists with access to an unlimited supply of threads. Therefore, if the spinning process can be streamlined and perfected, artificial spider fibers have potential uses in a broad range of applications, including body armor, surgical sutures, ropes and cables, tires, strings for musical instruments, and composites for aviation and aerospace technology. In order to advance the synthetic silk production process and to yield fibers that display low variance in their material properties from spin to spin, we developed a wet-spinning protocol that integrates expression of recombinant spider silk proteins in bacteria, purification and concentration of the proteins, followed by fiber extrusion and a mechanical post-spin treatment. This is the first visual representation that reveals a step-by-step process to spin and analyze artificial silk fibers on a laboratory scale. It also provides details to minimize the introduction of variability among fibers spun from the same spinning dope. Collectively, these methods will propel the process of artificial silk production, leading to higher quality fibers that surpass natural spider silks.
Bioengineering, Issue 65, Biochemistry, Spider silk, fibroins, synthetic spider silk, silk-producing glands, wet-spinning, post-spin draw
A Novel Method for Assessing Proximal and Distal Forelimb Function in the Rat: the Irvine, Beatties and Bresnahan (IBB) Forelimb Scale
Institutions: University of California, San Francisco.
Several experimental models of cervical spinal cord injury (SCI) have been developed recently to assess the consequences of damage to this level of the spinal cord (Pearse et al., 2005; Gensel et al., 2006; Anderson et al., 2009), as the majority of human SCI occur here (Young, 2010; www.sci-info-pages.com). Behavioral deficits include loss of forelimb function due to damage to the white matter, affecting both descending motor and ascending sensory systems, and to the gray matter containing the segmental circuitry for processing sensory input and motor output for the forelimb. Additionally, a key priority for human patients with cervical SCI is restoration of hand/arm function (Anderson, 2004). Thus, outcome measures that assess both proximal and distal forelimb function are needed. Although there are several behavioral assays that are sensitive to different aspects of forelimb recovery in experimental models of cervical SCI (Girgis et al., 2007; Gensel et al., 2006; Ballerman et al., 2001; Metz and Whishaw, 2000; Bertelli and Mira, 1993; Montoya et al., 1991; Whishaw and Pellis, 1990), few techniques provide detailed information on the recovery of fine motor control and digit movement.
The current measurement technique, the Irvine, Beatties and Bresnahan forelimb scale (IBB), can detect recovery of both proximal and distal forelimb function including digit movements during a naturally occurring behavior that does not require extensive training or deprivation to enhance motivation. The IBB was generated by observing recovery after a unilateral C6 SCI, and involves video recording of animals eating two differently shaped cereals (spherical and doughnut) of a consistent size. These videos were then used to assess features of forelimb use, such as joint position, object support, digit movement and grasping technique.
The IBB, like other forelimb behavioral tasks, shows a consistent pattern of recovery that is sensitive to injury severity. Furthermore, the IBB scale could be used to assess recovery following other types of injury that impact normal forelimb function.
Neuroscience, Issue 46, spinal cord injury, recovery of function, forelimb function, neurological test, cervical injuries
Microdissection of Black Widow Spider Silk-producing Glands
Institutions: University of the Pacific.
Modern spiders spin high-performance silk fibers with a broad range of biological functions, including locomotion, prey capture and protection of developing offspring 1,2
. Spiders accomplish these tasks by spinning several distinct fiber types that have diverse mechanical properties. Such specialization of fiber types has occurred through the evolution of different silk-producing glands, which function as small biofactories. These biofactories manufacture and store large quantities of silk proteins for fiber production. Through a complex series of biochemical events, these silk proteins are converted from a liquid into a solid material upon extrusion.
Mechanical studies have demonstrated that spider silks are stronger than high-tensile steel3. Analyses of the relationship between the structure and function of spider silk threads have revealed that spider silk consists largely of proteins, or fibroins, that have block repeats within their protein sequences4. Common molecular signatures that contribute to the incredible tensile strength and extensibility of spider silks are being unraveled through the analysis of translated silk cDNAs. Given the extraordinary material properties of spider silks, research labs across the globe are racing to understand and mimic the spinning process to produce synthetic silk fibers for commercial, military and industrial applications. One of the main challenges to spinning artificial spider silk in the research lab is a complete understanding of the biochemical processes that occur during extrusion of the fibers from the silk-producing glands.
Here we present a method for the isolation of the seven different silk-producing glands from the cob-weaving black widow spider: the major and minor ampullate glands [manufacture dragline and scaffolding silk]5,6, tubuliform [synthesizes egg case silk]7,8, flagelliform [unknown function in cob-weavers], aggregate [makes glue silk], aciniform [synthesizes prey wrapping and egg case threads]9 and pyriform [produces attachment disc silk]10. This approach is based upon anesthetizing the spider with carbon dioxide gas, subsequent separation of the cephalothorax from the abdomen, and microdissection of the abdomen to obtain the silk-producing glands. Following separation of the different silk-producing glands, these tissues can be used to retrieve macromolecules for distinct biochemical analyses, including quantitative real-time PCR, northern and western blotting, and mass spectrometry (MS or MS/MS) analyses to identify new silk protein sequences or to search for proteins that participate in the silk assembly pathway; alternatively, the intact tissue can be used for cell culture or histological experiments.
Cellular Biology, Issue 47, Spider silk, silk-producing glands, fibroins, structural proteins, spidroins
Single Particle Electron Microscopy Reconstruction of the Exosome Complex Using the Random Conical Tilt Method
Institutions: Yale University.
Single particle electron microscopy (EM) reconstruction has recently become a popular tool for obtaining the three-dimensional (3D) structure of large macromolecular complexes. Compared to X-ray crystallography, it has some unique advantages. First, single particle EM reconstruction does not require crystallization of the protein sample, which is the bottleneck in X-ray crystallography, especially for large macromolecular complexes. Second, it does not require large amounts of protein: whereas crystallization typically demands milligrams of protein, single particle EM reconstruction needs only several microliters of protein solution at nanomolar concentrations when the negative staining EM method is used. However, with the exception of a few macromolecular assemblies with high symmetry, single particle EM is limited to relatively low resolution (worse than 1 nm) for many specimens, especially those without symmetry. The technique is also limited by the size of the molecules under study, with practical lower limits of roughly 100 kDa for negatively stained specimens and 300 kDa for frozen-hydrated specimens.
For a new sample of unknown structure, we generally use a heavy metal solution to embed the molecules by negative staining. The specimen is then examined in a transmission electron microscope to take two-dimensional (2D) micrographs of the molecules. Ideally, the protein molecules have a homogeneous 3D structure but exhibit different orientations in the micrographs. These micrographs are digitized and processed in computers as "single particles". Using 2D alignment and classification techniques, homogeneous molecules in the same views are clustered into classes, and their averages enhance the signal of the molecule's 2D shapes. Once the particles have been assigned the proper relative orientations (Euler angles), the 2D particle images can be reconstructed into a 3D volume.
In single particle 3D reconstruction, an essential step is to correctly assign the proper orientation of each single particle. There are several methods to assign the view for each particle, including the angular reconstitution1 and random conical tilt (RCT) methods2. In this protocol, we describe our practice for obtaining the 3D reconstruction of the yeast exosome complex using negative staining EM and RCT. It should be noted that our protocol for electron microscopy and image processing follows the basic principle of RCT but is not the only way to perform the method. We first describe how to embed the protein sample in a layer of uranyl formate with a thickness comparable to the protein size, using a holey carbon grid covered with a layer of continuous thin carbon film. The specimen is then inserted into a transmission electron microscope to collect untilted (0-degree) and tilted (55-degree) pairs of micrographs, which are used later for processing and obtaining an initial 3D model of the yeast exosome. To this end, we perform RCT and then refine the initial 3D model using the projection matching refinement method3.
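The geometry behind RCT view assignment can be sketched in a few lines. The Python snippet below is a simplification for illustration, not part of the published protocol: each particle in a 2D class from the untilted micrograph passes its in-plane alignment angle to its tilted partner, while all partners share the known goniometer tilt, so the assigned views form a cone (hence "random conical tilt"):

```python
TILT_DEG = 55.0  # goniometer tilt used for the second exposure

def rct_euler_angles(inplane_rotations_deg, tilt_deg=TILT_DEG):
    """For each particle of one 2D class from the untilted micrograph,
    assign Euler angles (rot, tilt, psi) to its tilted-image partner.
    All tilted partners of a class share the same tilt angle; the
    in-plane rotation found during 2D alignment becomes the azimuthal
    angle, so the views lie on a cone around the class's direction."""
    return [(alpha % 360.0, tilt_deg, 0.0) for alpha in inplane_rotations_deg]

# a hypothetical class of four particles with different in-plane rotations
views = rct_euler_angles([0.0, 90.0, 180.0, 270.0])
```

Because the untilted particles cover many random in-plane rotations, the tilted partners sample many azimuths on the cone, which is what makes a 3D reconstruction from a single tilt pair possible.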
Structural Biology, Issue 49, Electron microscopy, single particle three-dimensional reconstruction, exosome complex, negative staining
Isolation of Fidelity Variants of RNA Viruses and Characterization of Virus Mutation Frequency
Institutions: Institut Pasteur .
RNA viruses use RNA-dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of the extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measurement of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity-altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity-altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
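The mutation frequency itself is a simple ratio that can be computed from the sequencing data. A minimal sketch, assuming mutations per clone have already been counted against a consensus sequence (the clone counts and region length below are hypothetical):

```python
def mutation_frequency(mutation_counts, clone_length_nt):
    """Mutation frequency per 10^4 nucleotides sequenced.
    mutation_counts: mutations found in each sequenced clone.
    clone_length_nt: length (nt) of the sequenced region per clone."""
    total_mutations = sum(mutation_counts)
    total_nt = len(mutation_counts) * clone_length_nt
    return total_mutations / total_nt * 1e4  # mutations per 10^4 nt

# e.g. 24 clones of an 800-nt region, 10 mutations in total
counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0,
          0, 1, 0, 0, 0, 2, 0, 0, 1, 0, 1, 0]
freq = mutation_frequency(counts, 800)
```

Comparing such frequencies between a candidate variant and wild-type virus (with enough clones for statistical power) is what identifies a fidelity-altering mutation.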
Immunology, Issue 52, Polymerase fidelity, RNA virus, mutation frequency, mutagen, RNA polymerase, viral evolution
Structure of HIV-1 Capsid Assemblies by Cryo-electron Microscopy and Iterative Helical Real-space Reconstruction
Institutions: University of Pittsburgh School of Medicine.
Cryo-electron microscopy (cryo-EM), combined with image processing, is an increasingly powerful tool for structure determination of macromolecular protein complexes and assemblies. In fact, single particle electron microscopy1 and two-dimensional (2D) electron crystallography2 have become relatively routine methodologies, and a large number of structures have been solved using these methods. At the same time, image processing and three-dimensional (3D) reconstruction of helical objects have developed rapidly, especially the iterative helical real-space reconstruction (IHRSR) method3, which uses single particle analysis tools in conjunction with helical symmetry. Many biological entities function in filamentous or helical forms, including actin filaments4, amyloid fibers6, tobacco mosaic viruses7, and bacterial flagella8. Because a 3D density map of a helical entity can be attained from a single projection image, in contrast to the many images required for 3D reconstruction of a non-helical object, the IHRSR method makes structural analysis of such flexible and disordered helical assemblies attainable.
In this video article, we provide detailed protocols for obtaining a 3D density map of a helical protein assembly (the HIV-1 capsid9 is our example), including protocols for cryo-EM specimen preparation, low dose data collection by cryo-EM, indexing of helical diffraction patterns, and image processing and 3D reconstruction using IHRSR. Compared to other techniques, cryo-EM offers optimal specimen preservation under near native conditions. Samples are embedded in a thin layer of vitreous ice by rapid freezing and imaged in electron microscopes at liquid nitrogen temperature, under low dose conditions to minimize radiation damage. Sample images are thus obtained under near native conditions, at the expense of low signal and low contrast in the recorded micrographs. Fortunately, the process of helical reconstruction has largely been automated, with the exception of indexing the helical diffraction pattern. Here, we describe an approach to index helical structure and determine helical symmetries (helical parameters) from digitized micrographs, an essential step for 3D helical reconstruction. Briefly, we obtain an initial 3D density map by applying the IHRSR method. This initial map is then iteratively refined by introducing constraints on the alignment parameters of each segment, thus controlling their degrees of freedom. Further improvement is achieved by correcting for the contrast transfer function (CTF) of the electron microscope (amplitude and phase correction) and by optimizing the helical symmetry of the assembly.
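The helical symmetry being optimized is defined by just two parameters: the axial rise and the azimuthal twist per subunit. As a minimal illustration (the parameter values below are arbitrary, not measured values for the HIV-1 capsid), symmetry-related subunit positions can be generated as:

```python
import math

def helical_lattice(rise, twist_deg, radius, n_subunits):
    """Place subunit centers on a helix defined by a rise (axial shift
    per subunit) and a twist (azimuthal rotation per subunit).
    Subunit k sits at angle k*twist and height k*rise."""
    points = []
    for k in range(n_subunits):
        phi = math.radians(k * twist_deg)
        points.append((radius * math.cos(phi),
                       radius * math.sin(phi),
                       k * rise))
    return points

# illustrative values: 7 A rise, 30 deg twist (12 subunits per turn)
pts = helical_lattice(rise=7.0, twist_deg=30.0, radius=200.0, n_subunits=12)
```

In IHRSR, imposing this symmetry on the density map each cycle is what lets every segment of the filament contribute to the same asymmetric unit.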
Immunology, Issue 54, cryo-electron microscopy, helical indexing, helical real-space reconstruction, tubular assemblies, HIV-1 capsid
A General Method for Evaluating Incubation of Sucrose Craving in Rats
Institutions: Western Washington University.
For someone on a food-restricted diet, food craving in response to food-paired cues may serve as a key behavioral transition point between abstinence and relapse to food taking1. Food craving conceptualized in this way is akin to drug craving in response to drug-paired cues. A rich literature has been developed around understanding the behavioral and neurobiological determinants of drug craving; we and others have recently been focusing on translating techniques from basic addiction research to better understand addiction-like behaviors related to food2-4.
As done in previous studies of drug craving, we examine sucrose craving behavior by utilizing a rat model of relapse. In this model, rats self-administer either drug or food in sessions over several days. In a session, lever responding delivers the reward along with a tone+light stimulus. Craving behavior is then operationally defined as responding in a subsequent session in which the reward is not available. Rats will reliably respond for the tone+light stimulus, likely due to its acquired conditioned reinforcing properties5. This behavior is sometimes referred to as sucrose seeking or cue reactivity; in the present discussion we use the term "sucrose craving" to subsume both constructs.
In the past decade, we have focused on how the length of time following reward self-administration influences reward craving. Interestingly, rats increase responding for the reward-paired cue over the course of several weeks of forced abstinence. This "incubation of craving" is observed in rats that have self-administered either food or drugs of abuse4,6. This time-dependent increase in craving identified in the animal model may be highly relevant to human drug and food addiction behaviors.
Here we present a protocol for assessing incubation of sucrose craving in rats. Variants of the procedure will be indicated where craving is assessed as responding for a discrete sucrose-paired cue following extinction of lever pressing within the sucrose self-administration context (Extinction without cues) or as responding for sucrose-paired cues in a general extinction context (Extinction with cues).
Neuroscience, Issue 57, addiction, craving, cue-reactivity, extinction, reinstatement, relapse, sucrose seeking
Behavioral Determination of Stimulus Pair Discrimination of Auditory Acoustic and Electrical Stimuli Using a Classical Conditioning and Heart-rate Approach
Institutions: La Trobe University.
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants1-3 and auditory midbrain implants4,5. While acute experiments can give initial insight into the effectiveness of an implant, testing chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced by the implanted devices6,7.
Several techniques, such as reward-based operant conditioning6-8, conditioned avoidance9-11, and classical fear conditioning12, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test several stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measurements over repeated stimuli (a potential constraint when physiological measures are employed).
Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus, and discrimination between two stimuli. Heart-rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and test discrimination between the member stimuli of each pair.
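As an illustration of how heart rate can serve as the fear measure, a per-trial response can be expressed as the relative change in heart rate during the conditioned stimulus versus a preceding baseline window, averaged over the trials of a session. The sketch below is hypothetical (the direction and magnitude of the heart-rate change depend on the species and preparation, and the published protocol may score responses differently):

```python
def fear_response(baseline_bpm, cs_bpm):
    """Fear response score for one trial: relative change in heart rate
    during the conditioned stimulus (CS) versus the immediately
    preceding baseline window. Negative values indicate a slowing."""
    return (cs_bpm - baseline_bpm) / baseline_bpm

def session_score(trials):
    """Average response over a session; trials is a list of
    (baseline_bpm, cs_bpm) pairs, e.g. 48 per testing session."""
    scores = [fear_response(b, c) for b, c in trials]
    return sum(scores) / len(scores)

# hypothetical trials in which the CS lowers heart rate by about 10%
demo = [(400.0, 360.0), (410.0, 369.0), (390.0, 351.0)]
```

Comparing session scores between the members of a stimulus pair is then the basis for the discrimination test described above.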
This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear Nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode into place for chronic use is likewise shown.
Neuroscience, Issue 64, Physiology, auditory, hearing, brainstem, stimulation, rat, abi
Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods
Institutions: Yale School of Medicine, Yale School of Medicine, New York University , New York University , New York University .
Most of the choices we make have uncertain consequences. In some cases the probabilities for different possible outcomes are precisely known, a condition termed "risky". In other cases, when probabilities cannot be estimated, the condition is described as "ambiguous". While most people are averse to both risk and ambiguity1,2, the degree of these aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method3 to assess the neural representation of the subjective values of risky and ambiguous options4. This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations.
In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial, subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective value that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place; the ambiguous options therefore remain ambiguous and risk attitudes stay stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it alone were the one trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
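One common way to parameterize such subjective values in the experimental-economics literature (an assumption for illustration here, not necessarily the exact model used in this protocol) discounts the winning probability by the ambiguity level and raises the amount to a power capturing risk attitude:

```python
def subjective_value(amount, probability, ambiguity, alpha, beta):
    """Illustrative model: SV = (p - beta * A/2) * v**alpha.
    alpha < 1 indicates risk aversion; beta > 0 indicates ambiguity
    aversion (the ambiguity level A shrinks the effective probability).
    Both parameters are fit per subject from their choices."""
    return (probability - beta * ambiguity / 2.0) * amount ** alpha

# a risk-neutral, ambiguity-neutral subject (alpha=1, beta=0) values a
# 50% chance of $20 at its expected value
sv = subjective_value(20.0, 0.5, 0.0, alpha=1.0, beta=0.0)
```

Once alpha and beta are estimated for a subject, the model assigns each trial's options a subjective value, which can then be entered as a parametric regressor in the fMRI analysis.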
Neuroscience, Issue 67, Medicine, Molecular Biology, fMRI, magnetic resonance imaging, decision-making, value, uncertainty, risk, ambiguity
Implantation of Radiotelemetry Transmitters Yielding Data on ECG, Heart Rate, Core Body Temperature and Activity in Free-moving Laboratory Mice
Institutions: University Hospital Zurich, University of Zurich.
The laboratory mouse is the animal species of choice for most biomedical research, in both the academic sphere and the pharmaceutical industry. Mice are a manageable size and relatively easy to house. These factors, together with the availability of a wealth of spontaneous and experimentally induced mutants, make laboratory mice ideally suited to a wide variety of research areas.
In cardiovascular, pharmacological and toxicological research, accurate measurement of parameters relating to the circulatory system of laboratory animals is often required. Determination of heart rate, heart rate variability, and the duration of PQ and QT intervals is based on electrocardiogram (ECG) recordings. However, obtaining reliable ECG curves as well as physiological data such as core body temperature in mice can be difficult using conventional measurement techniques, which require connecting sensors and lead wires to a restrained, tethered, or even anaesthetized animal. Data obtained in this fashion must be interpreted with caution, as it is well known that restraint and anesthesia can have a major artifactual influence on physiological parameters1,2.
Radiotelemetry enables data to be collected from conscious and untethered animals. Measurements can be conducted even in freely moving animals, without requiring the investigator to be in the proximity of the animal. Thus, known sources of artifacts are avoided, and accurate and reliable measurements are assured. This methodology also reduces interanimal variability, thereby reducing the number of animals used and rendering this technology the most humane method of monitoring physiological parameters in laboratory animals3,4. Constant advancements in data acquisition technology and implant miniaturization mean that it is now possible to record physiological parameters and locomotor activity continuously and in real time over longer periods such as hours, days or even weeks3,5.
Here, we describe a surgical technique for implantation of a commercially available telemetry transmitter used for continuous measurement of core body temperature, locomotor activity and biopotential (i.e. one-lead ECG), from which heart rate, heart rate variability, and PQ and QT intervals can be established in free-roaming, untethered mice. We also present pre-operative procedures and protocols for post-operative intensive care and pain treatment that improve recovery, well-being and survival rates in implanted mice5,6.
Medicine, Issue 57, telemetry, mouse, mice, transmitter implantation, humane endpoint, post-operative care, intensive care, recovery, surgery
Assessing Burrowing, Nest Construction, and Hoarding in Mice
Institutions: University of Oxford .
Deterioration in the ability to perform "Activities of daily living" (ADL) is an early sign of Alzheimer's disease (AD). Preclinical behavioural screening of possible treatments for AD currently focuses largely on cognitive testing, which frequently demands expensive equipment and considerable experimenter time. However, human episodic memory (the most severely affected aspect of memory in AD) is different to rodent memory, which seems to be largely non-episodic. Therefore the present ways of screening for new AD treatments in rodents are intrinsically unlikely to succeed. A new approach to preclinical screening would be to characterise the ADL of mice. Fortuitously, several such assays have recently been developed at Oxford, and here the three most sensitive and well-characterised are presented.
Burrowing was first developed in Oxford13. It evolved from a need to develop a mouse hoarding paradigm. Most published rodent hoarding paradigms required a distant food source to be linked to the home cage by a connecting passage. This would involve modifying the home cage as well as making a mouse-proof connecting passage and food source. So it was considered whether it would be possible to put the food source inside the cage. It was found that if a container was placed on the floor it was emptied by the next morning. The food pellets were, however, simply deposited in a heap at the container entrance, rather than placed in a discrete place away from the container, as might be expected if the mice were truly hoarding them. Close inspection showed that the mice were performing digging ("burrowing") movements, not carrying the pellets in their mouths to a selected place as they would if truly hoarding them6.
Food pellets are not an essential substrate for burrowing; mice will empty tubes filled with sand, gravel, or even soiled bedding from their own cage. Moreover, they will empty a full tube even if an empty one is placed next to it8. Several nesting protocols exist in the literature. The present Oxford one simplifies the procedure and has a well-defined scoring system for nest quality5.
A hoarding paradigm was later developed in which the mice, rather than hoarding back to the real home cage, were adapted to living in the "home base" of a hoarding apparatus. This home base was connected to a tube made of wire mesh, the distal end of which contained the food source. This arrangement proved to yield good hoarding behaviour, as long as the mice were adapted to living in the "home base" during the day and only allowed to enter the hoarding tube at night.
Neuroscience, Issue 59, Mice, murine, burrowing, nesting, hoarding, hippocampus, Alzheimer’s, prion, species-typical, welfare, 3Rs
Improving IV Insulin Administration in a Community Hospital
Institutions: Wyoming Medical Center.
Diabetes mellitus is a major independent risk factor for increased morbidity and mortality in the hospitalized patient, and elevated blood glucose concentrations, even in non-diabetic patients, predicts poor outcomes.1-4
The 2008 consensus statement by the American Association of Clinical Endocrinologists (AACE) and the American Diabetes Association (ADA) states that "hyperglycemia in hospitalized patients, irrespective of its cause, is unequivocally associated with adverse outcomes."5
It is important to recognize that hyperglycemia occurs in patients with known or undiagnosed diabetes as well as during acute illness in those with previously normal glucose tolerance.
The Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study involved over six thousand adult intensive care unit (ICU) patients who were randomized to intensive glucose control or conventional glucose control.6
Surprisingly, this trial found that intensive glucose control increased the risk of mortality by 14% (odds ratio, 1.14; p=0.02). In addition, there was an increased prevalence of severe hypoglycemia in the intensive control group compared with the conventional control group (6.8% vs. 0.5%, respectively; p<0.001). From this pivotal trial and two others,7,8 Wyoming Medical Center (WMC) recognized the importance of controlling hyperglycemia in the hospitalized patient while avoiding the negative impact of resultant hypoglycemia.
Despite multiple revisions of a paper-based IV insulin protocol, analysis of data from use of the paper protocol at WMC showed that, in terms of achieving normoglycemia while minimizing hypoglycemia, results were suboptimal. Therefore, through a systematic implementation plan, monitoring of patient blood glucose levels was switched from a paper IV insulin protocol to a computerized glucose management system. By comparing blood glucose levels under the paper protocol with those under the computerized system, it was determined that, overall, the computerized glucose management system resulted in more rapid and tighter glucose control than the traditional paper protocol. Specifically, a substantial increase in the time spent within the target blood glucose concentration range, as well as a decrease in the prevalence of severe hypoglycemia (BG < 40 mg/dL), clinical hypoglycemia (BG < 70 mg/dL), and hyperglycemia (BG > 180 mg/dL), was observed in the first five months after implementation of the computerized glucose management system. The computerized system achieved target concentrations in greater than 75% of all readings while minimizing the risk of hypoglycemia. The prevalence of hypoglycemia (BG < 70 mg/dL) with the computerized glucose management system was well under 1%.
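The blood glucose metrics reported here are straightforward to compute from a series of readings. A minimal sketch using the same thresholds (the readings below are hypothetical, not WMC data):

```python
def glucose_summary(readings_mg_dl, target=(70, 180)):
    """Fraction of readings in the target range, plus prevalence of
    severe hypoglycemia (<40 mg/dL), clinical hypoglycemia (<70 mg/dL),
    and hyperglycemia (>180 mg/dL)."""
    n = len(readings_mg_dl)
    lo, hi = target
    return {
        "in_range": sum(lo <= bg <= hi for bg in readings_mg_dl) / n,
        "severe_hypo": sum(bg < 40 for bg in readings_mg_dl) / n,
        "clinical_hypo": sum(bg < 70 for bg in readings_mg_dl) / n,
        "hyper": sum(bg > hi for bg in readings_mg_dl) / n,
    }

# hypothetical hourly readings for one patient
stats = glucose_summary([55, 110, 145, 190, 102, 98, 240, 130, 88, 61])
```

Aggregating these fractions across patients and months is how the before/after comparison of the paper and computerized protocols can be quantified.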
Medicine, Issue 64, Physiology, Computerized glucose management, Endotool, hypoglycemia, hyperglycemia, diabetes, IV insulin, paper protocol, glucose control
Minimal Erythema Dose (MED) Testing
Institutions: Fox Chase Cancer Center , University of Pennsylvania , Drexel University , Fox Chase Cancer Center , The Cancer Institute of New Jersey.
Ultraviolet radiation (UV) therapy is sometimes used as a treatment for various common skin conditions, including psoriasis, acne, and eczema. The dosage of UV light is prescribed according to an individual's skin sensitivity. Thus, to establish the proper dosage of UV light to administer to a patient, the patient is sometimes screened to determine a minimal erythema dose (MED), which is the amount of UV radiation that will produce minimal erythema (sunburn or redness caused by engorgement of capillaries) of an individual's skin within a few hours following exposure. This article describes how to conduct minimal erythema dose (MED) testing. There is currently no easy way to determine an appropriate UV dose for clinical or research purposes without conducting formal MED testing, requiring observation hours after testing, or informal trial and error testing with the risks of under- or over-dosing. However, some alternative methods are discussed.
Medicine, Issue 75, Anatomy, Physiology, Dermatology, Analytical, Diagnostic, Therapeutic Techniques, Equipment, Health Care, Minimal erythema dose (MED) testing, skin sensitivity, ultraviolet radiation, spectrophotometry, UV exposure, psoriasis, acne, eczema, clinical techniques