Researchers across incredibly diverse fields are applying phylogenetics to their research questions. However, many of them are new to the topic, which presents inherent challenges. Here we compile a practical introduction to phylogenetics for nonexperts. We outline, in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user guide for similarity search tools via online interfaces as well as local executables. Next, we explore programs for generating multiple sequence alignments, followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria, and finally describe tools for visualizing phylogenetic trees. While this is by no means an exhaustive description of phylogenetic approaches, it provides the reader with practical starting information on key software applications commonly utilized by phylogeneticists. We envision this article serving as a practical training tool for researchers embarking on phylogenetic studies, and as an educational resource that could be incorporated into a classroom or teaching lab.
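As a toy illustration of the kind of calculation that underlies model-based distance estimation in such pipelines (the sequences and values below are hypothetical, not drawn from the article), the Jukes-Cantor correction converts a raw proportion of mismatched sites into an evolutionary distance:

```python
import math

def p_distance(seq_a, seq_b):
    # observed proportion of differing sites between two aligned sequences
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

def jukes_cantor(p):
    # JC69-corrected evolutionary distance: d = -(3/4) * ln(1 - 4p/3)
    return -0.75 * math.log(1 - 4.0 * p / 3.0)

# hypothetical aligned sequences differing at one of twelve sites
p = p_distance("AGCTAGCTAGCT", "AGCTAGCTAGTT")
d = jukes_cantor(p)
```

The corrected distance d is always slightly larger than the raw proportion p, because it accounts for multiple substitutions at the same site; richer models of evolution (the "best-fit models" selected in the pipeline) generalize this same idea.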
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score that estimates how accurate the predictions are in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
A Novel Capsulorhexis Technique Using Shearing Forces with Cystotome
Institutions: Hairmyres Hospital, NHS Lanarkshire, Department of Ophthalmology, South Devon Healthcare NHS Trust.
To demonstrate a capsulorhexis technique using predominantly shearing forces with a cystotome on a virtual reality simulator and on a human eye.
Our technique involves creating the initial anterior capsular tear with a cystotome to raise a flap. The flap is left unfolded on the lens surface. The cystotome tip is tilted horizontally and is engaged on the flap near the leading edge of the tear. The cystotome is moved in a circular fashion to direct the vector forces. The loose flap is constantly swept towards the centre so that it does not obscure the view of the tearing edge.
Our technique has the advantage of reducing corneal wound distortion and subsequent anterior chamber collapse. The capsulorhexis flap is moved away from the tear leading edge allowing better visualisation of the direction of tear. This technique offers superior control of the capsulorhexis by allowing the surgeon to change the direction of the tear to achieve the desired capsulorhexis size.
The EYESI Surgical Simulator is a realistic training platform for surgeons to practice complex capsulorhexis techniques. The shearing forces technique is a suitable alternative and, in some cases, a far better technique for achieving the desired capsulorhexis.
JoVE Medicine, Issue 39, Phacoemulsification surgery, cataract surgery, capsulorhexis, capsulotomy, technique, Continuous curvilinear capsulorhexis, cystotome
Controlling Parkinson's Disease With Adaptive Deep Brain Stimulation
Institutions: University of Oxford, UCL Institute of Neurology.
Adaptive deep brain stimulation (aDBS) has the potential to improve the treatment of Parkinson's disease by optimizing stimulation in real time according to fluctuating disease and medication state. In the present realization of adaptive DBS we record and stimulate from the DBS electrodes implanted in the subthalamic nucleus of patients with Parkinson's disease in the early post-operative period. Local field potentials are analogue filtered between 3 and 47 Hz before being passed to a data acquisition unit, where they are digitally filtered again around the patient-specific beta peak, rectified and smoothed to give an online reading of the beta amplitude. A threshold for beta amplitude is set heuristically, which, if crossed, passes a trigger signal to the stimulator. The stimulator then ramps up stimulation to a pre-determined clinically effective voltage over 250 msec and continues to stimulate until the beta amplitude again falls below threshold. Stimulation continues in this manner, with brief episodes of ramped DBS during periods of heightened beta power.
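The rectify-smooth-threshold logic described above can be sketched in a few lines. This is a schematic reconstruction under assumed parameters (window length, threshold, and the toy trace are hypothetical), not the clinical implementation:

```python
def beta_flags(beta_band, threshold, smooth_n=5):
    # rectify the beta-band signal, smooth it with a moving average,
    # and flag samples where the resulting amplitude exceeds threshold
    rectified = [abs(s) for s in beta_band]
    flags = []
    for i in range(len(rectified)):
        window = rectified[max(0, i - smooth_n + 1):i + 1]
        amplitude = sum(window) / len(window)
        flags.append(amplitude > threshold)  # True -> trigger/ramp stimulation
    return flags

# hypothetical trace: quiet baseline, a beta burst, then baseline again
trace = [0.1, -0.1] * 5 + [1.0, -1.0] * 5 + [0.1, -0.1] * 5
flags = beta_flags(trace, threshold=0.5)
```

In the real system the flag would gate a ramped stimulation voltage rather than a boolean, and the filtering is done in dedicated hardware, but the control principle is the same.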
Clinical efficacy is assessed after a minimum period of stabilization (5 min) through the unblinded and blinded video assessment of motor function using a selection of scores from the Unified Parkinson's Disease Rating Scale (UPDRS). Recent work has demonstrated a reduction in power consumption with aDBS as well as an improvement in clinical scores compared to conventional DBS. Chronic aDBS could now be trialed in Parkinsonism.
Medicine, Issue 89, Parkinson's, deep brain stimulation, adaptive, closed loop
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
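For the simplest of the automated approaches, global thresholding followed by connected-component labeling, a minimal pure-Python sketch might look like the following (the toy 2D slice is hypothetical; real pipelines operate on large 3D volumes with dedicated image-analysis libraries):

```python
def segment(image, threshold):
    # threshold the image, then label 4-connected foreground
    # components with an iterative flood fill
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    n_regions = 0
    for i in range(h):
        for j in range(w):
            if image[i][j] >= threshold and labels[i][j] == 0:
                n_regions += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and labels[y][x] == 0 and image[y][x] >= threshold):
                        labels[y][x] = n_regions
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n_regions

# hypothetical 2D slice containing two bright objects
slice_ = [[0, 9, 0, 0],
          [0, 9, 0, 8],
          [0, 0, 0, 8]]
labels, n_regions = segment(slice_, threshold=5)
```

Whether such a simple scheme suffices depends exactly on the characteristics listed above: it works when signal-to-noise is high and features are well separated, and fails on crowded or heterogeneous data, which is what motivates the triage scheme.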
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues change dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
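Source analysis of this kind amounts to solving a regularized inverse problem. As a generic illustration (the specific solver and regularization used at the London Baby Lab are not detailed here), the minimum-norm estimate mentioned in the keywords has a simple closed form:

```latex
% forward model: sensor data x, lead field L (from the head model),
% cortical source amplitudes s, measurement noise n
x = L s + n
% minimum-norm estimate with Tikhonov regularization parameter \lambda
\hat{s} = L^{\top}\left(L L^{\top} + \lambda I\right)^{-1} x
```

The lead field L is where the anatomical information enters: an age-appropriate head model yields a different L, and therefore different source estimates, than an adult template.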
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
Determination of Protein-ligand Interactions Using Differential Scanning Fluorimetry
Institutions: University of Exeter.
A wide range of methods is currently available for determining the dissociation constant between a protein and interacting small molecules. However, most of these require access to specialist equipment, and often require a degree of expertise to effectively establish reliable experiments and analyze data. Differential scanning fluorimetry (DSF) is increasingly used as a robust method for initial screening of proteins for interacting small molecules, either for identifying physiological partners or for hit discovery. This technique has the advantage that it requires only a PCR machine suitable for quantitative PCR, so suitable instrumentation is available in most institutions; an excellent range of protocols is already available; and there are strong precedents in the literature for multiple uses of the method. Past work has proposed several means of calculating dissociation constants from DSF data, but these are mathematically demanding. Here, we demonstrate a method for estimating dissociation constants from a moderate amount of DSF experimental data. These data can typically be collected and analyzed within a single day. We demonstrate how different models can be used to fit data collected from simple binding events, and where cooperative binding or independent binding sites are present. Finally, we present an example of data analysis in a case where standard models do not apply. These methods are illustrated with data collected on commercially available control proteins and two proteins from our research program. Overall, our method provides a straightforward way for researchers to rapidly gain further insight into protein-ligand interactions using DSF.
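For the simplest single-site case, the melting-temperature shift can be modeled as a hyperbolic function of ligand concentration and a Kd recovered by least squares. The sketch below is illustrative only (synthetic values, and a brute-force grid search rather than the authors' fitting procedure):

```python
def tm_model(conc, tm0, d_tm, kd):
    # single-site binding: Tm rises hyperbolically from tm0 toward tm0 + d_tm
    return tm0 + d_tm * conc / (kd + conc)

def fit_kd(concs, tms, tm0, d_tm, kd_candidates):
    # brute-force least-squares search over candidate Kd values
    def sse(kd):
        return sum((tm_model(c, tm0, d_tm, kd) - t) ** 2
                   for c, t in zip(concs, tms))
    return min(kd_candidates, key=sse)

# synthetic melting temperatures generated with Kd = 10 (arbitrary units)
concs = [1, 5, 10, 50, 100, 500]
tms = [tm_model(c, 50.0, 5.0, 10.0) for c in concs]
best_kd = fit_kd(concs, tms, 50.0, 5.0, [1, 5, 10, 50, 100])
```

In practice a nonlinear optimizer would fit tm0, d_tm and Kd jointly, and cooperative or multi-site data require the extended models the article describes.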
Biophysics, Issue 91, differential scanning fluorimetry, dissociation constant, protein-ligand interactions, StepOne, cooperativity, WcbI.
Measuring Neural and Behavioral Activity During Ongoing Computerized Social Interactions: An Examination of Event-Related Brain Potentials
Institutions: Illinois Wesleyan University.
Social exclusion is a complex social phenomenon with powerful negative consequences. Given the impact of social exclusion on mental and emotional health, an understanding of how perceptions of social exclusion develop over the course of a social interaction is important for advancing treatments aimed at lessening the harmful costs of being excluded. To date, most scientific examinations of social exclusion have looked at exclusion after a social interaction has been completed. While this has been very helpful in developing an understanding of what happens to a person following exclusion, it has not helped to clarify the moment-to-moment dynamics of the process of social exclusion. Accordingly, the current protocol was developed to obtain an improved understanding of social exclusion by examining the patterns of event-related brain activation that are present during social interactions. This protocol allows greater precision and sensitivity in detailing the social processes that lead people to feel as though they have been excluded from a social interaction. Importantly, the current protocol can be adapted to include research projects that vary the nature of exclusionary social interactions by altering how frequently participants are included, how long the periods of exclusion will last in each interaction, and when exclusion will take place during the social interactions. Further, the current protocol can be used to examine variables and constructs beyond those related to social exclusion. This capability to address a variety of applications across psychology by obtaining both neural and behavioral data during ongoing social interactions suggests the present protocol could be at the core of a developing area of scientific inquiry related to social interactions.
Behavior, Issue 93, Event-related brain potentials (ERPs), Social Exclusion, Neuroscience, N2, P3, Cognitive Control
Internalization and Observation of Fluorescent Biomolecules in Living Microorganisms via Electroporation
Institutions: University of Oxford, Genome Center.
The ability to study biomolecules in vivo is crucial for understanding their function in a biological context. One powerful approach involves fusing molecules of interest to fluorescent proteins such as GFP to study their expression, localization and function. However, GFP and its derivatives are significantly larger and less photostable than the organic fluorophores generally used for in vitro experiments, and this can limit the scope of investigation.
We recently introduced a straightforward, versatile and high-throughput method based on electroporation, allowing the internalization of biomolecules labeled with organic fluorophores into living microorganisms. Here we describe how to use electroporation to internalize labeled DNA fragments or proteins into Escherichia coli and Saccharomyces cerevisiae, how to quantify the number of internalized molecules using fluorescence microscopy, and how to quantify the viability of electroporated cells. Data can be acquired at the single-cell or single-molecule level using fluorescence or FRET. The possibility of internalizing non-labeled molecules that trigger a physiologically observable response in vivo is also presented. Finally, strategies for optimizing the protocol for specific biological systems are discussed.
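A common way to estimate the number of internalized fluorophores from an image is to divide the background-corrected integrated cell intensity by the calibrated brightness of a single fluorophore. This is a generic sketch with hypothetical values, not necessarily the exact calibration used in this protocol:

```python
def molecules_per_cell(cell_intensity, background, single_fluorophore_intensity):
    # background-corrected integrated intensity divided by the
    # measured brightness of one fluorophore under the same imaging settings
    corrected = max(cell_intensity - background, 0.0)
    return corrected / single_fluorophore_intensity

# hypothetical numbers in arbitrary camera units
n = molecules_per_cell(cell_intensity=1250.0, background=250.0,
                       single_fluorophore_intensity=20.0)
```

The single-fluorophore brightness is typically measured from sparse molecules or photobleaching steps under identical acquisition settings, which is why the calibration step matters as much as the cell measurement.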
Microbiology, Issue 96, Electroporation, fluorescence, FRET, in vivo, single-molecule imaging, bacteria, Escherichia coli, yeast, internalization, labeled DNA, labeled proteins
Adapting Human Videofluoroscopic Swallow Study Methods to Detect and Characterize Dysphagia in Murine Disease Models
Institutions: University of Missouri, University of Missouri, University of Missouri.
This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physiology. Elimination of one or more of these components will have a detrimental impact on the study results. Moreover, the energy level capability of the fluoroscopy system will determine which swallow parameters can be investigated. Most research centers have high energy fluoroscopes designed for use with people and larger animals, which results in exceptionally poor image quality when testing mice and other small rodents. Despite this limitation, we have identified seven VFSS parameters that are consistently quantifiable in mice when using a high energy fluoroscope in combination with the new murine VFSS protocol. We recently obtained a low energy fluoroscopy system with exceptionally high imaging resolution and magnification capabilities that was designed for use with mice and other small rodents. Preliminary work using this new system, in combination with the new murine VFSS protocol, has identified 13 swallow parameters that are consistently quantifiable in mice, which is nearly double the number obtained using conventional (i.e., high energy) fluoroscopes. Identification of additional swallow parameters is expected as we optimize the capabilities of this new system. Results thus far demonstrate the utility of using a low energy fluoroscopy system to detect and quantify subtle changes in swallow physiology that may otherwise be overlooked when using high energy fluoroscopes to investigate murine disease models.
Medicine, Issue 97, mouse, murine, rodent, swallowing, deglutition, dysphagia, videofluoroscopy, radiation, iohexol, barium, palatability, taste, translational, disease models
Real-time Electrophysiology: Using Closed-loop Protocols to Probe Neuronal Dynamics and Beyond
Institutions: University of Antwerp.
Experimental neuroscience is witnessing an increased interest in the development and application of novel, often complex, closed-loop protocols, where the stimulus applied depends in real time on the response of the system. Recent applications range from the implementation of virtual reality systems for studying motor responses, both in mice1 and in zebrafish2, to the control of seizures following cortical stroke using optogenetics3. A key advantage of closed-loop techniques resides in the capability of probing higher-dimensional properties that are not directly accessible or that depend on multiple variables, such as neuronal excitability4 and reliability, while at the same time maximizing the experimental throughput. In this contribution, and in the context of cellular electrophysiology, we describe how to apply a variety of closed-loop protocols to the study of the response properties of pyramidal cortical neurons, recorded intracellularly with the patch-clamp technique in acute brain slices from the somatosensory cortex of juvenile rats. As no commercially available or open-source software provides all the features required for efficiently performing the experiments described here, a new software toolbox called LCG5 was developed, whose modular structure maximizes reuse of computer code and facilitates the implementation of novel experimental paradigms. Stimulation waveforms are specified using a compact meta-description, and full experimental protocols are described in text-based configuration files. Additionally, LCG has a command-line interface that is suited to repetition of trials and automation of experimental protocols.
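The core of one closed-loop technique mentioned in the keywords, the dynamic clamp, reads the membrane potential each cycle, computes an Ohmic current command, and writes it back within one sampling period. A bare-bones sketch (the conductance and voltage values are hypothetical, and this is not LCG's actual implementation):

```python
def dynamic_clamp_current(v_m, g_syn, e_rev=0.0):
    # Ohmic "virtual synapse" current injected by the dynamic clamp:
    # I = g_syn * (E_rev - V_m); units nS * mV -> pA
    return g_syn * (e_rev - v_m)

# one hypothetical cycle: neuron at -65 mV, 10 nS excitatory conductance
i_cmd = dynamic_clamp_current(v_m=-65.0, g_syn=10.0, e_rev=0.0)
```

In a real experiment this computation must complete within tens of microseconds on every cycle, which is why real-time computing and active electrode compensation are central to such toolboxes.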
Neuroscience, Issue 100, Electrophysiology, cellular neurobiology, dynamic clamp, Active Electrode Compensation, command-line interface, real-time computing, closed-loop, scripted electrophysiology.
Studying Food Reward and Motivation in Humans
Institutions: University of Cambridge, University of Cambridge, University of Cambridge, Addenbrooke's Hospital.
A key challenge in studying reward processing in humans is to go beyond subjective self-report measures and quantify different aspects of reward, such as hedonics, motivation, and goal value, in more objective ways. This is particularly relevant for the understanding of overeating and obesity as well as their potential treatments. This paper describes a set of measures of food-related motivation using handgrip force as a motivational measure. These methods can be used to examine changes in food-related motivation with metabolic (satiety) and pharmacological manipulations, and to evaluate interventions targeted at overeating and obesity. However, to understand food-related decision making in the complex food environment, it is essential to be able to ascertain the reward goal values that guide the decisions and behavioral choices that people make. These values are hidden, but it is possible to ascertain them more objectively using metrics such as willingness to pay, and a method for this is described. Both sets of methods provide quantitative measures of motivation and goal value that can be compared within and between individuals.
Behavior, Issue 85, Food reward, motivation, grip force, willingness to pay, subliminal motivation
Measuring Oral Fatty Acid Thresholds, Fat Perception, Fatty Food Liking, and Papillae Density in Humans
Institutions: Deakin University.
Emerging evidence from a number of laboratories indicates that humans have the ability to identify fatty acids in the oral cavity, presumably via fatty acid receptors housed on taste cells. Previous research has shown that an individual's oral sensitivity to fatty acid, specifically oleic acid (C18:1), is associated with body mass index (BMI), dietary fat consumption, and the ability to identify fat in foods. We have developed a reliable and reproducible method to assess oral chemoreception of fatty acids, using a milk and C18:1 emulsion, together with an ascending forced choice triangle procedure. In parallel, a food matrix has been developed to assess an individual's ability to perceive fat, in addition to a simple method to assess fatty food liking. As an added measure, tongue photography is used to assess papillae density, with higher density often being associated with increased taste sensitivity.
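In an ascending forced-choice procedure, the detection threshold is typically taken as the lowest concentration at which the participant reliably identifies the odd sample. The scoring sketch below assumes a "three consecutive correct" criterion and hypothetical concentrations; the exact stopping rule in this protocol may differ:

```python
def detection_threshold(trials, run_length=3):
    # trials: ascending list of (concentration, correct) triangle-test outcomes
    # threshold = concentration that starts the first run of
    # `run_length` consecutive correct identifications
    run, run_start = 0, None
    for conc, correct in trials:
        if correct:
            if run == 0:
                run_start = conc
            run += 1
            if run == run_length:
                return run_start
        else:
            run = 0
    return None  # criterion never reached

# hypothetical ascending series of C18:1 concentrations (mM)
trials = [(0.02, False), (0.06, True), (0.2, True), (0.6, True)]
threshold = detection_threshold(trials)
```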
Neuroscience, Issue 88, taste, overweight and obesity, dietary fat, fatty acid, diet, fatty food liking, detection threshold
Engineering Platform and Experimental Protocol for Design and Evaluation of a Neurally-controlled Powered Transfemoral Prosthesis
Institutions: North Carolina State University & University of North Carolina at Chapel Hill, University of North Carolina School of Medicine, Atlantic Prosthetics & Orthotics, LLC.
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated a great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First a platform based on a PC and a visual programming environment were developed to implement the prosthesis control algorithms, including NMI training algorithm, NMI online testing algorithm, and intrinsic control algorithm. To demonstrate the function of this platform, in this study the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller when performing activities, such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally-controlled powered artificial legs.
Biomedical Engineering, Issue 89, neural control, powered transfemoral prosthesis, electromyography (EMG), neural-machine interface, experimental setup and protocol
Combining Behavioral Endocrinology and Experimental Economics: Testosterone and Social Decision Making
Institutions: University of Zurich, Royal Holloway, University of London.
Behavioral endocrinological research in humans as well as in animals suggests that testosterone plays a key role in social interactions. Studies in rodents have shown a direct link between testosterone and aggressive behavior1, and folk wisdom extends these findings to humans, suggesting that testosterone induces antisocial, egoistic or even aggressive behavior2. However, many researchers doubt a direct testosterone-aggression link in humans, arguing instead that testosterone is primarily involved in status-related behavior3,4. As a high status can also be achieved by aggressive and antisocial means, it can be difficult to distinguish between antisocial and status-seeking behavior.
We therefore set up an experimental environment, in which status can only be achieved by prosocial means. In a double-blind and placebo-controlled experiment, we administered a single sublingual dose of 0.5 mg of testosterone (with a hydroxypropyl-β-cyclodextrin carrier) to 121 women and investigated their social interaction behavior in an economic bargaining paradigm. Real monetary incentives are at stake in this paradigm; every player A receives a certain amount of money and has to make an offer to another player B on how to share the money. If B accepts, she gets what was offered and player A keeps the rest. If B refuses the offer, nobody gets anything. A status seeking player A is expected to avoid being rejected by behaving in a prosocial way, i.e. by making higher offers.
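The bargaining paradigm described is the classic ultimatum game, whose payoff rule can be stated in a few lines (the amounts below are hypothetical):

```python
def ultimatum_payoffs(endowment, offer, accepted):
    # proposer (player A) keeps the remainder if the responder (player B)
    # accepts the offer; a rejection leaves both players with nothing
    if accepted:
        return endowment - offer, offer
    return 0, 0

# hypothetical round: A offers 4 out of 10 monetary units, B accepts
a_pay, b_pay = ultimatum_payoffs(endowment=10, offer=4, accepted=True)
```

Because rejection costs the proposer everything, a status-seeking (rejection-averse) player A is predicted to raise their offer, which is the behavioral readout the experiment exploits.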
The results show that when expectations about the hormone are controlled for, testosterone administration leads to a significant increase in fair bargaining offers compared with placebo. The role of expectations is reflected in the fact that subjects who believe they received testosterone make lower offers than those who believe they were treated with a placebo. These findings suggest that the experimental economics approach is sensitive enough to detect neurobiological effects as subtle as those achieved by hormone administration. Moreover, the findings point toward the importance of both psychosocial and neuroendocrine factors in determining the influence of testosterone on human social behavior.
Neuroscience, Issue 49, behavioral endocrinology, testosterone, social status, decision making
A Novel Technique of Rescuing Capsulorhexis Radial Tear-out using a Cystotome
Institutions: Hairmyres Hospital, NHS Lanarkshire, Royal Devon and Exeter NHS Foundation Trust, National Institute of Ophthalmology, South Devon Healthcare NHS Trust.
Purpose: To demonstrate a capsulorhexis radial tear-out rescue technique using a cystotome on a virtual reality cataract surgery simulator and in a human eye. Method: When a capsulorhexis begins to veer radially towards the periphery, beyond the pupillary margin, the following steps should be applied without delay. (1) Stop further capsulorhexis manoeuvres and reassess the situation. (2) Fill the anterior chamber with ophthalmic viscosurgical device (OVD); we recommend mounting the cystotome on a syringe containing OVD so that the anterior chamber can be reinflated rapidly. (3) The capsulorhexis flap is then left unfolded on the lens surface. (4) The cystotome tip is tilted horizontally, to avoid cutting or puncturing the flap, and is engaged on the flap near the leading edge of the tear but not too close to the point of tear. (5) Gently push or pull the leading edge of the tear opposite to the direction of tear. (6) The leading tearing edge will start to make a 'U-turn'; maintain the tension on the flap until the tearing edge returns to the desired trajectory. Results: Using our technique, a surgeon can respond instantly to a radial tear-out without having to change surgical instruments. Changing surgical instruments at this critical stage risks further radial tearing due to sudden shallowing of the anterior chamber as a result of forward pressure from the vitreous. Our technique also has the advantage of reducing corneal wound distortion and subsequent anterior chamber collapse. Discussion: The EYESI Surgical Simulator is a realistic training platform for surgeons to practice complex capsulorhexis tear-out techniques. Capsulorhexis is the most important and complex part of the phacoemulsification and endocapsular intraocular lens implantation procedure. A successful cataract surgery depends on achieving a good capsulorhexis. During capsulorhexis, surgeons may face a challenging situation such as a capsulorhexis radial tear-out.
A surgeon must learn to tackle the problem promptly without making the situation worse. Other methods of rescuing the situation have been described using capsulorhexis forceps. However, we believe our method is quicker, more effective and easier to manipulate, as demonstrated on the EYESi surgical simulator and in a human eye. Acknowledgments: We would like to thank Dr. Wael El Gendy for the video clip. Disclosures: We have nothing to disclose.
Medicine, Issue 47, Phacoemulsification surgery, cataract surgery, capsulorhexis, capsulotomy, technique, Continuous curvilinear capsulorhexis, cystotome, capsulorhexis radial tear, capsulorhexis complication
Assaying Locomotor, Learning, and Memory Deficits in Drosophila Models of Neurodegeneration
Institutions: University of Miami, Miller School of Medicine.
Advances in genetic methods have enabled the study of genes involved in human neurodegenerative diseases using Drosophila as a model system1. Most of these diseases, including Alzheimer's, Parkinson's and Huntington's disease, are characterized by age-dependent deterioration in learning and memory functions and movement coordination2. Here we use behavioral assays, including the negative geotaxis assay3 and the aversive phototaxic suppression assay (APS assay)4,5, to show that some of the behavioral characteristics associated with human neurodegeneration can be recapitulated in flies. In the negative geotaxis assay, the natural tendency of flies to move against gravity when agitated is utilized to study genes or conditions that may hinder locomotor capacities. In the APS assay, learning and memory functions are tested in positively phototactic flies trained to associate light with an aversive bitter taste and hence to suppress their otherwise natural tendency to move toward light. Testing these trained flies 6 hours post-training is used to assess memory functions. Using these assays, the contribution of any genetic or environmental factor toward developing neurodegeneration can be easily studied in flies.
Neuroscience, Issue 49, Geotaxis, phototaxis, behavior, Tau
Hyponeophagia: A Measure of Anxiety in the Mouse
Institutions: University of Oxford.
In the days before fast-acting and potent rodenticides such as alpha-chloralose came into use, the work of pest controllers was often hampered by a phenomenon known as "bait shyness". Mice and rats cannot vomit, owing to the tightness of the cardiac sphincter of the stomach, so to overcome the problem of potential food toxicity they have evolved a strategy of first ingesting only very small amounts of novel substances. The amounts ingested then gradually increase until the animal has determined whether the substance is safe and nutritious. The old rat-catchers would therefore first put a palatable substance such as oatmeal, which was to be the vehicle for the toxin, in the infested area. Only when large amounts were being readily consumed would they add the poison, in amounts calculated not to affect the taste of the vehicle. The poisoned bait, which the animals were now readily eating in large amounts, would then swiftly perform its function.
Bait shyness is now used in the behavioural laboratory as a way of measuring anxiety. A highly palatable but novel substance, such as sweet corn, nuts or sweetened condensed milk, is offered to the mice (or rats) in a novel situation, such as a new cage. The latency to consume a defined amount of the new food is then measured.
Neuroscience, Issue 51, Anxiety, hyponeophagia, bait shyness, mice, hippocampus, strain differences, plus-maze
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
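The staged, rank-ordered workflow described above can be sketched as a simple filtering-and-ranking pipeline. The scoring functions below are hypothetical stand-ins for illustration only, not the actual energy, fold-specificity, or binding-affinity models used by Protein WISDOM:

```python
# Toy illustration of a staged design pipeline (sequence selection ->
# fold specificity -> binding affinity) that emits a rank-ordered list.
# All three scoring functions are placeholder assumptions.

def mock_energy(seq):       # stand-in for the potential-energy score (lower is better)
    return sum(ord(c) for c in seq) % 97

def mock_fold_spec(seq):    # stand-in for a fold-specificity score (higher is better)
    return len(set(seq)) / len(seq)

def mock_affinity(seq):     # stand-in for a binding-affinity score (higher is better)
    return seq.count("W") + seq.count("F")

candidates = ["ACDWFG", "MKWWFL", "GGGGGG", "PLFWYT"]

# Stage 1: sequence selection -- keep the lowest-energy half of the candidates.
stage1 = sorted(candidates, key=mock_energy)[: len(candidates) // 2]

# Stages 2-3: rank the survivors by fold specificity, then binding affinity.
ranked = sorted(stage1, key=lambda s: (-mock_fold_spec(s), -mock_affinity(s)))
print(ranked)  # -> ['PLFWYT', 'GGGGGG']
```

The point of the sketch is only the structure of the process: each stage narrows or reorders the candidate pool, and the final output is a comprehensive, rank-ordered assessment of the surviving designs.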
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. 
Extension of the technique to living cells is also described.
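The core idea behind localization microscopy — that a single emitter's diffraction-limited spot can be localized far more precisely than the spot's own width — can be illustrated with a minimal sketch. The pixel size, PSF width, and centroid estimator below are simplifying assumptions (real FPALM software typically fits a 2D Gaussian to noisy data rather than taking a noiseless centroid):

```python
# Minimal sketch: localize a single molecule's Gaussian spot by its
# intensity-weighted centroid. Pixel size and PSF width are assumed values.
import numpy as np

PIXEL_NM = 100.0   # assumed camera pixel size in sample space (nm)
SIGMA_NM = 125.0   # assumed PSF standard deviation (~250 nm diffraction scale)

def psf_image(x0_nm, y0_nm, size=15):
    """Render a noiseless Gaussian approximation of the PSF on a size x size grid."""
    coords = (np.arange(size) + 0.5) * PIXEL_NM   # pixel-center positions (nm)
    xx, yy = np.meshgrid(coords, coords)
    return np.exp(-((xx - x0_nm) ** 2 + (yy - y0_nm) ** 2) / (2 * SIGMA_NM ** 2))

def localize(img):
    """Estimate the molecule position (nm) as the intensity-weighted centroid."""
    coords = (np.arange(img.shape[0]) + 0.5) * PIXEL_NM
    total = img.sum()
    x = (img.sum(axis=0) * coords).sum() / total   # columns carry the x coordinate
    y = (img.sum(axis=1) * coords).sum() / total   # rows carry the y coordinate
    return x, y

true_x, true_y = 743.0, 688.0                      # arbitrary ground-truth position (nm)
est_x, est_y = localize(psf_image(true_x, true_y))
error = np.hypot(est_x - true_x, est_y - true_y)
print(f"localization error: {error:.2f} nm")       # a tiny fraction of the ~250 nm spot width
```

With photon shot noise and background included, the achievable precision degrades to the ~10-30 nm range quoted above, scaling roughly with the PSF width divided by the square root of the number of detected photons.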
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
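The time-stamped event record at the heart of the analysis code can be sketched in a few lines. The event names, record layout, and subject identifiers below are hypothetical illustrations, not the actual format used by the system's MATLAB-based tools:

```python
# Minimal sketch of analyzing a time-stamped behavioral event trail.
# Record layout and event names are assumptions for illustration.
from collections import Counter

# Each record: (seconds since session start, subject id, event)
events = [
    (12.4, "mouse1", "head_entry_hopper1"),
    (13.1, "mouse1", "pellet_delivered"),
    (45.9, "mouse1", "head_entry_hopper2"),
    (47.0, "mouse2", "head_entry_hopper1"),
    (90.2, "mouse1", "head_entry_hopper1"),
]

def head_entry_counts(records):
    """Tally head entries per (subject, hopper) from the raw event trail."""
    counts = Counter()
    for _, subject, event in records:
        if event.startswith("head_entry"):
            counts[(subject, event)] += 1
    return counts

counts = head_entry_counts(events)
print(counts[("mouse1", "head_entry_hopper1")])  # -> 2
```

Because every analysis is derived from the same raw event trail, summaries like these can be recomputed at each harvest, preserving the full data trail from raw events to published statistics.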
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Methodological Approach to Non-invasive Assessments of Vascular Function and Morphology
Institutions: Bangor University, Russells Hall Hospital, University of Manchester.
The endothelium is the innermost lining of the vasculature and is involved in the maintenance of vascular homeostasis. Damage to the endothelium may predispose the vessel to atherosclerosis and increase the risk for cardiovascular disease. Assessments of peripheral endothelial function are good indicators of early abnormalities in the vascular wall and correlate well with assessments of coronary endothelial function. The present manuscript details the important methodological steps necessary for the assessment of microvascular endothelial function using laser Doppler imaging with iontophoresis, large vessel endothelial function using flow-mediated dilatation, and carotid atherosclerosis using carotid artery ultrasound. A discussion on the methodological considerations for each of the techniques is also presented, and recommendations are made for future research.
Medicine, Issue 96, Endothelium, Cardiovascular, Flow-mediated dilatation, Carotid intima-media thickness, Atherosclerosis, Nitric oxide, Microvasculature, Laser Doppler Imaging