Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). However, studies of the affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience, are needed. This protocol therefore focuses on the DHL and allows examination of CP. Taking the protocol presented in the video as an example, the accompanying article discusses methodological issues and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
24 Related JoVE Articles
A Noninvasive Hair Sampling Technique to Obtain High Quality DNA from Elusive Small Mammals
Institutions: University of British Columbia, Okanagan Campus.
Noninvasive genetic sampling approaches are becoming increasingly important to study wildlife populations. A number of studies have reported using noninvasive sampling techniques to investigate population genetics and demography of wild populations1. This approach has proven to be especially useful when dealing with rare or elusive species2. While a number of these methods have been developed to sample hair, feces and other biological material from carnivores and medium-sized mammals, they have largely remained untested in elusive small mammals. In this video, we present a novel, inexpensive and noninvasive hair snare targeted at an elusive small mammal, the American pika (Ochotona princeps). We describe the general set-up of the hair snare, which consists of strips of packing tape arranged in a web-like fashion and placed along travelling routes in the pikas’ habitat. We illustrate the efficiency of the snare at snagging a large quantity of hair that can then be collected and brought back to the lab. We then demonstrate the use of the DNA IQ system (Promega) to isolate DNA and showcase the utility of this method to amplify commonly used molecular markers including nuclear microsatellites, amplified fragment length polymorphisms (AFLPs), mitochondrial sequences (800 bp), as well as a molecular sexing marker. Overall, we demonstrate the utility of this novel noninvasive hair snare as a sampling technique for wildlife population biologists. We anticipate that this approach will be applicable to a variety of small mammals, opening up areas of investigation within natural populations, while minimizing impact to study organisms.
Genetics, Issue 49, Conservation genetics, noninvasive genetic sampling, Hair snares, Microsatellites, AFLPs, American pika, Ochotona princeps
Measuring the Subjective Value of Risky and Ambiguous Options using Experimental Economics and Functional MRI Methods
Institutions: Yale School of Medicine, New York University.
Most of the choices we make have uncertain consequences. In some cases the probabilities for different possible outcomes are precisely known, a condition termed "risky". In other cases, when probabilities cannot be estimated, the condition is described as "ambiguous". While most people are averse to both risk and ambiguity1,2, the degree of those aversions varies substantially across individuals, such that the subjective value of the same risky or ambiguous option can be very different for different individuals. We combine functional MRI (fMRI) with an experimental economics-based method3 to assess the neural representation of the subjective values of risky and ambiguous options4. This technique can now be used to study these neural representations in different populations, such as different age groups and different patient populations.
In our experiment, subjects make consequential choices between two alternatives while their neural activation is tracked using fMRI. On each trial subjects choose between lotteries that vary in their monetary amount and in either the probability of winning that amount or the ambiguity level associated with winning. Our parametric design allows us to use each individual's choice behavior to estimate their attitudes towards risk and ambiguity, and thus to estimate the subjective values that each option held for them. Another important feature of the design is that the outcome of the chosen lottery is not revealed during the experiment, so that no learning can take place, and thus the ambiguous options remain ambiguous and risk attitudes are stable. Instead, at the end of the scanning session one or a few trials are randomly selected and played for real money. Since subjects do not know beforehand which trials will be selected, they must treat each and every trial as if it and it alone were the one trial on which they will be paid. This design ensures that we can estimate the true subjective value of each option to each subject. We then look for areas in the brain whose activation is correlated with the subjective value of risky options and for areas whose activation is correlated with the subjective value of ambiguous options.
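The estimation step described above can be sketched in a few lines. This is an illustrative example only: the power-utility and softmax choice forms, the grid search, and all function names are assumptions for the sketch, not the authors' actual estimation code.

```python
import math

def subjective_value(amount, prob, alpha):
    # Illustrative power-utility model of a lottery's subjective value:
    # SV = p * x^alpha, with alpha < 1 indicating risk aversion.
    return prob * amount ** alpha

def choice_loglik(trials, alpha, temp=1.0):
    # Log-likelihood of observed choices under a softmax choice rule.
    # Each trial: (lottery amount, win probability, sure amount, chose_lottery).
    ll = 0.0
    for amount, prob, sure, chose_lottery in trials:
        dv = subjective_value(amount, prob, alpha) - subjective_value(sure, 1.0, alpha)
        p_lottery = 1.0 / (1.0 + math.exp(-temp * dv))
        p_lottery = min(max(p_lottery, 1e-12), 1.0 - 1e-12)  # avoid log(0)
        ll += math.log(p_lottery if chose_lottery else 1.0 - p_lottery)
    return ll

def fit_alpha(trials):
    # Grid-search maximum-likelihood estimate of the risk attitude alpha.
    grid = [a / 100.0 for a in range(20, 181)]
    return max(grid, key=lambda a: choice_loglik(trials, a))
```

Under this sketch, a subject who reliably takes the sure amount is fit with a lower alpha (more risk averse) than one who reliably takes the lottery.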
Neuroscience, Issue 67, Medicine, Molecular Biology, fMRI, magnetic resonance imaging, decision-making, value, uncertainty, risk, ambiguity
Heterogeneity Mapping of Protein Expression in Tumors using Quantitative Immunofluorescence
Institutions: University of Edinburgh, HistoRx Inc.
Morphologic heterogeneity within an individual tumor is well-recognized by histopathologists in surgical practice. While this often takes the form of areas of distinct differentiation into recognized histological subtypes, or different pathological grade, often there are more subtle differences in phenotype which defy accurate classification (Figure 1). Ultimately, since morphology is dictated by the underlying molecular phenotype, areas with visible differences are likely to be accompanied by differences in the expression of proteins which orchestrate cellular function and behavior, and therefore, appearance. The significance of visible and invisible (molecular) heterogeneity for prognosis is unknown, but recent evidence suggests that, at least at the genetic level, heterogeneity exists in the primary tumor1,2, and some of these sub-clones give rise to metastatic (and therefore lethal) disease.
Moreover, some proteins are measured as biomarkers because they are the targets of therapy (for instance ER and HER2 for tamoxifen and trastuzumab (Herceptin), respectively). If these proteins show variable expression within a tumor then therapeutic responses may also be variable. The widely used histopathologic scoring schemes for immunohistochemistry either ignore, or numerically homogenize, the quantification of protein expression. Similarly, in destructive techniques, where the tumor samples are homogenized (such as gene expression profiling), quantitative information can be elucidated, but spatial information is lost. Genetic heterogeneity mapping approaches in pancreatic cancer have relied either on generation of a single cell suspension3, or on macrodissection4. A recent study has used quantum dots in order to map morphologic and molecular heterogeneity in prostate cancer tissue5, providing proof of principle that morphology and molecular mapping is feasible, but falling short of quantifying the heterogeneity. Since immunohistochemistry is, at best, only semi-quantitative and subject to intra- and inter-observer bias, more sensitive and quantitative methodologies are required in order to accurately map and quantify tissue heterogeneity in situ.
We have developed and applied an experimental and statistical methodology in order to systematically quantify the heterogeneity of protein expression in whole tissue sections of tumors, based on the Automated QUantitative Analysis (AQUA) system6. Tissue sections are labeled with specific antibodies directed against cytokeratins and targets of interest, coupled to fluorophore-labeled secondary antibodies. Slides are imaged using a whole-slide fluorescence scanner. Images are subdivided into hundreds to thousands of tiles, and each tile is then assigned an AQUA score, which is a measure of protein concentration within the epithelial (tumor) component of the tissue. Heatmaps are generated to represent tissue expression of the proteins and a heterogeneity score is assigned, using a statistical measure of heterogeneity originally used in ecology, based on Simpson's biodiversity index7.
To date there have been no attempts to systematically map and quantify this variability in tandem with protein expression in histological preparations. Here, we illustrate the first use of the method applied to ER and HER2 biomarker expression in ovarian cancer. This method paves the way for analyzing heterogeneity as an independent variable in studies of biomarker expression in translational studies, in order to establish the significance of heterogeneity in prognosis and prediction of responses to therapy.
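As a concrete illustration of the statistical step, Simpson's diversity index over binned per-tile scores can be computed as below. This is a minimal sketch: the bin count, binning scheme, and function names are illustrative assumptions, not the published AQUA heterogeneity score.

```python
from collections import Counter

def simpson_diversity(tile_scores, n_bins=10, lo=None, hi=None):
    # Bin per-tile expression scores, then compute Simpson's diversity,
    # D = 1 - sum(p_i^2), where p_i is the fraction of tiles in bin i.
    # D = 0 for a perfectly homogeneous tissue; D grows as expression
    # spreads evenly across more bins.
    lo = min(tile_scores) if lo is None else lo
    hi = max(tile_scores) if hi is None else hi
    width = (hi - lo) / n_bins or 1.0  # guard against zero range
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in tile_scores)
    n = len(tile_scores)
    return 1.0 - sum((count / n) ** 2 for count in bins.values())
```

For example, a section whose tiles all share one score gives D = 0, while a section split evenly between two expression levels gives D = 0.5 with two bins.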
Medicine, Issue 56, quantitative immunofluorescence, heterogeneity, cancer, biomarker, targeted therapy, immunohistochemistry, proteomics, histopathology
Stereotactic Radiosurgery for Gynecologic Cancer
Institutions: University Hospitals Case Medical Center and Case Western Reserve University School of Medicine.
Stereotactic body radiotherapy (SBRT) distinguishes itself by necessitating more rigid patient immobilization, accounting for respiratory motion, intricate treatment planning, on-board imaging, and reduced number of ablative radiation doses to cancer targets usually refractory to chemotherapy and conventional radiation. Steep SBRT radiation dose drop-off permits narrow 'pencil beam' treatment fields to be used for ablative radiation treatment condensed into 1 to 3 treatments.
Treating physicians must appreciate that SBRT carries a greater risk of normal tissue injury and of geographic tumor miss. Both must be tackled by immobilization of cancer targets and by high-precision treatment delivery. Cancer target immobilization has been achieved through use of indexed customized Styrofoam casts, evacuated bean bags, or body-fix molds with patient-independent abdominal compression.1-3 Intrafraction motion of cancer targets due to breathing can now be reduced by patient-responsive breath hold techniques,4 patient mouthpiece active breathing coordination,5 respiration-correlated computed tomography,6 or image-guided tracking of fiducials implanted within and around a moving tumor.7-9 The Cyberknife system (Accuray [Sunnyvale, CA]) utilizes a radiation linear accelerator mounted on an industrial robotic arm that accurately follows patient respiratory motion via a camera-tracked set of light-emitting diodes (LEDs) impregnated on a vest fitted to the patient.10 Substantial reductions in radiation therapy margins can be achieved by motion tracking, ultimately rendering smaller planning target volumes that are irradiated with submillimeter accuracy.11-13
Cancer targets treated by SBRT are irradiated by converging, tightly collimated beams. The resultant dose-volume histograms for the cancer target have a more pronounced radiation "shoulder", indicating high percentage target coverage, and a small high-dose radiation "tail." Thus, increased target conformality comes at the expense of decreased dose uniformity in the SBRT cancer target. This may have implications both for subsequent tumor control in the SBRT target and for normal tissue tolerance of organs at risk. Due to the sharp dose falloff in SBRT, occult disease may escape the ablative radiation dose when cancer targets are not fully recognized and inadequate SBRT dose margins are applied. Clinical target volume (CTV) expansion by 0.5 cm, resulting in a larger planning target volume (PTV), is associated with increased target control without undue normal tissue injury.7,8 Further reduction in the probability of geographic miss may be achieved by incorporation of 2-[18F]-fluoro-2-deoxy-D-glucose (18F-FDG) positron emission tomography (PET).8 Use of 18F-FDG PET/CT in SBRT treatment planning is only the beginning of attempts to discover new imaging target molecular signatures for gynecologic cancers.
Medicine, Issue 62, radiosurgery, Cyberknife stereotactic radiosurgery, radiation, ovarian cancer, cervix cancer
A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan, University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which estimates how accurate the predictions are in the absence of experimental data. To accommodate special requests from end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively guide the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight, with the purpose of improving the quality of I-TASSER predictions. The server was ranked as the best program for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences of WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls.
DTI data analysis is performed in several complementary ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level, in order to identify differences in FA along WM structures and to define regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by a controlled elimination of gradient directions with high noise levels.
In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
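The FA metric used throughout these analyses has a standard closed form in terms of the three eigenvalues of the diffusion tensor. The sketch below is a generic per-voxel implementation of that formula, not the group's processing pipeline:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    # FA from the diffusion tensor eigenvalues (lambda_1..lambda_3):
    #   FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
    m = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - m) ** 2 + (l2 - m) ** 2 + (l3 - m) ** 2)
    den = math.sqrt(l1 * l1 + l2 * l2 + l3 * l3)
    return math.sqrt(1.5) * num / den if den > 0 else 0.0
```

FA is 0 for isotropic diffusion (all eigenvalues equal) and approaches 1 when diffusion is confined to a single direction, as along a coherent fiber tract.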
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Functional Interrogation of Adult Hypothalamic Neurogenesis with Focal Radiological Inhibition
Institutions: California Institute of Technology, Johns Hopkins University School of Medicine, University of Washington Medical Center.
The functional characterization of adult-born neurons remains a significant challenge. Approaches to inhibit adult neurogenesis via invasive viral delivery or transgenic animals have potential confounds that make interpretation of results from these studies difficult. New radiological tools are emerging, however, that allow one to noninvasively investigate the function of select groups of adult-born neurons through accurate and precise anatomical targeting in small animals. Focal ionizing radiation inhibits the birth and differentiation of new neurons, and allows targeting of specific neural progenitor regions. In order to illuminate the potential functional role that adult hypothalamic neurogenesis plays in the regulation of physiological processes, we developed a noninvasive focal irradiation technique to selectively inhibit the birth of adult-born neurons in the hypothalamic median eminence. We describe a method for computed tomography-guided focal irradiation (CFIR) delivery to enable precise and accurate anatomical targeting in small animals. CFIR uses three-dimensional volumetric image guidance for localization and targeting of the radiation dose, minimizes radiation exposure to nontargeted brain regions, and allows for conformal dose distribution with sharp beam boundaries. This protocol allows one to ask questions regarding the function of adult-born neurons, and also opens up questions in the areas of radiobiology, tumor biology, and immunology. These radiological tools will facilitate the translation of discoveries at the bench to the bedside.
Neuroscience, Issue 81, Neural Stem Cells (NSCs), Body Weight, Radiotherapy, Image-Guided, Metabolism, Energy Metabolism, Neurogenesis, Cell Proliferation, Neurosciences, Irradiation, Radiological treatment, Computer-tomography (CT) imaging, Hypothalamus, Hypothalamic Proliferative Zone (HPZ), Median Eminence (ME), Small Animal Radiation Research Platform (SARRP)
Multimodal Optical Microscopy Methods Reveal Polyp Tissue Morphology and Structure in Caribbean Reef Building Corals
Institutions: University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign, University of Illinois at Urbana-Champaign.
An integrated suite of imaging techniques has been applied to determine the three-dimensional (3D) morphology and cellular structure of polyp tissues comprising the Caribbean reef building corals Montastraea annularis and M. faveolata. These approaches include fluorescence microscopy (FM), serial block face imaging (SBFI), and two-photon confocal laser scanning microscopy (TPLSM). SBFI provides deep tissue imaging after physical sectioning; it details the tissue surface texture and 3D visualization to tissue depths of more than 2 mm. Complementary FM and TPLSM yield ultra-high resolution images of tissue cellular structure. Results have: (1) identified previously unreported lobate tissue morphologies on the outer wall of individual coral polyps and (2) created the first surface maps of the 3D distribution and tissue density of chromatophores and algae-like dinoflagellate zooxanthellae endosymbionts. Spectral absorption peaks of 500 nm and 675 nm, respectively, suggest that M. annularis and M. faveolata contain similar types of chlorophyll and chromatophores. However, M. annularis and M. faveolata exhibit significant differences in the tissue density and 3D distribution of these key cellular components. This study focusing on imaging methods indicates that SBFI is extremely useful for analysis of large mm-scale samples of decalcified coral tissues. Complementary FM and TPLSM reveal subtle submillimeter scale changes in cellular distribution and density in nondecalcified coral tissue samples. The TPLSM technique affords: (1) minimally invasive sample preparation, (2) superior optical sectioning ability, and (3) minimal light absorption and scattering, while still permitting deep tissue imaging.
Environmental Sciences, Issue 91, Serial block face imaging, two-photon fluorescence microscopy, Montastraea annularis, Montastraea faveolata, 3D coral tissue morphology and structure, zooxanthellae, chromatophore, autofluorescence, light harvesting optimization, environmental change
Flat-floored Air-lifted Platform: A New Method for Combining Behavior with Microscopy or Electrophysiology on Awake Freely Moving Rodents
Institutions: University of Helsinki, Neurotar LTD, University of Eastern Finland, University of Helsinki.
It is widely acknowledged that the use of general anesthetics can undermine the relevance of electrophysiological or microscopical data obtained from a living animal’s brain. Moreover, the lengthy recovery from anesthesia limits the frequency of repeated recording/imaging episodes in longitudinal studies. Hence, new methods that would allow stable recordings from non-anesthetized behaving mice are expected to advance the fields of cellular and cognitive neurosciences. Existing solutions range from mere physical restraint to more sophisticated approaches, such as linear and spherical treadmills used in combination with computer-generated virtual reality. Here, a novel method is described where a head-fixed mouse can move around an air-lifted mobile homecage and explore its environment under stress-free conditions. This method allows researchers to perform behavioral tests (e.g., learning, habituation or novel object recognition) simultaneously with two-photon microscopic imaging and/or patch-clamp recordings, all combined in a single experiment. This video article describes the use of the awake animal head fixation device (mobile homecage), demonstrates the procedures of animal habituation, and exemplifies a number of possible applications of the method.
Issue 88, awake, in vivo two-photon microscopy, blood vessels, dendrites, dendritic spines, Ca2+ imaging, intrinsic optical imaging, patch-clamp
Comprehensive Analysis of Transcription Dynamics from Brain Samples Following Behavioral Experience
Institutions: The Hebrew University of Jerusalem.
The encoding of experiences in the brain and the consolidation of long-term memories depend on gene transcription. Identifying the function of specific genes in encoding experience is one of the main objectives of molecular neuroscience. Furthermore, the functional association of defined genes with specific behaviors has implications for understanding the basis of neuropsychiatric disorders. Induction of robust transcription programs has been observed in the brains of mice following various behavioral manipulations. While some genetic elements are utilized recurrently following different behavioral manipulations and in different brain nuclei, transcriptional programs are overall unique to the inducing stimuli and the structure in which they are studied1,2.
In this publication, a protocol is described for robust and comprehensive transcriptional profiling from brain nuclei of mice in response to behavioral manipulation. The protocol is demonstrated in the context of analysis of gene expression dynamics in the nucleus accumbens following acute cocaine experience. Subsequent to a defined in vivo experience, the target neural tissue is dissected, followed by RNA purification, reverse transcription and utilization of microfluidic arrays for comprehensive qPCR analysis of multiple target genes. This protocol is geared towards comprehensive analysis (addressing 50-500 genes) of limiting quantities of starting material, such as small brain samples or even single cells.
The protocol is most advantageous for parallel analysis of multiple samples (e.g., single cells, dynamic analysis following pharmaceutical, viral or behavioral perturbations). However, the protocol could also serve for the characterization and quality assurance of samples prior to whole-genome studies by microarrays or RNAseq, as well as validation of data obtained from whole-genome studies.
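For readers unfamiliar with relative quantification, a common way to turn the resulting Ct values into fold changes is the 2^-ddCt (Livak) method, sketched below. This is a generic illustration that assumes roughly 100% amplification efficiency; it is not necessarily the exact analysis used in this protocol.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    # 2^-ddCt: expression of a target gene normalized to a reference
    # (housekeeping) gene, expressed relative to the control condition.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** -dd_ct
```

For example, a target gene whose Ct (relative to the reference gene) is two cycles lower in treated tissue than in control corresponds to a four-fold induction.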
Behavior, Issue 90, Brain, behavior, RNA, transcription, nucleus accumbens, cocaine, high-throughput qPCR, experience-dependent plasticity, gene regulatory networks, microdissection
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis.
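The source reconstruction step can be illustrated with the classic regularized minimum-norm estimate, which maps sensor data back to cortical generators through the head model's leadfield matrix. The sketch below is a toy implementation; the regularization scheme and variable names are simplified assumptions, not the London Baby Lab pipeline.

```python
import numpy as np

def minimum_norm_estimate(leadfield, data, lam=0.1):
    # Regularized minimum-norm inverse:
    #   s_hat = L^T (L L^T + lam * I)^-1 x
    # leadfield L: (n_channels, n_sources), from the head model;
    # data x: (n_channels,) or (n_channels, n_times) sensor measurements.
    L = np.asarray(leadfield, dtype=float)
    x = np.asarray(data, dtype=float)
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, x)
```

Because the head model enters only through the leadfield, swapping an adult template for an age-specific or individual-MRI model changes L while the inverse computation stays the same.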
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB on nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise participants would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could be anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicate the previous finding.
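The trial structure described above can be sketched as follows. This is an illustrative reconstruction: the word pool, function names, and probe-selection details are assumptions for the sketch, not the published experiment code.

```python
import random

def make_rsvp_trial(noun_pool, target, target_pos, soa_ms=120, seed=None):
    # Build one 8-item RSVP stream with the target at target_pos
    # (use 1..6 so both neighbors exist), plus item onset times at the
    # given SOA and the three memory-probe items: the nouns immediately
    # before and after the target, and one noun from elsewhere in the stream.
    rng = random.Random(seed)
    stream = rng.sample(noun_pool, 7)
    stream.insert(target_pos, target)
    onsets_ms = [i * soa_ms for i in range(len(stream))]
    distant_pos = rng.choice([i for i in range(len(stream))
                              if abs(i - target_pos) > 1])
    probes = {"before": stream[target_pos - 1],
              "after": stream[target_pos + 1],
              "distant": stream[distant_pos]}
    return stream, onsets_ms, probes
```

Running the same builder with soa_ms=120 or soa_ms=240 reproduces the two SOA conditions of the second exemplar experiment.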
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Unraveling the Unseen Players in the Ocean - A Field Guide to Water Chemistry and Marine Microbiology
Institutions: San Diego State University, University of California San Diego.
Here we introduce a series of thoroughly tested and well standardized research protocols adapted for use in remote marine environments. The sampling protocols include the assessment of resources available to the microbial community (dissolved organic carbon, particulate organic matter, inorganic nutrients), and a comprehensive description of the viral and bacterial communities (via direct viral and microbial counts, enumeration of autofluorescent microbes, and construction of viral and microbial metagenomes). We use a combination of methods from a dispersed field of scientific disciplines, comprising both well-established protocols and some of the most recently developed techniques. Metagenomic sequencing techniques for viral and bacterial community characterization, in particular, have been established only in recent years and are thus still subject to constant improvement. This has led to a variety of sampling and sample processing procedures currently in use. The set of methods presented here provides an up-to-date approach to collecting and processing environmental samples. The parameters addressed with these protocols yield the minimum information essential to characterize and understand the underlying mechanisms of viral and microbial community dynamics. The article gives easy-to-follow guidelines to conduct comprehensive surveys and discusses critical steps and potential caveats pertinent to each technique.
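As an example of how the direct-count data feed into an abundance estimate, counts per microscope field are commonly scaled to the full filter area and divided by the volume of sample filtered. The sketch below is a generic version of that calculation; the parameter names are illustrative, not from the protocol itself.

```python
def abundance_per_ml(mean_count_per_field, field_area_um2,
                     filter_area_mm2, volume_filtered_ml):
    # Scale the mean epifluorescence count per microscope field up to
    # the whole effective filter area, then normalize by the volume of
    # sample filtered, giving particles (viruses or cells) per mL.
    filter_area_um2 = filter_area_mm2 * 1.0e6  # mm^2 -> um^2
    fields_per_filter = filter_area_um2 / field_area_um2
    return mean_count_per_field * fields_per_filter / volume_filtered_ml
```

For instance, a mean of 50 particles per 10,000 um^2 field on a 200 mm^2 effective filter area, from 1 mL of sample, corresponds to 10^6 particles per mL.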
Environmental Sciences, Issue 93, dissolved organic carbon, particulate organic matter, nutrients, DAPI, SYBR, microbial metagenomics, viral metagenomics, marine environment
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin-embedded stained electron tomography, and focused ion beam- and serial block face-scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. For a given combination of data set characteristics, one of these four approaches typically outperforms the others, although, depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
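As a minimal illustration of approach (4), the sketch below applies the simplest automated method, global intensity thresholding, to a synthetic volume (plain NumPy; the feature size and intensity values are invented for illustration):

```python
import numpy as np

def threshold_segment(volume, threshold):
    """Binary segmentation of a 3D volume by global intensity thresholding.

    Voxels brighter than `threshold` are labeled foreground (True). This is
    the simplest automated approach and works best on high-contrast,
    heavily stained data sets.
    """
    return volume > threshold

# Synthetic 3D "volume": a bright 4x4x4 cube inside a dark, noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(50, 5, size=(16, 16, 16))  # background noise
vol[6:10, 6:10, 6:10] += 100                # bright feature of interest

mask = threshold_segment(vol, threshold=100)
print(mask.sum())  # number of foreground voxels (the 4x4x4 cube: 64)
```

Real data sets rarely separate this cleanly, which is exactly why the triage scheme above weighs data set characteristics before committing to an approach.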
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Simultaneous Multicolor Imaging of Biological Structures with Fluorescence Photoactivation Localization Microscopy
Institutions: University of Maine.
Localization-based super resolution microscopy can be applied to obtain a spatial map (image) of the distribution of individual fluorescently labeled single molecules within a sample with a spatial resolution of tens of nanometers. Using either photoactivatable (PAFP) or photoswitchable (PSFP) fluorescent proteins fused to proteins of interest, or organic dyes conjugated to antibodies or other molecules of interest, fluorescence photoactivation localization microscopy (FPALM) can simultaneously image multiple species of molecules within single cells. By using the following approach, populations of large numbers (thousands to hundreds of thousands) of individual molecules are imaged in single cells and localized with a precision of ~10-30 nm. Data obtained can be applied to understanding the nanoscale spatial distributions of multiple protein types within a cell. One primary advantage of this technique is the dramatic increase in spatial resolution: while diffraction limits resolution to ~200-250 nm in conventional light microscopy, FPALM can image length scales more than an order of magnitude smaller. As many biological hypotheses concern the spatial relationships among different biomolecules, the improved resolution of FPALM can provide insight into questions of cellular organization which have previously been inaccessible to conventional fluorescence microscopy. In addition to detailing the methods for sample preparation and data acquisition, we here describe the optical setup for FPALM. One additional consideration for researchers wishing to do super-resolution microscopy is cost: in-house setups are significantly cheaper than most commercially available imaging machines. Limitations of this technique include the need for optimizing the labeling of molecules of interest within cell samples, and the need for post-processing software to visualize results. We here describe the use of PAFP and PSFP expression to image two protein species in fixed cells. Extension of the technique to living cells is also described.
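The localization step at the core of FPALM is usually a 2D Gaussian fit to each molecule's diffraction-limited spot; the sketch below substitutes the simpler intensity-weighted centroid on a simulated spot to illustrate the principle (all values invented):

```python
import numpy as np

def localize_centroid(spot):
    """Estimate a single molecule's position as the intensity-weighted
    centroid of its diffraction-limited spot (crude background subtraction).

    A Gaussian fit is the standard estimator; the centroid is a fast
    approximation adequate for bright, well-isolated molecules.
    """
    spot = spot - spot.min()  # crude background subtraction
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# Simulated camera image of one molecule: a 2D Gaussian centered at (5.0, 7.0).
ys, xs = np.indices((11, 15))
spot = np.exp(-((ys - 5.0) ** 2 + (xs - 7.0) ** 2) / (2 * 1.5 ** 2))

y, x = localize_centroid(spot)
print(round(y, 2), round(x, 2))  # close to the true center (5.0, 7.0)
```

The localization precision quoted above (~10-30 nm) comes from repeating this estimate over thousands of photon-limited spots, where photon shot noise, pixelation, and background all degrade the fit.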
Basic Protocol, Issue 82, Microscopy, Super-resolution imaging, Multicolor, single molecule, FPALM, Localization microscopy, fluorescent proteins
High Throughput Quantitative Expression Screening and Purification Applied to Recombinant Disulfide-rich Venom Proteins Produced in E. coli
Institutions: Aix-Marseille Université, Commissariat à l'énergie atomique et aux énergies alternatives (CEA) Saclay, France.
Escherichia coli (E. coli) is the most widely used expression system for the production of recombinant proteins for structural and functional studies. However, purifying proteins is sometimes challenging since many proteins are expressed in an insoluble form. When working with difficult or multiple targets it is therefore recommended to use high throughput (HTP) protein expression screening on a small scale (1-4 ml cultures) to quickly identify conditions for soluble expression. To cope with the various structural genomics programs of the lab, a quantitative (within a range of 0.1-100 mg/L culture of recombinant protein) and HTP protein expression screening protocol was implemented and validated on thousands of proteins. The protocols were automated with the use of a liquid handling robot but can also be performed manually without specialized equipment.
Disulfide-rich venom proteins are gaining increasing recognition for their potential as therapeutic drug leads. They can be highly potent and selective, but their complex disulfide bond networks make them challenging to produce. As a member of the FP7 European Venomics project (www.venomics.eu), our challenge is to develop successful production strategies with the aim of producing thousands of novel venom proteins for functional characterization. Aided by the redox properties of disulfide bond isomerase DsbC, we adapted our HTP production pipeline for the expression of oxidized, functional venom peptides in the E. coli cytoplasm. The protocols are also applicable to the production of diverse disulfide-rich proteins. Here we demonstrate our pipeline applied to the production of animal venom proteins. With the protocols described herein it is likely that soluble disulfide-rich proteins will be obtained in as little as a week. Even from a small scale, there is the potential to use the purified proteins for validating the oxidation state by mass spectrometry, for characterization in pilot studies, or for sensitive micro-assays.
Bioengineering, Issue 89, E. coli, expression, recombinant, high throughput (HTP), purification, auto-induction, immobilized metal affinity chromatography (IMAC), tobacco etch virus protease (TEV) cleavage, disulfide bond isomerase C (DsbC) fusion, disulfide bonds, animal venom proteins/peptides
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
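A full-factorial layout is the usual starting point before DoE software prunes it to an optimal fraction; a minimal sketch with invented factor names and levels (not those used in the study):

```python
import itertools

# Hypothetical two-level factors for a transient-expression experiment
# (factor names and levels are invented for illustration).
factors = {
    "promoter":        ["35S", "nos"],
    "incubation_temp": [22, 25],   # degrees C
    "plant_age_days":  [35, 42],
}

# Full-factorial design: every combination of factor levels, one run each.
design = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
for run in design:
    print(run)
print(len(design))  # 2^3 = 8 runs; DoE software would prune this to a fraction
```

Splitting the problem into modules, as described above, keeps the number of factors per design small so that the run count stays tractable.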
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Institutions: University of Calgary , University of Calgary .
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion.
Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
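The orientation analysis in the first step can be illustrated with a small Gabor filter bank; the sketch below (plain NumPy, a synthetic stripe pattern, and invented kernel parameters rather than those of the study) recovers the dominant orientation of a patch:

```python
import numpy as np

def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=21):
    """Complex Gabor kernel tuned to linear structures oriented at `theta`
    (radians). The carrier wave runs across the stripes, so the response
    magnitude is largest for patterns aligned with `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    v = -x * np.sin(theta) + y * np.cos(theta)   # direction across the stripes
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * v / wavelength)

def dominant_orientation(patch, n_angles=8):
    """Angle (degrees) of the Gabor filter with the strongest response."""
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    responses = [abs((patch * gabor_kernel(a)).sum()) for a in angles]
    return float(np.degrees(angles[int(np.argmax(responses))]))

# Synthetic patch of vertical stripes, i.e. structures oriented at 90 degrees.
_, xs = np.mgrid[0:21, 0:21]
patch = np.cos(2 * np.pi * xs / 8.0)
print(dominant_orientation(patch))  # ~90.0
```

Mapping this dominant orientation over the whole mammogram yields the orientation field that the phase-portrait analysis then searches for node-like radiating patterns.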
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Aseptic Laboratory Techniques: Plating Methods
Institutions: University of California, Los Angeles .
Microorganisms are present on all inanimate surfaces creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
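The arithmetic behind the pour- and spread-plating objective above, converting a colony count into a concentration, can be sketched as:

```python
def cfu_per_ml(colony_count, dilution_factor, volume_plated_ml):
    """Concentration of viable bacteria in the original culture.

    colony_count:     colonies counted on the plate (ideally 30-300)
    dilution_factor:  total dilution of the sample plated, e.g. 1e-6
    volume_plated_ml: volume spread or poured, in mL
    """
    return colony_count / (dilution_factor * volume_plated_ml)

# 150 colonies on a plate spread with 0.1 mL of a 10^-6 dilution:
print(cfu_per_ml(150, 1e-6, 0.1))  # ~1.5e9 CFU/mL in the original culture
```

Counts outside the countable range (roughly 30-300 colonies) should be discarded and a different dilution used instead.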
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity.
To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Using SCOPE to Identify Potential Regulatory Motifs in Coregulated Genes
Institutions: Dartmouth College.
SCOPE is an ensemble motif finder that uses three component algorithms in parallel to identify potential regulatory motifs by over-representation and motif position preference1. Each component algorithm is optimized to find a different kind of motif. By taking the best of these three approaches, SCOPE performs better than any single algorithm, even in the presence of noisy data1. In this article, we utilize a web version of SCOPE2 to examine genes that are involved in telomere maintenance. SCOPE has been incorporated into at least two other motif finding programs3,4 and has been used in other studies5-8.
The three algorithms that comprise SCOPE are BEAM9, which finds non-degenerate motifs (ACCGGT), PRISM10, which finds degenerate motifs (ASCGWT), and SPACER11, which finds longer bipartite motifs (ACCnnnnnnnnGGT). These three algorithms have been optimized to find their corresponding type of motif. Together, they allow SCOPE to perform extremely well.
Once a gene set has been analyzed and candidate motifs identified, SCOPE can look for other genes that contain the motif which, when added to the original set, will improve the motif score. This can occur through over-representation or motif position preference. Working with partial gene sets that have biologically verified transcription factor binding sites, SCOPE was able to identify most of the rest of the genes also regulated by the given transcription factor.
Output from SCOPE shows candidate motifs, their significance, and other information both as a table and as a graphical motif map. FAQs and video tutorials are available at the SCOPE web site which also includes a "Sample Search" button that allows the user to perform a trial run.
SCOPE has a very friendly user interface that enables novice users to access the algorithm's full power without having to become an expert in the bioinformatics of motif finding. As input, SCOPE can take a list of genes or FASTA sequences. These can be entered in browser text fields, or read from a file. The output from SCOPE contains a list of all identified motifs with their scores, number of occurrences, fraction of genes containing the motif, and the algorithm used to identify the motif. For each motif, result details include a consensus representation of the motif, a sequence logo, a position weight matrix, and a list of instances for every motif occurrence (with exact positions and "strand" indicated). Results are returned in a browser window and also optionally by email. Previous papers describe the SCOPE algorithms in detail1,2,9-11.
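The over-representation criterion that SCOPE relies on can be illustrated with a toy binomial calculation: score a non-degenerate motif by the probability of seeing at least its observed occurrence count under a uniform background. This is a simplified stand-in, not SCOPE's actual scoring function, and the sequences are invented:

```python
import math

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Toy promoter set with the motif ACCGGT planted in three of four sequences.
seqs = ["TTACCGGTTT", "GGACCGGTAA", "CCCCACCGGT", "TTTTTTTTTT"]
motif = "ACCGGT"

n_sites = sum(len(s) - len(motif) + 1 for s in seqs)        # candidate positions
observed = sum(s.count(motif) for s in seqs)                # actual occurrences
p_site = 0.25 ** len(motif)                                 # uniform background
score = -math.log10(binom_tail(n_sites, observed, p_site))  # over-representation
print(observed, round(score, 2))
```

A high score means the motif occurs far more often than chance predicts; SCOPE combines this signal with motif position preference, which this sketch ignores.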
Genetics, Issue 51, gene regulation, computational biology, algorithm, promoter sequence motif
Spatial Multiobjective Optimization of Agricultural Conservation Practices using a SWAT Model and an Evolutionary Algorithm
Institutions: University of Washington, Iowa State University, North Carolina A&T University, Iowa Geological and Water Survey.
Finding the cost-efficient (i.e., lowest-cost) ways of targeting conservation practice investments for the achievement of specific water quality goals across the landscape is of primary importance in watershed management. Traditional economics methods of finding the lowest-cost solution in the watershed context (e.g.) assume that off-site impacts can be accurately described as a proportion of on-site pollution generated. Such approaches are unlikely to be representative of the actual pollution process in a watershed, where the impacts of polluting sources are often determined by complex biophysical processes. The use of modern physically-based, spatially distributed hydrologic simulation models allows for a greater degree of realism in terms of process representation but requires the development of a simulation-optimization framework where the model becomes an integral part of the optimization.
Evolutionary algorithms appear to be a particularly useful optimization tool, able to deal with the combinatorial nature of a watershed simulation-optimization problem and allowing the use of the full water quality model. Evolutionary algorithms treat a particular spatial allocation of conservation practices in a watershed as a candidate solution and iteratively apply stochastic operators of selection, recombination, and mutation to populations of candidate solutions to find improvements with respect to the optimization objectives. The optimization objectives in this case are to minimize nonpoint-source pollution in the watershed while simultaneously minimizing the cost of conservation practices. A recent and expanding body of research attempts to use similar methods, integrating water quality models with broadly defined evolutionary optimization methods3,4,9,10,13-15,17-19,22,23,25. In this application, we demonstrate a program which follows Rabotyagov et al.'s approach and integrates a modern and commonly used SWAT water quality model7 with the multiobjective evolutionary algorithm SPEA226 and a user-specified set of conservation practices and their costs to search for the complete tradeoff frontiers between the costs of conservation practices and user-specified water quality objectives. The frontiers quantify the tradeoffs faced by watershed managers by presenting the full range of costs associated with various water quality improvement goals. The program allows for the selection of watershed configurations achieving specified water quality improvement goals and the production of maps of optimized placement of conservation practices.
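The evolutionary search described above can be sketched in miniature. The toy below uses invented per-field costs and load reductions and a simple mutation-plus-Pareto-archive loop rather than SPEA2 proper, standing in for the full SWAT simulation with a linear load model:

```python
import random

random.seed(1)

# Toy watershed: 8 fields; installing a practice on field i costs COST[i]
# and removes REDUCTION[i] units of pollutant (all numbers invented).
COST = [3, 5, 2, 8, 4, 6, 1, 7]
REDUCTION = [4, 6, 3, 9, 2, 8, 1, 5]
BASELINE = 40  # pollutant load with no practices installed

def objectives(plan):
    """(cost, pollutant load) for a 0/1 spatial allocation; both minimized."""
    cost = sum(c for c, on in zip(COST, plan) if on)
    load = BASELINE - sum(r for r, on in zip(REDUCTION, plan) if on)
    return cost, load

def dominates(a, b):
    """True if objective vector a is at least as good as b and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def pareto_front(solutions):
    front = []
    for s in solutions:
        if not any(dominates(objectives(t), objectives(s)) for t in solutions):
            if s not in front:
                front.append(s)
    return front

def evolve(generations=200, pop_size=30):
    pop = [[random.random() < 0.5 for _ in COST] for _ in range(pop_size)]
    for _ in range(generations):
        parent = random.choice(pop)
        child = [bit ^ (random.random() < 0.1) for bit in parent]  # mutation
        pop.append(child)
        # Keep the nondominated archive plus a random sample for diversity.
        pop = pareto_front(pop) + random.sample(pop, min(len(pop), pop_size // 2))
    return pareto_front(pop)

front = evolve()
for plan in sorted(front, key=lambda p: objectives(p)[0]):
    print(objectives(plan))  # the approximated cost/load tradeoff frontier
```

Each printed pair is one point on the tradeoff frontier; in the real program, evaluating `objectives` means running the SWAT model for that allocation, which is why the evolutionary wrapper must be economical with evaluations.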
Environmental Sciences, Issue 70, Plant Biology, Civil Engineering, Forest Sciences, Water quality, multiobjective optimization, evolutionary algorithms, cost efficiency, agriculture, development
Facilitating the Analysis of Immunological Data with Visual Analytic Techniques
Institutions: University of British Columbia, University of British Columbia, University of British Columbia.
Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and the flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analytical power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.
Immunology, Issue 47, Visual analytics, flow cytometry, Luminex, Tableau, cytokine, innate immunity, single nucleotide polymorphism