JoVE Visualize
 
Pubmed Article
Area under precision-recall curves for weighted and unweighted data.
PLoS ONE
PUBLISHED: 01-01-2014
Precision-recall curves are highly informative about the performance of binary classifiers, and the area under these curves is a popular scalar performance measure for comparing different classifiers. However, for many applications class labels are not provided with absolute certainty, but with some degree of confidence, often reflected by weights or soft labels assigned to data points. Computing the area under the precision-recall curve requires interpolating between adjacent supporting points, but previous interpolation schemes are not directly applicable to weighted data. Hence, even in cases where weights were available, they had to be neglected when assessing classifiers using precision-recall curves. Here, we propose an interpolation for precision-recall curves that can also be used for weighted data, and we derive conditions for classification scores yielding the maximum and minimum area under the precision-recall curve. We investigate agreements and differences between the proposed interpolation and previous ones, and we demonstrate that taking into account existing weights of test data is important for the comparison of classifiers.
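A minimal sketch of the weighting idea (not the interpolation proposed in the paper): each example contributes its soft label as weight to the true-positive count and the remainder to the false-positive count, and the area is approximated crudely with the trapezoidal rule. All names and numbers below are illustrative.

```python
# Precision-recall points for weighted (soft-labeled) data; the AUC here is a
# crude trapezoidal approximation, not the interpolation proposed in the paper.
import numpy as np

def weighted_pr_curve(scores, pos_weights):
    order = np.argsort(-np.asarray(scores))          # descending by classifier score
    w = np.asarray(pos_weights, dtype=float)[order]
    tp = np.cumsum(w)                                # weighted true positives
    fp = np.cumsum(1.0 - w)                          # weighted false positives
    precision = tp / (tp + fp)
    recall = tp / w.sum()
    return recall, precision

scores = [0.9, 0.8, 0.7, 0.4, 0.2]
pos_weights = [1.0, 0.8, 0.3, 0.9, 0.1]              # soft labels / confidences
r, p = weighted_pr_curve(scores, pos_weights)
auc_pr = np.trapz(p, r)                              # crude area estimate
print(auc_pr)
```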
Authors: Alessandra Sottini, Federico Serana, Diego Bertoli, Marco Chiarini, Monica Valotti, Marion Vaglio Tessitore, Luisa Imberti.
Published: 12-06-2014
ABSTRACT
T-cell receptor excision circles (TRECs) and K-deleting recombination excision circles (KRECs) are circularized DNA elements formed during the recombination processes that create T- and B-cell receptors. Because TRECs and KRECs are unable to replicate, they are diluted with each cell division and persist only in the cell in which they were formed. Their quantity in peripheral blood can therefore be considered an estimate of thymic and bone marrow output. By combining the well-established and commonly used TREC assay with a modified version of the KREC assay, we have developed a duplex quantitative real-time PCR that allows quantification of both newly produced T and B lymphocytes in a single assay. The numbers of TRECs and KRECs are obtained using a standard curve prepared by serially diluting TREC and KREC signal joints cloned in a bacterial plasmid, together with a fragment of the T-cell receptor alpha constant gene that serves as the reference gene. Results are reported as the number of TRECs and KRECs per 10^6 cells or per ml of blood. The quantification of these DNA fragments has proven useful for monitoring immune reconstitution following bone marrow transplantation in both children and adults, for improved characterization of immune deficiencies, and for a better understanding of the activity of certain immunomodulating drugs.
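The standard-curve arithmetic behind such an assay can be sketched as follows; the Ct values, dilution series, and the assumption of two reference-gene copies per cell are illustrative, not data from the study.

```python
# Sketch of standard-curve quantification for a duplex TREC/KREC qPCR.
# Assumed layout: Ct values for serial plasmid dilutions of known copy number,
# plus sample Ct values for TREC (or KREC) and a T-cell receptor alpha constant
# (reference) gene present at 2 genomic copies per cell. Numbers are illustrative.
import numpy as np

std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])     # copies per reaction
std_ct     = np.array([17.1, 20.5, 23.9, 27.3, 30.8])

# Linear fit of Ct vs log10(copies): Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def ct_to_copies(ct):
    return 10 ** ((ct - intercept) / slope)

trec_ct, ref_ct = 28.4, 22.0
trec_copies = ct_to_copies(trec_ct)
cells = ct_to_copies(ref_ct) / 2.0                    # 2 reference copies per cell
trecs_per_million_cells = trec_copies / cells * 1e6
print(round(trecs_per_million_cells))
```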
21 Related JoVE Articles!
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use, we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization-based sequence selection stage that aims to improve stability by minimizing potential energy over the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Authors: Jason Bell, Edwin Dickinson, David R. Badcock, Frederick A. A. Kingdom.
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition are compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
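A small sketch of the projection geometry mentioned above, assuming an arbitrary viewing distance: the same rotated-in-depth point is projected with either a perspective or an orthographic projection.

```python
# Rotate a 3D point about the vertical axis (a rotation-in-depth) and project
# it with a perspective or an orthographic projection. Values are illustrative.
import numpy as np

def rotate_y(p, angle_deg):
    a = np.radians(angle_deg)
    r = np.array([[ np.cos(a), 0, np.sin(a)],
                  [ 0,         1, 0        ],
                  [-np.sin(a), 0, np.cos(a)]])
    return r @ p

def project_perspective(p, viewing_distance=57.0):
    x, y, z = p                       # z: depth away from the observer
    s = viewing_distance / (viewing_distance + z)
    return np.array([x * s, y * s])   # nearer parts are magnified

def project_orthographic(p):
    return p[:2]                      # depth is simply discarded

point = np.array([2.0, 1.0, 0.0])
rotated = rotate_y(point, 30)         # 30 degree rotation-in-depth
print(project_perspective(rotated), project_orthographic(rotated))
```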
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
A Proboscis Extension Response Protocol for Investigating Behavioral Plasticity in Insects: Application to Basic, Biomedical, and Agricultural Research
Authors: Brian H. Smith, Christina M. Burden.
Institutions: Arizona State University.
Insects modify their responses to stimuli through experience of associating those stimuli with events important for survival (e.g., food, mates, threats). There are several behavioral mechanisms through which an insect learns salient associations and relates them to these events. Understanding this behavioral plasticity is important for programs aimed at assisting insects that are beneficial to agriculture. This understanding can also be used for discovering solutions to biomedical and agricultural problems created by insects that act as disease vectors and pests. The Proboscis Extension Response (PER) conditioning protocol was developed for honey bees (Apis mellifera) over 50 years ago to study how they perceive and learn about floral odors, which signal the nectar and pollen resources a colony needs for survival. The PER procedure provides a robust and easy-to-employ framework for studying several different ecologically relevant mechanisms of behavioral plasticity. It is easily adaptable for use with several other insect species and other behavioral reflexes. These protocols can be readily employed in conjunction with various means for monitoring neural activity in the CNS via electrophysiology or bioimaging, or for manipulating targeted neuromodulatory pathways. It is a robust assay for rapidly detecting sub-lethal effects on behavior caused by environmental stressors, toxins or pesticides. We show how the PER protocol is straightforward to implement using two procedures. One is suitable as a laboratory exercise for students or for quick assays of the effect of an experimental treatment. The other provides more thorough control of variables, which is important for studies of behavioral conditioning. We show how several measures of the behavioral response, ranging from a binary yes/no to more continuous variables such as the latency and duration of proboscis extension, can be used to test hypotheses. Finally, we discuss some pitfalls that researchers commonly encounter when they use the procedure for the first time.
Neuroscience, Issue 91, PER, conditioning, honey bee, olfaction, olfactory processing, learning, memory, toxin assay
Engineering Platform and Experimental Protocol for Design and Evaluation of a Neurally-controlled Powered Transfemoral Prosthesis
Authors: Fan Zhang, Ming Liu, Stephen Harper, Michael Lee, He Huang.
Institutions: North Carolina State University & University of North Carolina at Chapel Hill, University of North Carolina School of Medicine, Atlantic Prosthetics & Orthotics, LLC.
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, in this study the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate our implemented neural controller while performing activities such as standing, level-ground walking, ramp ascent, and ramp descent continuously in the laboratory. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform and experimental setup and protocol could aid the future development and application of neurally-controlled powered artificial legs.
Biomedical Engineering, Issue 89, neural control, powered transfemoral prosthesis, electromyography (EMG), neural-machine interface, experimental setup and protocol
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
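As a sketch of the design-of-experiments idea, the snippet below builds a two-level full factorial design for three hypothetical factors and estimates main effects by least squares; the factor names and simulated yields are illustrative and not taken from the study.

```python
# Two-level full factorial design and a main-effects linear model fit.
import itertools
import numpy as np

factors = ["promoter", "incubation_temp", "plant_age"]       # coded -1 / +1 levels
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Simulated expression yields for the 8 runs (illustrative only)
rng = np.random.default_rng(0)
yield_response = 10 + 3 * design[:, 0] + 1.5 * design[:, 1] + rng.normal(0, 0.5, len(design))

X = np.column_stack([np.ones(len(design)), design])          # intercept + main effects
coef, *_ = np.linalg.lstsq(X, yield_response, rcond=None)
for name, c in zip(["intercept"] + factors, coef):
    print(f"{name:16s} effect estimate: {c:+.2f}")
```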
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Using Informational Connectivity to Measure the Synchronous Emergence of fMRI Multi-voxel Information Across Time
Authors: Marc N. Coutanche, Sharon L. Thompson-Schill.
Institutions: University of Pennsylvania.
It is now appreciated that condition-relevant information can be present within distributed patterns of functional magnetic resonance imaging (fMRI) brain activity, even for conditions with similar levels of univariate activation. Multi-voxel pattern (MVP) analysis has been used to decode this information with great success. FMRI investigators also often seek to understand how brain regions interact in interconnected networks, and use functional connectivity (FC) to identify regions that have correlated responses over time. Just as univariate analyses can be insensitive to information in MVPs, FC may not fully characterize the brain networks that process conditions with characteristic MVP signatures. The method described here, informational connectivity (IC), can identify regions with correlated changes in MVP-discriminability across time, revealing connectivity that is not accessible to FC. The method can be exploratory, using searchlights to identify seed-connected areas, or planned, between pre-selected regions-of-interest. The results can elucidate networks of regions that process MVP-related conditions, can break down MVPA searchlight maps into separate networks, or can be compared across tasks and patient groups.
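The core idea can be sketched as follows, assuming simulated data: compute a per-timepoint index of multi-voxel discriminability in each region and correlate those time courses across regions. The correlation-to-prototype index used here is one simple choice, not necessarily the authors' exact metric.

```python
# Informational-connectivity sketch: per-timepoint MVP discriminability in two
# regions, then the correlation of those discriminability time courses.
import numpy as np

def discriminability_timecourse(data, labels):
    # data: (timepoints, voxels); labels: (timepoints,) with values 0 or 1
    proto0 = data[labels == 0].mean(axis=0)
    proto1 = data[labels == 1].mean(axis=0)
    out = np.empty(len(data))
    for t, (pattern, lab) in enumerate(zip(data, labels)):
        r0 = np.corrcoef(pattern, proto0)[0, 1]
        r1 = np.corrcoef(pattern, proto1)[0, 1]
        out[t] = (r1 - r0) if lab == 1 else (r0 - r1)   # signed toward correct class
    return out

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 100)
region_a = rng.normal(size=(100, 50)) + labels[:, None] * 0.5   # simulated patterns
region_b = rng.normal(size=(100, 40)) + labels[:, None] * 0.5

ic = np.corrcoef(discriminability_timecourse(region_a, labels),
                 discriminability_timecourse(region_b, labels))[0, 1]
print(f"informational connectivity (A-B): {ic:.2f}")
```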
Neuroscience, Issue 89, fMRI, MVPA, connectivity, informational connectivity, functional connectivity, networks, multi-voxel pattern analysis, decoding, classification, method, multivariate
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Authors: Wen-Ting Tsai, Ahmed Hassan, Purbasha Sarkar, Joaquin Correa, Zoltan Metlagel, Danielle M. Jorgens, Manfred Auer.
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
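For the simplest semi-automated route, a first-pass sketch might look like the following: threshold a volume and label connected components before surface rendering and quantification. The synthetic volume and threshold are illustrative; real EM data typically require filtering and extensive manual curation.

```python
# Threshold-based first-pass segmentation of a (synthetic) 3D volume.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.normal(0.2, 0.05, size=(64, 64, 64))       # stand-in for an EM stack
volume[20:30, 20:30, 20:30] += 0.5                       # one bright "organelle"
volume[40:45, 10:20, 30:50] += 0.5                       # another feature

mask = volume > 0.45                                     # global intensity threshold
labels, n_features = ndimage.label(mask)                 # connected components
sizes = ndimage.sum(mask, labels, index=range(1, n_features + 1))
print(f"{n_features} segmented objects, voxel counts: {sizes.astype(int)}")
```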
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
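The minimum-norm estimation named in the keywords can be sketched as an L2-regularized inverse of the leadfield obtained from the head model; the dimensions and regularization value below are illustrative.

```python
# Minimum-norm estimate (MNE) sketch: x = L' (L L' + lambda I)^-1 y,
# where L is the leadfield (sensors x sources) and y is one timepoint of data.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 128, 5000
L = rng.normal(size=(n_sensors, n_sources))      # leadfield from the head model
y = rng.normal(size=n_sensors)                   # one timepoint of EEG data
lam = 0.1 * np.trace(L @ L.T) / n_sensors        # simple regularization heuristic

x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(x_hat.shape)                               # one amplitude per cortical source
```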
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Diffusion Tensor Magnetic Resonance Imaging in the Analysis of Neurodegenerative Diseases
Authors: Hans-Peter Müller, Jan Kassubek.
Institutions: University of Ulm.
Diffusion tensor imaging (DTI) techniques provide information on the microstructural processes of the cerebral white matter (WM) in vivo. The present applications are designed to investigate differences in WM involvement patterns in different brain diseases, especially neurodegenerative disorders, by use of different DTI analyses in comparison with matched controls. DTI data analysis is performed in several ways, i.e. voxelwise comparison of regional diffusion direction-based metrics such as fractional anisotropy (FA), together with fiber tracking (FT) accompanied by tractwise fractional anisotropy statistics (TFAS) at the group level in order to identify differences in FA along WM structures, aiming at the definition of regional patterns of WM alterations at the group level. Transformation into a stereotaxic standard space is a prerequisite for group studies and requires thorough data processing to preserve directional inter-dependencies. The present applications show optimized technical approaches for this preservation of quantitative and directional information during spatial normalization in data analyses at the group level. On this basis, FT techniques can be applied to group-averaged data in order to quantify metrics information as defined by FT. Additionally, application of DTI methods, i.e. differences in FA-maps after stereotaxic alignment, in a longitudinal analysis on an individual-subject basis reveals information about the progression of neurological disorders. Further quality improvement of DTI-based results can be obtained during preprocessing by application of a controlled elimination of gradient directions with high noise levels. In summary, DTI is used to define a distinct WM pathoanatomy of different brain diseases by the combination of whole brain-based and tract-based DTI analysis.
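Fractional anisotropy, the voxelwise metric named above, is computed from the eigenvalues of the diffusion tensor; a short sketch with an illustrative tensor:

```python
# Fractional anisotropy (FA) from the eigenvalues of the 3x3 diffusion tensor.
import numpy as np

def fractional_anisotropy(tensor):
    ev = np.linalg.eigvalsh(tensor)              # eigenvalues l1, l2, l3
    md = ev.mean()                               # mean diffusivity
    num = np.sqrt(((ev - md) ** 2).sum())
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * num / den              # 0 = isotropic, 1 = fully anisotropic

# Illustrative tensor with strong diffusion along one axis (units: mm^2/s)
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(f"FA = {fractional_anisotropy(D):.2f}")
```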
Medicine, Issue 77, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Anatomy, Physiology, Neurodegenerative Diseases, nuclear magnetic resonance, NMR, MR, MRI, diffusion tensor imaging, fiber tracking, group level comparison, neurodegenerative diseases, brain, imaging, clinical techniques
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
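Per signal region, the recasting arithmetic reduces to comparing sigma x acceptance x efficiency x luminosity against the published upper limit on signal events; the signal regions and numbers below are illustrative, not taken from any experiment.

```python
# Recasting sketch: expected signal yield per signal region versus a 95% CL
# upper limit on signal events. All values are illustrative.

def expected_signal(sigma_pb, acceptance, efficiency, lumi_ifb):
    return sigma_pb * 1000.0 * lumi_ifb * acceptance * efficiency   # pb -> fb

signal_regions = {
    # name: (acceptance, efficiency, 95% CL upper limit on signal events)
    "SR-tight": (0.12, 0.70, 6.5),
    "SR-loose": (0.25, 0.75, 25.0),
}

sigma_pb, lumi_ifb = 0.05, 20.3      # hypothetical cross section and luminosity
for name, (acc, eff, s95) in signal_regions.items():
    s = expected_signal(sigma_pb, acc, eff, lumi_ifb)
    print(f"{name}: expected {s:.1f} signal events -> "
          f"{'excluded' if s > s95 else 'allowed'} (limit {s95})")
```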
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
An Investigation of the Effects of Sports-related Concussion in Youth Using Functional Magnetic Resonance Imaging and the Head Impact Telemetry System
Authors: Michelle Keightley, Stephanie Green, Nick Reed, Sabrina Agnihotri, Amy Wilkinson, Nancy Lobaugh.
Institutions: University of Toronto, Bloorview Kids Rehab, Toronto Rehab, Sunnybrook Health Sciences Centre.
One of the most commonly reported injuries in children who participate in sports is concussion or mild traumatic brain injury (mTBI)1. Children and youth involved in organized sports such as competitive hockey are nearly six times more likely to suffer a severe concussion compared to children involved in other leisure physical activities2. While the most common cognitive sequelae of mTBI appear similar for children and adults, the recovery profile and breadth of consequences in children remains largely unknown2, as does the influence of pre-injury characteristics (e.g. gender) and injury details (e.g. magnitude and direction of impact) on long-term outcomes. Competitive sports, such as hockey, allow the rare opportunity to utilize a pre-post design to obtain pre-injury data before concussion occurs on youth characteristics and functioning and to relate this to outcome following injury. Our primary goals are to refine pediatric concussion diagnosis and management based on research evidence that is specific to children and youth. To do this we use new, multi-modal and integrative approaches that will: (1) evaluate the immediate effects of head trauma in youth, (2) monitor the resolution of post-concussion symptoms (PCS) and cognitive performance during recovery, and (3) utilize new methods to verify brain injury and recovery. To achieve our goals, we have implemented the Head Impact Telemetry (HIT) System (Simbex; Lebanon, NH, USA). This system equips commercially available Easton S9 hockey helmets (Easton-Bell Sports; Van Nuys, CA, USA) with single-axis accelerometers designed to measure real-time head accelerations during contact sport participation 3-5. By using telemetric technology, the magnitude of acceleration and location of all head impacts during sport participation can be objectively detected and recorded. We also use functional magnetic resonance imaging (fMRI) to localize and assess changes in neural activity specifically in the medial temporal and frontal lobes during the performance of cognitive tasks, since those are the cerebral regions most sensitive to concussive head injury 6. Finally, we are acquiring structural imaging data sensitive to damage in brain white matter.
Medicine, Issue 47, Mild traumatic brain injury, concussion, fMRI, youth, Head Impact Telemetry System
Aseptic Laboratory Techniques: Plating Methods
Authors: Erin R. Sanders.
Institutions: University of California, Los Angeles.
Microorganisms are present on all inanimate surfaces, creating ubiquitous sources of possible contamination in the laboratory. Experimental success relies on the ability of a scientist to sterilize work surfaces and equipment as well as prevent contact of sterile instruments and solutions with non-sterile surfaces. Here we present the steps for several plating methods routinely used in the laboratory to isolate, propagate, or enumerate microorganisms such as bacteria and phage. All five methods incorporate aseptic technique, or procedures that maintain the sterility of experimental materials. Procedures described include (1) streak-plating bacterial cultures to isolate single colonies, (2) pour-plating and (3) spread-plating to enumerate viable bacterial colonies, (4) soft agar overlays to isolate phage and enumerate plaques, and (5) replica-plating to transfer cells from one plate to another in an identical spatial pattern. These procedures can be performed at the laboratory bench, provided they involve non-pathogenic strains of microorganisms (Biosafety Level 1, BSL-1). If working with BSL-2 organisms, then these manipulations must take place in a biosafety cabinet. Consult the most current edition of the Biosafety in Microbiological and Biomedical Laboratories (BMBL) as well as Material Safety Data Sheets (MSDS) for Infectious Substances to determine the biohazard classification as well as the safety precautions and containment facilities required for the microorganism in question. Bacterial strains and phage stocks can be obtained from research investigators, companies, and collections maintained by particular organizations such as the American Type Culture Collection (ATCC). It is recommended that non-pathogenic strains be used when learning the various plating methods. By following the procedures described in this protocol, students should be able to:
● Perform plating procedures without contaminating media.
● Isolate single bacterial colonies by the streak-plating method.
● Use pour-plating and spread-plating methods to determine the concentration of bacteria.
● Perform soft agar overlays when working with phage.
● Transfer bacterial cells from one plate to another using the replica-plating procedure.
● Given an experimental task, select the appropriate plating method.
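The counting arithmetic behind pour- and spread-plating is simple; a sketch with illustrative numbers:

```python
# Viable titer from dilution plating: colonies divided by the total dilution
# factor and the volume plated. Numbers are illustrative.

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    # e.g. a 10^-6 dilution plated at 0.1 ml
    return colonies / (dilution_factor * volume_plated_ml)

print(cfu_per_ml(colonies=150, dilution_factor=1e-6, volume_plated_ml=0.1))
# -> 1.5e9 CFU/ml in the original culture
```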
Basic Protocols, Issue 63, Streak plates, pour plates, soft agar overlays, spread plates, replica plates, bacteria, colonies, phage, plaques, dilutions
Quantifying Yeast Chronological Life Span by Outgrowth of Aged Cells
Authors: Christopher Murakami, Matt Kaeberlein.
Institutions: University of Washington.
The budding yeast Saccharomyces cerevisiae has proven to be an important model organism in the field of aging research 1. The replicative and chronological life spans are two established paradigms used to study aging in yeast. Replicative aging is defined as the number of daughter cells a single yeast mother cell produces before senescence; chronological aging is defined by the length of time cells can survive in a non-dividing, quiescence-like state 2. We have developed a high-throughput method for quantitative measurement of chronological life span. This method involves aging the cells in a defined medium under agitation and at constant temperature. At each age-point, a sub-population of cells is removed from the aging culture and inoculated into rich growth medium. A high-resolution growth curve is then obtained for this sub-population of aged cells using a Bioscreen C MBR machine. An algorithm is then applied to determine the relative proportion of viable cells in each sub-population based on the growth kinetics at each age-point. This method requires substantially less time and resources compared to other chronological lifespan assays while maintaining reproducibility and precision. The high-throughput nature of this assay should allow for large-scale genetic and chemical screens to identify novel longevity modifiers for further testing in more complex organisms.
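One common way to convert outgrowth-curve shifts into viability is to count how many doublings the aged sub-population has effectively lost relative to the first age-point; this is a simplified stand-in for the authors' algorithm, with illustrative numbers.

```python
# Viability estimate from the time shift of an outgrowth curve: each lost
# doubling of viable cells delays the curve by one doubling time.
import numpy as np

def survival_from_shift(time_shift_hr, doubling_time_hr):
    return 2.0 ** (-time_shift_hr / doubling_time_hr)

doubling_time = 1.5                        # hr, strain/medium dependent (assumed)
shifts = np.array([0.0, 1.5, 4.5, 9.0])    # hr relative to the first age-point
for day, dt in zip([3, 5, 9, 13], shifts):
    print(f"day {day:2d}: estimated viability {survival_from_shift(dt, doubling_time):6.1%}")
```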
Microbiology, Issue 27, longevity, aging, chronological life span, yeast, Bioscreen C MBR, stationary phase
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jäncke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Trans-vivo Delayed Type Hypersensitivity Assay for Antigen Specific Regulation
Authors: Ewa Jankowska-Gan, Subramanya Hegde, William J. Burlingham.
Institutions: University of Wisconsin-Madison, School of Medicine and Public Health.
Delayed-type hypersensitivity response (DTH) is a rapid in vivo manifestation of a T cell-dependent immune response to a foreign antigen (Ag) that the host immune system has experienced in the recent past. DTH reactions are often divided into a sensitization phase, referring to the initial antigen experience, and a challenge phase, which usually follows several days after sensitization. The lack of a delayed-type hypersensitivity response to a recall Ag demonstrated by skin testing is often regarded as evidence of anergy. The traditional DTH assay has been effectively used in diagnosing many microbial infections. Despite sharing similar immune features such as lymphocyte infiltration, edema, and tissue necrosis, the direct DTH is not a feasible diagnostic technique in transplant patients because of the possibility of direct injection resulting in sensitization to donor antigens and graft loss. To avoid this problem, the human-to-mouse "trans-vivo" DTH assay was developed 1,2. This test is essentially a transfer DTH assay, in which human peripheral blood mononuclear cells (PBMCs) and specific antigens are injected subcutaneously into the pinnae or footpad of a naïve mouse and DTH-like swelling is measured after 18-24 hr 3. The antigen presentation by human antigen presenting cells such as macrophages or DCs to T cells in highly vascular mouse tissue triggers the inflammatory cascade and attracts mouse immune cells, resulting in swelling responses. The response is antigen-specific and requires prior antigen sensitization. A positive donor-reactive DTH response in the Tv-DTH assay reflects that the transplant patient has developed a pro-inflammatory immune disposition toward graft alloantigens. The most important feature of this assay is that it can also be used to detect regulatory T cells, which cause bystander suppression. Bystander suppression of a DTH recall response in the presence of donor antigen is characteristic of transplant recipients with accepted allografts 2,4-14. The monitoring of transplant recipients for alloreactivity and regulation by Tv-DTH may identify a subset of patients who could benefit from reduction of immunosuppression without elevated risk of rejection or deteriorating renal function. A promising area is the application of the Tv-DTH assay in monitoring of autoimmunity15,16 and also in tumor immunology 17.
Immunology, Issue 75, Medicine, Molecular Biology, Cellular Biology, Biomedical Engineering, Anatomy, Physiology, Cancer Biology, Surgery, Trans-vivo delayed type hypersensitivity, Tv-DTH, Donor antigen, Antigen-specific regulation, peripheral blood mononuclear cells, PBMC, T regulatory cells, severe combined immunodeficient mice, SCID, T cells, lymphocytes, inflammation, injection, mouse, animal model
Detection of Architectural Distortion in Prior Mammograms via Analysis of Oriented Patterns
Authors: Rangaraj M. Rangayyan, Shantanu Banik, J.E. Leo Desautels.
Institutions: University of Calgary.
We demonstrate methods for the detection of architectural distortion in prior mammograms of interval-cancer cases based on analysis of the orientation of breast tissue patterns in mammograms. We hypothesize that architectural distortion modifies the normal orientation of breast tissue patterns in mammographic images before the formation of masses or tumors. In the initial steps of our methods, the oriented structures in a given mammogram are analyzed using Gabor filters and phase portraits to detect node-like sites of radiating or intersecting tissue patterns. Each detected site is then characterized using the node value, fractal dimension, and a measure of angular dispersion specifically designed to represent spiculating patterns associated with architectural distortion. Our methods were tested with a database of 106 prior mammograms of 56 interval-cancer cases and 52 mammograms of 13 normal cases using the features developed for the characterization of architectural distortion, pattern classification via quadratic discriminant analysis, and validation with the leave-one-patient-out procedure. According to the results of free-response receiver operating characteristic analysis, our methods have demonstrated the capability to detect architectural distortion in prior mammograms, taken 15 months (on average) before clinical diagnosis of breast cancer, with a sensitivity of 80% at about five false positives per patient.
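The first step, estimating the local orientation field with a bank of Gabor filters, can be sketched as follows; the filter parameters are illustrative, and the later phase-portrait, node-map, and classification steps are not shown.

```python
# Dominant local orientation via a bank of Gabor filters at several angles.
import numpy as np
from scipy import ndimage

def gabor_kernel(theta, sigma=4.0, wavelength=8.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

image = np.random.rand(128, 128)                       # stand-in for a mammogram ROI
thetas = np.linspace(0, np.pi, 8, endpoint=False)
responses = np.stack([ndimage.convolve(image, gabor_kernel(t)) for t in thetas])

orientation_map = thetas[np.abs(responses).argmax(axis=0)]   # dominant angle per pixel
print(orientation_map.shape)
```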
Medicine, Issue 78, Anatomy, Physiology, Cancer Biology, angular spread, architectural distortion, breast cancer, Computer-Assisted Diagnosis, computer-aided diagnosis (CAD), entropy, fractional Brownian motion, fractal dimension, Gabor filters, Image Processing, Medical Informatics, node map, oriented texture, Pattern Recognition, phase portraits, prior mammograms, spectral analysis
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal-variation - is key to our work. Because our model is also linear in parameters, it is practical, computationally, to apply at the voxel level. The analysis technique is comprised of five main steps: pre-processing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
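The statistical comparison step amounts to an F-test between nested models at each voxel; below is a sketch with illustrative residual sums of squares and parameter counts (the lp-ntPET and reference models themselves are not implemented here).

```python
# F-test comparing a flexible (time-varying) model against a conventional
# nested model at one voxel, from residual sums of squares. Values illustrative.
import numpy as np
from scipy import stats

def f_test(rss_simple, p_simple, rss_full, p_full, n_frames):
    num = (rss_simple - rss_full) / (p_full - p_simple)
    den = rss_full / (n_frames - p_full)
    f = num / den
    p_value = stats.f.sf(f, p_full - p_simple, n_frames - p_full)
    return f, p_value

# e.g. a conventional model with 3 parameters vs a time-varying model with 7,
# fit to a 90-frame dynamic PET time-activity curve
f, p = f_test(rss_simple=12.4, p_simple=3, rss_full=9.1, p_full=7, n_frames=90)
print(f"F = {f:.2f}, p = {p:.4f}")
```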
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
Quantifying Agonist Activity at G Protein-coupled Receptors
Authors: Frederick J. Ehlert, Hinako Suga, Michael T. Griffin.
Institutions: University of California, Irvine, Chapman University.
When an agonist activates a population of G protein-coupled receptors (GPCRs), it elicits a signaling pathway that culminates in the response of the cell or tissue. This process can be analyzed at the level of a single receptor, a population of receptors, or a downstream response. Here we describe how to analyze the downstream response to obtain an estimate of the agonist affinity constant for the active state of single receptors. Receptors behave as quantal switches that alternate between active and inactive states (Figure 1). The active state interacts with specific G proteins or other signaling partners. In the absence of ligands, the inactive state predominates. The binding of agonist increases the probability that the receptor will switch into the active state because its affinity constant for the active state (Kb) is much greater than that for the inactive state (Ka). The summation of the random outputs of all of the receptors in the population yields a constant level of receptor activation in time. The reciprocal of the concentration of agonist eliciting half-maximal receptor activation is equivalent to the observed affinity constant (Kobs), and the fraction of agonist-receptor complexes in the active state is defined as efficacy (ε) (Figure 2). Methods for analyzing the downstream responses of GPCRs have been developed that enable the estimation of the Kobs and relative efficacy of an agonist 1,2. In this report, we show how to modify this analysis to estimate the agonist Kb value relative to that of another agonist. For assays that exhibit constitutive activity, we show how to estimate Kb in absolute units of M^-1. Our method of analyzing agonist concentration-response curves 3,4 consists of global nonlinear regression using the operational model 5. We describe a procedure using the software application, Prism (GraphPad Software, Inc., San Diego, CA). The analysis yields an estimate of the product of Kobs and a parameter proportional to efficacy (τ). The estimate of τKobs of one agonist, divided by that of another, is a relative measure of Kb (RAi) 6. For any receptor exhibiting constitutive activity, it is possible to estimate a parameter proportional to the efficacy of the free receptor complex (τsys). In this case, the Kb value of an agonist is equivalent to τKobs/τsys 3. Our method is useful for determining the selectivity of an agonist for receptor subtypes and for quantifying agonist-receptor signaling through different G proteins.
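A sketch of the relative-activity calculation, assuming for simplicity that the system maximal response is known and the transducer slope is 1; the concentrations and parameter values below are illustrative, not from the study.

```python
# Fit each agonist's concentration-response data to the operational model
#   E = Em * tau * [A] / (KA + [A] * (1 + tau)),
# then take the ratio of tau*Kobs (= tau/KA) between agonists as RAi.
import numpy as np
from scipy.optimize import curve_fit

EM = 100.0                                          # assumed known system maximum

def operational_model(conc, log_ka, log_tau):
    ka, tau = 10.0 ** log_ka, 10.0 ** log_tau
    return EM * tau * conc / (ka + conc * (1.0 + tau))

conc = np.logspace(-9, -4, 8)                       # molar concentrations
resp_a = operational_model(conc, np.log10(1e-6), np.log10(10.0))   # reference agonist
resp_b = operational_model(conc, np.log10(3e-6), np.log10(2.0))    # test agonist

tau_kobs = {}
for name, resp in [("A", resp_a), ("B", resp_b)]:
    (log_ka, log_tau), _ = curve_fit(operational_model, conc, resp, p0=(-6.5, 0.3))
    tau_kobs[name] = 10.0 ** (log_tau - log_ka)     # tau * Kobs, with Kobs = 1/KA
print(f"RAi of B relative to A: {tau_kobs['B'] / tau_kobs['A']:.3f}")
```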
Molecular Biology, Issue 58, agonist activity, active state, ligand bias, constitutive activity, G protein-coupled receptor
Measuring Diffusion Coefficients via Two-photon Fluorescence Recovery After Photobleaching
Authors: Kelley D. Sullivan, Edward B. Brown.
Institutions: University of Rochester.
Multiphoton fluorescence recovery after photobleaching (MP-FRAP) is a microscopy technique used to measure the diffusion coefficient (or analogous transport parameters) of macromolecules, and can be applied to both in vitro and in vivo biological systems. MP-FRAP is performed by photobleaching a region of interest within a fluorescent sample using an intense laser flash, then attenuating the beam and monitoring the fluorescence as still-fluorescent molecules from outside the region of interest diffuse in to replace the photobleached molecules. We will begin our demonstration by aligning the laser beam through the Pockels cell (laser modulator) and along the optical path through the laser scan box and objective lens to the sample. For simplicity, we will use a sample of aqueous fluorescent dye. We will then determine the proper experimental parameters for our sample, including monitor and bleaching powers, bleach duration, bin widths (for photon counting), and fluorescence recovery time. Next, we will describe the procedure for taking recovery curves, a process that can be largely automated via LabVIEW (National Instruments, Austin, TX) for enhanced throughput. Finally, the diffusion coefficient is determined by fitting the recovery data to the appropriate mathematical model using a least-squares fitting algorithm, readily programmable using software such as MATLAB (The Mathworks, Natick, MA).
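The final fitting step can be sketched with a simple one-component recovery model and a nonlinear least-squares fit; this single exponential and the assumed focal radius are simplifications of the full multiphoton FRAP model described by the authors.

```python
# Least-squares fit of a simulated recovery curve to
#   F(t) = F_inf - (F_inf - F_0) * exp(-t / tau_D),
# then conversion to a diffusion coefficient assuming tau_D = w_r^2 / (8 D)
# for two-photon excitation (focal radius w_r assumed).
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, f0, f_inf, tau_d):
    return f_inf - (f_inf - f0) * np.exp(-t / tau_d)

t = np.linspace(0, 0.002, 200)                        # seconds after the bleach
rng = np.random.default_rng(2)
data = recovery(t, 0.4, 1.0, 2e-4) + rng.normal(0, 0.01, t.size)   # simulated data

(f0, f_inf, tau_d), _ = curve_fit(recovery, t, data, p0=(0.5, 1.0, 1e-4))
w_r = 0.5e-4                                          # focal radius in cm (0.5 um), assumed
D = w_r**2 / (8 * tau_d)                              # cm^2/s
print(f"tau_D = {tau_d*1e3:.2f} ms, D = {D:.2e} cm^2/s")
```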
Cellular Biology, Issue 36, Diffusion, fluorescence recovery after photobleaching, MP-FRAP, FPR, multi-photon
Cross-Modal Multivariate Pattern Analysis
Authors: Kaspar Meyer, Jonas T. Kaplan.
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices5 or, analogously, the content of speech from activity in early auditory cortices6. Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog? In two previous studies7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply contains no content relevant to the topic of a given abstract. In these cases, our algorithm displays the most closely related videos it can find, which can sometimes result in matches with only a slight relation.