PubMed Article
Robust and censored modeling and prediction of progression in glaucomatous visual fields.
Invest. Ophthalmol. Vis. Sci.
PUBLISHED: 09-14-2013
Classic regression is based on certain assumptions that conflict with visual field (VF) data. We investigate and evaluate different regression models and their assumptions in order to determine point-wise VF progression in glaucoma and to better predict future field loss for personalised clinical glaucoma management.
Authors: Ou Tan, Yimin Wang, Ranjith K. Konduru, Xinbo Zhang, SriniVas R. Sadda, David Huang.
Published: 09-18-2012
ABSTRACT
Noncontact retinal blood flow measurements are performed with a Fourier domain optical coherence tomography (OCT) system using a circumpapillary double circular scan (CDCS) that scans around the optic nerve head at 3.40 mm and 3.75 mm diameters. The double concentric circles are performed 6 times consecutively over 2 sec. The CDCS scan is saved with Doppler shift information from which flow can be calculated. The standard clinical protocol calls for 3 CDCS scans made with the OCT beam passing through the superonasal edge of the pupil and 3 CDCS scans through the inferonasal edge. This double-angle protocol ensures that an acceptable Doppler angle is obtained on each retinal branch vessel in at least 1 scan. The CDCS scan data, a 3-dimensional volumetric OCT scan of the optic disc, and a color photograph of the optic disc are used together to obtain the retinal blood flow measurement for an eye. We have developed blood flow measurement software called "Doppler optical coherence tomography of retinal circulation" (DOCTORC). This semi-automated software is used to measure total retinal blood flow, vessel cross-sectional area, and average blood velocity. The flow in each vessel is calculated from the Doppler shift in the vessel cross-sectional area and the Doppler angle between the vessel and the OCT beam. Total retinal blood flow is obtained by summing the flow in the veins around the optic disc. The results obtained at our Doppler OCT reading center showed good reproducibility between graders and methods (<10%). Total retinal blood flow could be useful in the management of glaucoma and other retinal diseases. In glaucoma patients, OCT retinal blood flow measurement was highly correlated with visual field loss (R2>0.57 with visual field pattern deviation). Doppler OCT is a new method to perform rapid, noncontact, and repeatable measurement of total retinal blood flow using widely available Fourier-domain OCT instrumentation. This new technology may improve the practicality of making these measurements in clinical studies and routine clinical practice.
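The flow arithmetic described above reduces to a per-vessel calculation: the Doppler shift gives the velocity component along the OCT beam, the Doppler angle from the volumetric disc scan converts it to velocity along the vessel, and multiplying by the lumen cross-sectional area gives volumetric flow, summed over the veins. The sketch below illustrates that calculation only; it is not the DOCTORC software, and the wavelength, refractive index, and vessel values are invented for illustration.

```python
"""Illustrative sketch (not the DOCTORC software): per-vessel flow from Doppler OCT.

Assumed inputs (hypothetical values): mean Doppler shift in the vessel lumen,
Doppler angle from the 3D disc scan, and lumen cross-sectional area.
"""
import math

LAMBDA_0 = 841e-9      # center wavelength of the OCT source (m), assumed
N_TISSUE = 1.38        # refractive index of tissue, assumed

def vessel_flow(mean_doppler_shift_hz, doppler_angle_deg, lumen_area_mm2):
    """Volumetric flow (microliters/min) for one vessel cross-section."""
    # Axial (along-beam) velocity from the Doppler shift
    v_axial = mean_doppler_shift_hz * LAMBDA_0 / (2.0 * N_TISSUE)   # m/s
    # Correct for the angle between the beam and the vessel
    v_true = v_axial / math.cos(math.radians(doppler_angle_deg))    # m/s
    area_m2 = lumen_area_mm2 * 1e-6                                 # mm^2 -> m^2
    flow_m3_per_s = abs(v_true) * area_m2
    return flow_m3_per_s * 1e9 * 60.0                               # m^3/s -> uL/min

# Total retinal blood flow: sum over the veins around the optic disc
veins = [  # (Doppler shift Hz, Doppler angle deg, lumen area mm^2), made-up numbers
    (1500.0, 80.0, 0.012),
    (1200.0, 78.0, 0.010),
    (1800.0, 82.0, 0.015),
]
total = sum(vessel_flow(*v) for v in veins)
print(f"Total retinal blood flow ~ {total:.1f} uL/min")
```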
25 Related JoVE Articles!
A Laser-induced Mouse Model of Chronic Ocular Hypertension to Characterize Visual Defects
Authors: Liang Feng, Hui Chen, Genn Suyeoka, Xiaorong Liu.
Institutions: Northwestern University, Northwestern University.
Glaucoma, frequently associated with elevated intraocular pressure (IOP), is one of the leading causes of blindness. We sought to establish a mouse model of ocular hypertension to mimic human high-tension glaucoma. Here, laser illumination is applied to the corneal limbus to photocoagulate the aqueous outflow, inducing angle closure. Changes in IOP are monitored using a rebound tonometer before and after the laser treatment. An optomotor behavioral test is used to measure corresponding changes in visual capacity. The representative result from one mouse that developed sustained IOP elevation after laser illumination is shown: decreased visual acuity and contrast sensitivity are observed in this ocular hypertensive mouse. Together, our study introduces a valuable model system for investigating neuronal degeneration and the underlying molecular mechanisms in glaucomatous mice.
Medicine, Issue 78, Biomedical Engineering, Neurobiology, Anatomy, Physiology, Neuroscience, Cellular Biology, Molecular Biology, Ophthalmology, Retinal Neurons, Retinal Ganglion Cell (RGC), Neurodegenerative Diseases, Ocular Hypertension, Retinal Degeneration, Vision Tests, Visual Acuity, Eye Diseases, Laser Photocoagulation, Intraocular Pressure (IOP), Tonometer, Contrast Sensitivity, Optomotor, animal model
An Optic Nerve Crush Injury Murine Model to Study Retinal Ganglion Cell Survival
Authors: Zhongshu Tang, Shuihua Zhang, Chunsik Lee, Anil Kumar, Pachiappan Arjunan, Yang Li, Fan Zhang, Xuri Li.
Institutions: NIH, The Second Hospital of Harbin Medical University.
Injury to the optic nerve can lead to axonal degeneration, followed by a gradual death of retinal ganglion cells (RGCs), which results in irreversible vision loss. In humans, examples of such diseases include traumatic optic neuropathy and optic nerve degeneration in glaucoma. Glaucoma is characterized by typical changes in the optic nerve head, progressive optic nerve degeneration, and loss of retinal ganglion cells which, if uncontrolled, lead to vision loss and blindness. The optic nerve crush (ONC) injury mouse model is an important experimental disease model for traumatic optic neuropathy, glaucoma, and related conditions. In this model, the crush injury to the optic nerve leads to gradual apoptosis of retinal ganglion cells. The model can be used to study the general processes and mechanisms of neuronal death and survival, which is essential for the development of therapeutic measures. In addition, pharmacological and molecular approaches can be used in this model to identify and test potential therapeutic reagents for treating different types of optic neuropathy. Here, we provide a step-by-step demonstration of (I) baseline retrograde labeling of retinal ganglion cells (RGCs) at day 1, (II) optic nerve crush injury at day 4, (III) harvesting the retinae and analyzing RGC survival at day 11, and (IV) representative results.
Neuroscience, Issue 50, optic nerve crush injury, retinal ganglion cell, glaucoma, optic neuropathy, retrograde labeling
Morphometric Analyses of Retinal Sections
Authors: Tin Fung Chan, Kin Chiu, Carmen Ka Ming Lok, Wing Lau Ho, Kwok-Fai So, Raymond Chuen-Chung Chang.
Institutions: The University of Hong Kong, The University of Hong Kong, The University of Hong Kong.
Morphometric analyses of retinal sections are widely used to examine retinal diseases. For example, neurons are significantly lost from the retinal ganglion cell layer (RGCL) in rat models of N-methyl-D-aspartate (NMDA)-induced excitotoxicity1, retinal ischemia-reperfusion injury2 and glaucoma3. Reductions in inner nuclear layer (INL) and inner plexiform layer (IPL) thickness were reversed by citicoline treatment in rat eyes subjected to kainic acid-mediated glutamate excitotoxicity4, and alterations in RGC density and soma size have been observed with different drug treatments in eyes with elevated intraocular pressure3,5,6. Objective methods for analyzing retinal morphometry are therefore valuable for evaluating retinal pathologies and the effectiveness of therapeutic strategies. The retina is a multi-layered structure containing several kinds of neurons, so morphometric parameters such as cell number, cell size and the thickness of different layers are more complex than in cell culture systems. These parameters can be measured with other commercial imaging software, but the values are often relative, and converting them to absolute values may require further calculation; tracing of cell size and morphology may also not be accurate or sensitive enough for statistical analysis, especially in chronic glaucoma models. The measurements used in this protocol provide a more precise and convenient alternative: absolute line lengths and cell sizes are reported directly and are easily exported to other files. For example, to measure INL thickness we trace the margins of the innermost and outermost nuclei of the INL to form a line and then use the software to draw a perpendicular (90 degree) line across it; without this support the measurement line may be oblique, and changes in retinal thickness may not be reproducible between observers. The number and density of RGCs can also be quantified. This protocol decreases the variability in quantifying features of the retina and increases the sensitivity for detecting small changes. The video demonstrates three types of morphometric analysis of retinal sections: measuring INL thickness, counting RGCs, and measuring RGC soma size in absolute values. All three analyses are carried out with Stereo Investigator (MBF Bioscience — MicroBrightField, Inc.), which offers a simple but rigorous platform for morphometric analyses.
Neuroscience, Issue 60, morphometric analysis, retina, thickness, cell size, Stereo Investigator, neuroscience
Experimental Metastasis and CTL Adoptive Transfer Immunotherapy Mouse Model
Authors: Mary Zimmerman, Xiaolin Hu, Kebin Liu.
Institutions: Medical College of Georgia.
The experimental metastasis mouse model is a simple yet physiologically relevant metastasis model. Tumor cells are injected intravenously (i.v.) into the mouse tail vein and colonize the lungs, thereby resembling the last steps of spontaneous tumor cell metastasis: survival in the circulation, extravasation, and colonization of distal organs. From a therapeutic point of view, the experimental metastasis model is the simplest and an ideal model, since the target of therapy is often the end point of metastasis: an established metastatic tumor in a distal organ. In this model, tumor cells are injected i.v. into the mouse tail vein and allowed to colonize and grow in the lungs. Tumor-specific CTLs are then injected i.v. into the metastases-bearing mouse. The number and size of the lung metastases can be controlled by the number of tumor cells injected and the time allowed for tumor growth. Therefore, various stages of metastasis, from minimal to extensive, can be modeled. Lung metastases are analyzed by inflation with ink, allowing easier visual observation and quantification.
Immunology, Issue 45, Metastasis, CTL adoptive transfer, Lung, Tumor Immunology
A Protocol for Computer-Based Protein Structure and Function Prediction
Authors: Ambrish Roy, Dong Xu, Jonathan Poisson, Yang Zhang.
Institutions: University of Michigan , University of Kansas.
Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules that are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function. Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All predictions are tagged with a confidence score which indicates how accurate they are expected to be in the absence of experimental data. To accommodate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. Such structural information can be collected by users from experimental evidence or biological insight with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as the best program for protein structure and function prediction in recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries using the on-line I-TASSER server.
Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction
Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules
Authors: James Smadbeck, Meghan B. Peterson, George A. Khoury, Martin S. Taylor, Christodoulos A. Floudas.
Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and of protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and of complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (https://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is a sequence selection stage that aims to improve stability through minimization of the potential energy in sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with the relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of these methods.
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing
A High Throughput MHC II Binding Assay for Quantitative Analysis of Peptide Epitopes
Authors: Regina Salvat, Leonard Moise, Chris Bailey-Kellogg, Karl E. Griswold.
Institutions: Dartmouth College, University of Rhode Island, Dartmouth College.
Biochemical assays with recombinant human MHC II molecules can provide rapid, quantitative insights into immunogenic epitope identification, deletion, or design1,2. Here, a peptide-MHC II binding assay is scaled to 384-well format. The scaled down protocol reduces reagent costs by 75% and is higher throughput than previously described 96-well protocols1,3-5. Specifically, the experimental design permits robust and reproducible analysis of up to 15 peptides against one MHC II allele per 384-well ELISA plate. Using a single liquid handling robot, this method allows one researcher to analyze approximately ninety test peptides in triplicate over a range of eight concentrations and four MHC II allele types in less than 48 hr. Others working in the fields of protein deimmunization or vaccine design and development may find the protocol to be useful in facilitating their own work. In particular, the step-by-step instructions and the visual format of JoVE should allow other users to quickly and easily establish this methodology in their own labs.
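The assay measures each peptide over eight concentrations in triplicate, which is naturally summarized by a dose-response fit. The sketch below shows a generic four-parameter logistic fit yielding an IC50-like midpoint; it is not the authors' analysis pipeline, and the concentrations and signal values are invented.

```python
"""Generic dose-response summary for a competition binding assay (not the authors' pipeline).

Fits a four-parameter logistic curve to signal vs. peptide concentration and
reports an IC50-like midpoint. Data values are invented for illustration.
"""
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: signal as a function of competitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Eight concentrations (uM) and triplicate-averaged signals (arbitrary units), invented
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal = np.array([1.95, 1.90, 1.75, 1.40, 0.95, 0.55, 0.30, 0.22])

p0 = [signal.min(), signal.max(), 1.0, 1.0]          # rough starting guesses
params, _ = curve_fit(four_pl, conc, signal, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"IC50 ~ {ic50:.2f} uM (Hill slope {hill:.2f})")
```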
Biochemistry, Issue 85, Immunoassay, Protein Immunogenicity, MHC II, T cell epitope, High Throughput Screen, Deimmunization, Vaccine Design
In Vivo Modeling of the Morbid Human Genome using Danio rerio
Authors: Adrienne R. Niederriter, Erica E. Davis, Christelle Golzio, Edwin C. Oh, I-Chun Tsai, Nicholas Katsanis.
Institutions: Duke University Medical Center, Duke University, Duke University Medical Center.
Here, we present methods for the development of assays to query potentially clinically significant nonsynonymous changes using in vivo complementation in zebrafish. Zebrafish (Danio rerio) are a useful animal system due to their experimental tractability; embryos are transparent to enable facile viewing, undergo rapid development ex vivo, and can be genetically manipulated.1 These aspects have allowed for significant advances in the analysis of embryogenesis, molecular processes, and morphogenetic signaling. Taken together, the advantages of this vertebrate model make zebrafish highly amenable to modeling the developmental defects in pediatric disease, and in some cases, adult-onset disorders. Because the zebrafish genome is highly conserved with that of humans (~70% orthologous), it is possible to recapitulate human disease states in zebrafish. This is accomplished either through the injection of mutant human mRNA to induce dominant negative or gain of function alleles, or utilization of morpholino (MO) antisense oligonucleotides to suppress genes to mimic loss of function variants. Through complementation of MO-induced phenotypes with capped human mRNA, our approach enables the interpretation of the deleterious effect of mutations on human protein sequence based on the ability of mutant mRNA to rescue a measurable, physiologically relevant phenotype. Modeling of the human disease alleles occurs through microinjection of zebrafish embryos with MO and/or human mRNA at the 1-4 cell stage, and phenotyping up to seven days post fertilization (dpf). This general strategy can be extended to a wide range of disease phenotypes, as demonstrated in the following protocol. We present our established models for morphogenetic signaling, craniofacial, cardiac, vascular integrity, renal function, and skeletal muscle disorder phenotypes, as well as others.
Molecular Biology, Issue 78, Genetics, Biomedical Engineering, Medicine, Developmental Biology, Biochemistry, Anatomy, Physiology, Bioengineering, Genomics, Medical, zebrafish, in vivo, morpholino, human disease modeling, transcription, PCR, mRNA, DNA, Danio rerio, animal model
The ITS2 Database
Authors: Benjamin Merget, Christian Koetschan, Thomas Hackl, Frank Förster, Thomas Dandekar, Tobias Müller, Jörg Schultz, Matthias Wolf.
Institutions: University of Würzburg, University of Würzburg.
The internal transcribed spacer 2 (ITS2) has been used as a phylogenetic marker for more than two decades. As ITS2 research mainly focused on the very variable ITS2 sequence, it confined this marker to low-level phylogenetics only. However, the combination of the ITS2 sequence and its highly conserved secondary structure improves the phylogenetic resolution1 and allows phylogenetic inference at multiple taxonomic ranks, including species delimitation2-8. The ITS2 Database9 presents an exhaustive dataset of internal transcribed spacer 2 sequences from NCBI GenBank11, accurately reannotated10. Following annotation by profile Hidden Markov Models (HMMs), the secondary structure of each sequence is predicted. First, it is tested whether a minimum energy based fold12 (direct fold) results in a correct, four-helix conformation. If this is not the case, the structure is predicted by homology modeling13. In homology modeling, an already known secondary structure is transferred to another ITS2 sequence whose secondary structure could not be predicted correctly by direct folding. The ITS2 Database is not only a database for storage and retrieval of ITS2 sequence-structures. It also provides several tools to process your own ITS2 sequences, including annotation, structural prediction, motif detection and BLAST14 search on the combined sequence-structure information. Moreover, it integrates trimmed versions of 4SALE15,16 and ProfDistS17 for multiple sequence-structure alignment calculation and Neighbor Joining18 tree reconstruction. Together they form a coherent analysis pipeline from an initial set of sequences to a phylogeny based on sequence and secondary structure. In a nutshell, this workbench simplifies first phylogenetic analyses to only a few mouse-clicks, while additionally providing tools and data for comprehensive large-scale analyses.
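The homology-modeling step described above transfers a template's known secondary structure onto a query sequence through an alignment. The sketch below illustrates that transfer for toy dot-bracket structures; it is not the ITS2 Database code, and the sequences are invented.

```python
"""Simplified illustration of the homology-modelling step described above:
transferring a template's dot-bracket secondary structure onto a query
sequence through a pairwise alignment. Toy sequences; not the ITS2 Database code.
"""

def transfer_structure(template_aln, template_struct, query_aln):
    """Map template dot-bracket symbols onto the aligned query.

    template_aln / query_aln: aligned sequences of equal length ('-' = gap).
    template_struct: dot-bracket string for the *ungapped* template.
    Returns a dot-bracket string for the ungapped query; base pairs whose
    partner falls in a query gap are opened to '.' to keep brackets balanced.
    """
    assert len(template_aln) == len(query_aln)
    # Expand the template structure to alignment coordinates
    expanded, it = [], iter(template_struct)
    for c in template_aln:
        expanded.append(next(it) if c != '-' else '-')

    # Pair up brackets in alignment coordinates
    stack, partner = [], {}
    for i, s in enumerate(expanded):
        if s == '(':
            stack.append(i)
        elif s == ')':
            j = stack.pop()
            partner[i], partner[j] = j, i

    # Keep a pair only if both partners are non-gap columns in the query
    out = []
    for i, (s, q) in enumerate(zip(expanded, query_aln)):
        if q == '-':
            continue
        if s in '()' and query_aln[partner[i]] == '-':
            out.append('.')
        elif s == '-':
            out.append('.')        # query insertion relative to the template
        else:
            out.append(s)
    return ''.join(out)

template_aln = "GGCAU-ACGAUGCC"
template_str = "((((....))))."   # structure of the ungapped template (13 nt)
query_aln    = "GGCAUUACG-UGCC"
print(transfer_structure(template_aln, template_str, query_aln))
```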
Genetics, Issue 61, alignment, internal transcribed spacer 2, molecular systematics, secondary structure, ribosomal RNA, phylogenetic tree, homology modeling, phylogeny
RNA Secondary Structure Prediction Using High-throughput SHAPE
Authors: Sabrina Lusvarghi, Joanna Sztuba-Solinska, Katarzyna J. Purzycka, Jason W. Rausch, Stuart F.J. Le Grice.
Institutions: Frederick National Laboratory for Cancer Research.
Understanding the function of RNA involved in biological processes requires a thorough knowledge of RNA structure. Toward this end, the methodology dubbed "high-throughput selective 2' hydroxyl acylation analyzed by primer extension", or SHAPE, allows prediction of RNA secondary structure with single nucleotide resolution. This approach utilizes chemical probing agents that preferentially acylate single stranded or flexible regions of RNA in aqueous solution. Sites of chemical modification are detected by reverse transcription of the modified RNA, and the products of this reaction are fractionated by automated capillary electrophoresis (CE). Since reverse transcriptase pauses at those RNA nucleotides modified by the SHAPE reagents, the resulting cDNA library indirectly maps those ribonucleotides that are single stranded in the context of the folded RNA. Using ShapeFinder software, the electropherograms produced by automated CE are processed and converted into nucleotide reactivity tables that are themselves converted into pseudo-energy constraints used in the RNAStructure (v5.3) prediction algorithm. The two-dimensional RNA structures obtained by combining SHAPE probing with in silico RNA secondary structure prediction have been found to be far more accurate than structures obtained using either method alone.
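The conversion of reactivities into pseudo-energy constraints is commonly done with a linear-log relationship of the form dG(i) = m*ln(reactivity + 1) + b. The sketch below applies the widely quoted default slope and intercept (about 2.6 and -0.8 kcal/mol); these parameter values and the reactivity profile are assumptions for illustration and should be checked against the RNAstructure version in use.

```python
"""Sketch: converting normalized SHAPE reactivities into pseudo-free-energy
terms of the form dG(i) = m*ln(reactivity + 1) + b. The slope and intercept
below are the commonly quoted defaults (assumed here); reactivities are invented.
"""
import math

M_SLOPE = 2.6       # kcal/mol, assumed default
B_INTERCEPT = -0.8  # kcal/mol, assumed default

def shape_pseudo_energy(reactivity):
    """Pseudo-energy term for one nucleotide; negative reactivities
    (positions with no data) are conventionally skipped."""
    if reactivity < 0:
        return None
    return M_SLOPE * math.log(reactivity + 1.0) + B_INTERCEPT

profile = [0.02, 0.10, 1.35, 0.85, -999, 0.00, 0.45]   # -999 = no data
for i, r in enumerate(profile, start=1):
    dg = shape_pseudo_energy(r)
    print(i, "no data" if dg is None else f"{dg:+.2f} kcal/mol")
```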
Genetics, Issue 75, Molecular Biology, Biochemistry, Virology, Cancer Biology, Medicine, Genomics, Nucleic Acid Probes, RNA Probes, RNA, High-throughput SHAPE, Capillary electrophoresis, RNA structure, RNA probing, RNA folding, secondary structure, DNA, nucleic acids, electropherogram, synthesis, transcription, high throughput, sequencing
Tumor Treating Field Therapy in Combination with Bevacizumab for the Treatment of Recurrent Glioblastoma
Authors: Ayman I. Omar.
Institutions: Southern Illinois University School of Medicine.
A novel device that employs TTF therapy has recently been developed and is currently in use for the treatment of recurrent glioblastoma (rGBM). It was FDA approved in April 2011 for the treatment of patients 22 years or older with rGBM. The device delivers alternating electric fields and is programmed to ensure maximal tumor cell kill1. Glioblastoma is the most common type of glioma and has an estimated incidence of approximately 10,000 new cases per year in the United States alone2. This tumor is particularly resistant to treatment and is uniformly fatal, especially in the recurrent setting3-5. Prior to the approval of the TTF System, the only FDA approved treatment for rGBM was bevacizumab6. Bevacizumab is a humanized monoclonal antibody targeted against the vascular endothelial growth factor (VEGF) protein that drives tumor angiogenesis7. By blocking the VEGF pathway, bevacizumab can result in a significant radiographic response (pseudoresponse), improve progression-free survival, and reduce corticosteroid requirements in rGBM patients8,9. Bevacizumab, however, failed to prolong overall survival in a recent phase III trial26. A pivotal phase III trial (EF-11) demonstrated comparable overall survival between physicians' choice chemotherapy and TTF Therapy, but better quality of life was observed in the TTF arm10. There is currently an unmet need to develop novel approaches designed to prolong overall survival and/or improve quality of life in this unfortunate patient population. One appealing approach would be to combine the two currently approved treatment modalities, namely bevacizumab and TTF Therapy. These two treatments are currently approved as monotherapy11,12, but their combination has never been evaluated in a clinical trial. We have developed an approach for combining these two treatment modalities and have treated 2 rGBM patients. Here we describe a detailed methodology outlining this novel treatment protocol and present representative data from one of the treated patients.
Medicine, Issue 92, Tumor Treating Fields, TTF System, TTF Therapy, Recurrent Glioblastoma, Bevacizumab, Brain Tumor
Intravitreous Injection for Establishing Ocular Diseases Model
Authors: Kin Chiu, Raymond Chuen-Chung Chang, Kwok-Fai So.
Institutions: The University of Hong Kong - HKU.
Intravitreous injection is a widely used technique in visual sciences research. It can be used to establish animal models of ocular diseases or to deliver local treatment directly. This video introduces how to use simple and inexpensive tools to perform the intravitreous injection procedure. A 1 ml syringe is used instead of a Hamilton syringe. Practical tips are given for making appropriate injection needles from glass pipettes with good tips, and for connecting the syringe needle and the glass pipette tightly together. To conduct a good intravitreous injection, three aspects must be observed: 1) the injection site should not disrupt the retinal structure; 2) bleeding should be avoided to reduce the risk of infection; 3) the lens should be untouched to avoid traumatic cataract. In brief, the most important point is to minimize disruption of the normal ocular structure. To avoid disturbing the retina, the superior nasal region of the rat eye was chosen, and the puncture point of the needle was at the pars plana, about 1.5 mm from the limbus of the rat eye. A small amount of vitreous is gently pushed out through the puncture hole to reduce the intraocular pressure before injection. With a 45° injection angle, traumatic cataract in the rat eye is less likely, thus avoiding related complications and influence from lenticular factors. In this operation there is no cutting of the conjunctiva or ocular muscle and no bleeding. With such quick and minor injury, a successful intravitreous injection can be completed in minutes. The injection set outlined in this particular protocol is specific for intravitreous injection. However, the methods and materials presented here can also be used for other injection procedures for drug delivery to the brain, spinal cord or other organs in small mammals.
Neuroscience, Issue 8, eye, injection, rat
Measurement of Greenhouse Gas Flux from Agricultural Soils Using Static Chambers
Authors: Sarah M. Collier, Matthew D. Ruark, Lawrence G. Oates, William E. Jokela, Curtis J. Dell.
Institutions: University of Wisconsin-Madison, University of Wisconsin-Madison, University of Wisconsin-Madison, University of Wisconsin-Madison, USDA-ARS Dairy Forage Research Center, USDA-ARS Pasture Systems Watershed Management Research Unit.
Measurement of greenhouse gas (GHG) fluxes between the soil and the atmosphere, in both managed and unmanaged ecosystems, is critical to understanding the biogeochemical drivers of climate change and to the development and evaluation of GHG mitigation strategies based on modulation of landscape management practices. The static chamber-based method described here is based on trapping gases emitted from the soil surface within a chamber and collecting samples from the chamber headspace at regular intervals for analysis by gas chromatography. Change in gas concentration over time is used to calculate flux. This method can be utilized to measure landscape-based flux of carbon dioxide, nitrous oxide, and methane, and to estimate differences between treatments or explore system dynamics over seasons or years. Infrastructure requirements are modest, but a comprehensive experimental design is essential. This method is easily deployed in the field, conforms to established guidelines, and produces data suitable to large-scale GHG emissions studies.
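The flux calculation itself is a linear regression of headspace concentration against time, scaled by chamber volume and footprint and converted from a mixing ratio to a mass basis with the ideal gas law. The sketch below shows that arithmetic for an invented N2O time series and chamber geometry; it is not the authors' processing code, and all dimensions and conditions are assumptions.

```python
"""Sketch of the flux calculation described above: regress headspace
concentration against time and scale by chamber geometry. Chamber dimensions,
temperature, and concentrations are invented; check units against your own
chamber design and QA/QC guidelines.
"""
import numpy as np

# Headspace N2O samples taken at regular intervals after chamber closure
t_min = np.array([0.0, 15.0, 30.0, 45.0])              # minutes
c_ppb = np.array([330.0, 380.0, 428.0, 470.0])          # N2O, ppb (invented)

slope_ppb_per_min, _ = np.polyfit(t_min, c_ppb, 1)       # linear change over time

# Chamber geometry and conditions (assumed)
volume_m3 = 0.012       # headspace volume
area_m2 = 0.067         # soil surface area covered
temp_k = 295.0
pressure_pa = 101325.0
R = 8.314               # J mol^-1 K^-1
MOLAR_MASS_N2O = 44.01  # g/mol

# ppb -> mole fraction -> moles of gas -> mass flux per unit area
mol_air_per_m3 = pressure_pa / (R * temp_k)              # total moles of air per m^3
flux_g_m2_min = (slope_ppb_per_min * 1e-9) * mol_air_per_m3 \
                * MOLAR_MASS_N2O * volume_m3 / area_m2
print(f"N2O flux ~ {flux_g_m2_min * 1e6 * 60:.1f} ug N2O m^-2 hr^-1")
```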
Environmental Sciences, Issue 90, greenhouse gas, trace gas, gas flux, static chamber, soil, field, agriculture, climate
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
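One common way to summarize the modified Stroop task mentioned above is the congruency effect: the mean reaction-time difference between incongruent and congruent trials, compared before and after the reading phase. The sketch below is a generic illustration with invented reaction times, not the authors' analysis scripts.

```python
"""Generic sketch of a Stroop congruency analysis (not the authors' scripts):
the learning effect is summarized as the incongruent-minus-congruent reaction
time difference, compared before and after the reading phase. RTs are invented.
"""
import statistics

def stroop_effect(trials):
    """trials: list of (condition, rt_ms) with condition 'congruent'/'incongruent'."""
    cong = [rt for c, rt in trials if c == "congruent"]
    incong = [rt for c, rt in trials if c == "incongruent"]
    return statistics.mean(incong) - statistics.mean(cong)

pre_training = [("congruent", 520), ("incongruent", 525), ("congruent", 510),
                ("incongruent", 515), ("congruent", 530), ("incongruent", 528)]
post_training = [("congruent", 505), ("incongruent", 560), ("congruent", 498),
                 ("incongruent", 575), ("congruent", 512), ("incongruent", 570)]

print(f"Stroop effect pre:  {stroop_effect(pre_training):.0f} ms")
print(f"Stroop effect post: {stroop_effect(post_training):.0f} ms")
```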
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Assessing Cerebral Autoregulation via Oscillatory Lower Body Negative Pressure and Projection Pursuit Regression
Authors: J. Andrew Taylor, Can Ozan Tan, J. W. Hamner.
Institutions: Harvard Medical School, Spaulding Hospital Cambridge.
The process by which cerebral perfusion is maintained constant over a wide range of systemic pressures is known as "cerebral autoregulation." Effective dampening of flow against pressure changes occurs over periods as short as ~15 sec and becomes progressively greater over longer time periods. Thus, slower changes in blood pressure are effectively blunted and faster changes or fluctuations pass through to cerebral blood flow relatively unaffected. The primary difficulty in characterizing the frequency dependence of cerebral autoregulation is the lack of prominent spontaneous fluctuations in arterial pressure around the frequencies of interest (less than ~0.07 Hz, or ~15 sec). Oscillatory lower body negative pressure (OLBNP) can be employed to generate oscillations in central venous return that result in arterial pressure fluctuations at the frequency of OLBNP. Moreover, Projection Pursuit Regression (PPR) provides a nonparametric method to characterize nonlinear relations inherent in the system without a priori assumptions, and reveals the characteristic nonlinearity of cerebral autoregulation. OLBNP generates larger fluctuations in arterial pressure as the frequency of negative pressure oscillations becomes slower; however, fluctuations in cerebral blood flow become progressively smaller. Hence, the PPR shows an increasingly prominent autoregulatory region at OLBNP frequencies of 0.05 Hz and below (20 sec cycles). The goal of this approach is to allow laboratory-based determination of the characteristic nonlinear relationship between pressure and cerebral flow, and it could provide unique insight into integrated cerebrovascular control as well as into physiological alterations underlying impaired cerebral autoregulation (e.g., after traumatic brain injury, stroke, etc.).
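A simple way to see the frequency dependence described above is to extract the amplitude of the pressure and flow oscillations at the driven OLBNP frequency and take their ratio. The sketch below does this with a Fourier transform on synthetic signals; it only illustrates the linear part of the problem and is not the projection pursuit regression used in the protocol. The sampling rate and signal values are assumptions.

```python
"""Sketch: amplitude of pressure and cerebral flow oscillations at the OLBNP
frequency, and their ratio (a simple linear gain). This is *not* the projection
pursuit regression used in the protocol; it only illustrates how fluctuations
at the driven frequency can be extracted. Signals are synthetic.
"""
import numpy as np

fs = 10.0                      # resampled beat-by-beat signals at 10 Hz (assumed)
f_olbnp = 0.05                 # OLBNP frequency, Hz (20 sec cycles)
t = np.arange(0, 300, 1 / fs)  # 5 min recording

# Synthetic signals: flow fluctuates less than pressure at this slow frequency
pressure = 90 + 8.0 * np.sin(2 * np.pi * f_olbnp * t) + np.random.randn(t.size)
flow     = 60 + 2.0 * np.sin(2 * np.pi * f_olbnp * t + 0.6) + np.random.randn(t.size)

def amplitude_at(signal, freq):
    """Single-sided Fourier amplitude of `signal` at `freq`."""
    spec = np.fft.rfft(signal - signal.mean())
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    return 2.0 * np.abs(spec[k]) / signal.size

ap = amplitude_at(pressure, f_olbnp)
af = amplitude_at(flow, f_olbnp)
print(f"Pressure amplitude {ap:.1f} mmHg, flow amplitude {af:.1f} cm/s, gain {af/ap:.2f}")
```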
Medicine, Issue 94, cerebral blood flow, lower body negative pressure, autoregulation, sympathetic nervous system
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
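Categorical perception along a morph continuum is typically assessed by fitting a sigmoid identification function and locating the category boundary and its steepness. The sketch below fits a logistic function to invented response proportions; it is a generic illustration, not the analysis used in the accompanying article.

```python
"""Generic sketch (not the authors' analysis): fitting a logistic identification
function to the proportion of "human" responses along a morph continuum, to
locate the category boundary and its steepness. Response proportions are invented.
"""
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'human' categorizations at morph level x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

morph_level = np.linspace(0.0, 1.0, 11)   # 0 = clearly artificial, 1 = photorealistic human
p_human = np.array([0.02, 0.03, 0.05, 0.10, 0.22, 0.55, 0.83, 0.93, 0.97, 0.98, 0.99])

(x0, k), _ = curve_fit(logistic, morph_level, p_human, p0=[0.5, 10.0])
print(f"Category boundary at morph level {x0:.2f}, slope {k:.1f}")
```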
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
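The measures named above (fixation duration, number of fixations, and regressions) can be computed directly from a sequence of fixations tagged with the word each one landed on. The sketch below uses invented fixation data and a simplistic definition of a regression (any fixation on an earlier word than the previous one); real analyses also handle blinks, track losses, and region-of-interest definitions.

```python
"""Generic sketch of the eye-movement measures named above, computed from a
sequence of fixations. A regression is counted whenever a fixation lands on an
earlier word than the previous fixation. Fixation data are invented.
"""

# Each fixation: (word_index_in_text, duration_ms)
fixations = [(0, 210), (1, 190), (2, 240), (1, 180),   # regression back to word 1
             (3, 260), (4, 200), (2, 170),             # regression back to word 2
             (5, 230)]

durations = [d for _, d in fixations]
n_fixations = len(fixations)
mean_duration = sum(durations) / n_fixations
n_regressions = sum(1 for prev, cur in zip(fixations, fixations[1:])
                    if cur[0] < prev[0])

print(f"{n_fixations} fixations, mean duration {mean_duration:.0f} ms, "
      f"{n_regressions} regressions")
```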
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
Setting Limits on Supersymmetry Using Simplified Models
Authors: Christian Gütschow, Zachary Marshall.
Institutions: University College London, CERN, Lawrence Berkeley National Laboratories.
Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical interpretations. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be recast in this manner into almost any theoretical framework, including nonsupersymmetric theories with supersymmetry-like signatures.
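The recasting arithmetic is compact: given a signal region's 95% CL upper limit on signal events together with the integrated luminosity and the acceptance times efficiency for a model point, the excluded cross section follows by division, and the point is excluded if its predicted cross section is larger. All numbers in the sketch below are invented; real recasting also has to combine multiple signal regions and propagate uncertainties, which is not shown.

```python
"""Sketch of the recasting arithmetic described above: given an experiment's
95% CL upper limit on signal events in a signal region (S95), the integrated
luminosity L, and the acceptance x efficiency (A*eps) of that region for a
given model point, the excluded cross section is sigma_UL = S95 / (L * A * eps).
A model point is excluded if its predicted cross section exceeds sigma_UL.
All numbers below are invented.
"""

def cross_section_upper_limit(s95_events, lumi_fb, acceptance, efficiency):
    """95% CL upper limit on the cross section, in fb."""
    return s95_events / (lumi_fb * acceptance * efficiency)

# Hypothetical signal region and model point
s95 = 12.0              # upper limit on signal events at 95% CL
lumi = 20.3             # integrated luminosity, fb^-1
acc, eff = 0.18, 0.72   # acceptance and efficiency from the published tables
sigma_theory_fb = 5.4   # predicted cross section of the model point, fb

sigma_ul = cross_section_upper_limit(s95, lumi, acc, eff)
excluded = sigma_theory_fb > sigma_ul
print(f"sigma_UL = {sigma_ul:.2f} fb -> model point {'excluded' if excluded else 'allowed'}")
```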
Physics, Issue 81, high energy physics, particle physics, Supersymmetry, LHC, ATLAS, CMS, New Physics Limits, Simplified Models
Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study
Authors: Johannes Felix Buyel, Rainer Fischer.
Institutions: RWTH Aachen University, Fraunhofer Gesellschaft.
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
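As a minimal illustration of the DoE idea, the sketch below builds a two-level full factorial for three hypothetical factors and estimates main effects and two-way interactions by least squares on a simulated response. The authors' software-guided, stepwise-augmented designs are more sophisticated; this only shows the underlying model structure, and all factor names and responses are invented.

```python
"""Minimal DoE sketch (not the authors' software-guided design): build a
two-level full factorial for three hypothetical factors and estimate main
effects and two-way interactions by least squares. Responses are simulated.
"""
import itertools
import numpy as np

# Coded factor levels (-1/+1) for three hypothetical factors, e.g.
# incubation temperature, plant age, and promoter variant.
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

rng = np.random.default_rng(1)
x1, x2, x3 = runs.T
# Simulated response: strong effect of factor 1, a 1x3 interaction, plus noise
y = 10 + 3.0 * x1 + 0.5 * x2 + 1.5 * x1 * x3 + rng.normal(0, 0.3, len(runs))

# Model matrix: intercept, main effects, two-way interactions
X = np.column_stack([np.ones(len(runs)), x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
labels = ["intercept", "x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3"]
for name, c in zip(labels, coef):
    print(f"{name:>9}: {c:+.2f}")
```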
Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody
Laboratory-determined Phosphorus Flux from Lake Sediments as a Measure of Internal Phosphorus Loading
Authors: Mary E. Ogdahl, Alan D. Steinman, Maggie E. Weinert.
Institutions: Grand Valley State University.
Eutrophication is a water quality issue in lakes worldwide, and there is a critical need to identify and control nutrient sources. Internal phosphorus (P) loading from lake sediments can account for a substantial portion of the total P load in eutrophic, and some mesotrophic, lakes. Laboratory determination of P release rates from sediment cores is one approach for determining the role of internal P loading and guiding management decisions. Two principal alternatives to experimental determination of sediment P release exist for estimating internal load: in situ measurements of changes in hypolimnetic P over time and P mass balance. The experimental approach using laboratory-based sediment incubations to quantify internal P load is a direct method, making it a valuable tool for lake management and restoration. Laboratory incubations of sediment cores can help determine the relative importance of internal vs. external P loads, as well as be used to answer a variety of lake management and research questions. We illustrate the use of sediment core incubations to assess the effectiveness of an aluminum sulfate (alum) treatment for reducing sediment P release. Other research questions that can be investigated using this approach include the effects of sediment resuspension and bioturbation on P release. The approach also has limitations. Assumptions must be made with respect to: extrapolating results from sediment cores to the entire lake; deciding over what time periods to measure nutrient release; and addressing possible core tube artifacts. A comprehensive dissolved oxygen monitoring strategy to assess temporal and spatial redox status in the lake provides greater confidence in annual P loads estimated from sediment core incubations.
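The internal-load number itself comes from the rate of change of phosphorus mass in the water overlying the core, normalized to sediment area. The sketch below shows that calculation for an invented soluble reactive phosphorus time series and core geometry; it is not the authors' worksheet, and all dimensions are assumptions.

```python
"""Sketch of the internal-load calculation implied above: soluble reactive
phosphorus (SRP) release rate from a sediment core incubation, expressed per
unit sediment area per day. Core dimensions and concentrations are invented.
"""
import numpy as np

days      = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
srp_mg_L  = np.array([0.010, 0.028, 0.047, 0.061, 0.080])   # overlying water SRP

core_diameter_m = 0.07
core_area_m2 = np.pi * (core_diameter_m / 2) ** 2
water_volume_L = 1.1        # overlying water volume in the core tube

# Rate of change of SRP concentration (mg/L/day) from a linear fit,
# converted to a mass release rate per unit sediment area
slope_mg_L_day, _ = np.polyfit(days, srp_mg_L, 1)
release_rate = slope_mg_L_day * water_volume_L / core_area_m2   # mg P m^-2 day^-1
print(f"Sediment P release rate ~ {release_rate:.1f} mg P m^-2 d^-1")
```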
Environmental Sciences, Issue 85, Limnology, internal loading, eutrophication, nutrient flux, sediment coring, phosphorus, lakes
Laser-Induced Chronic Ocular Hypertension Model on SD Rats
Authors: Kin Chiu, Raymond Chang, Kwok-Fai So.
Institutions: The University of Hong Kong - HKU.
Glaucoma is one of the major causes of blindness in the world. Elevated intraocular pressure is a major risk factor. Laser photocoagulation induced ocular hypertension is one of the well established animal models. This video demonstrates how to induce ocular hypertension by Argon laser photocoagulation in rat.
Neuroscience, Issue 10, glaucoma, ocular hypertension, rat
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
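The keywords for this article mention minimum-norm estimation, whose core step is a regularized linear inverse of the lead field computed from the head model. The sketch below shows that step with a random stand-in lead field and data; the regularization heuristic is an assumption, and a real analysis would build the lead field from the individual or age-specific MRI as described above.

```python
"""Generic sketch of a regularized minimum-norm estimate (the inverse method
named in the keywords), not the specific pipeline used at the London Baby Lab:
sources s_hat = L^T (L L^T + lambda I)^-1 d for a lead field L (channels x
sources) and sensor data d. The lead field and data here are random stand-ins;
a real L comes from the head model built from structural MRI.
"""
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_sources = 128, 5000              # high-density EEG, cortical source grid
L = rng.normal(size=(n_channels, n_sources))   # stand-in lead field
d = rng.normal(size=n_channels)                # stand-in sensor data at one time point

lam = 0.1 * np.trace(L @ L.T) / n_channels     # simple regularization heuristic (assumed)
inverse_operator = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_channels))
s_hat = inverse_operator @ d                   # estimated source amplitudes
print(s_hat.shape, float(np.abs(s_hat).max()))
```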
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Retrograde Labeling of Retinal Ganglion Cells by Application of Fluoro-Gold on the Surface of Superior Colliculus
Authors: Kin Chiu, Wui-Man Lau, Sze-chun Yeung, Raymond Chuen-Chung Chang, Kwok-Fai So.
Institutions: The University of Hong Kong - HKU.
Retinal ganglion cell (RGC) counting is essential for evaluating retinal degeneration, especially in glaucoma. Reliable RGC labeling is fundamental for evaluating the effects of any treatment. In the rat, about 98% of RGCs are known to project to the contralateral superior colliculus (SC) (Forrester and Peters, 1967). Applying fluoro-gold (FG) to the surface of the SC can therefore label almost all of the RGCs, so that we can focus on this most vulnerable retinal neuron in glaucoma. FG is taken up by the axon terminals of retinal ganglion cells and bilaterally transported retrogradely to their somas in the retina. Compared with retrograde labeling of RGCs by placing FG at the stump of a transected optic nerve for 2 days, interference with RGC survival is minimized. Compared with cresyl violet staining, which stains RGCs, amacrine cells and the blood vessel endothelium in the retinal ganglion cell layer, this labeling method is more specific to RGCs. This video describes the method of retrograde labeling of RGCs by applying FG to the surface of the SC. The surgical procedures shown include drilling the skull, aspirating the cortex to expose the SC, and applying gelatin sponge over the entire dorsal surface of the SC. Useful tips are given for avoiding massive intracranial bleeding and aspiration of the SC.
Neuroscience, Issue 16, Retrograde labeling, retinal ganglion cells, ophthalmology research, superior colliculus, experimental glaucoma
Methods for ECG Evaluation of Indicators of Cardiac Risk, and Susceptibility to Aconitine-induced Arrhythmias in Rats Following Status Epilepticus
Authors: Steven L. Bealer, Cameron S. Metcalf, Jason G. Little.
Institutions: University of Utah.
Lethal cardiac arrhythmias contribute to mortality in a number of pathological conditions. Several parameters derived from a non-invasive, easily obtained electrocardiogram (ECG) are established, well-validated prognostic indicators of cardiac risk in patients suffering from a number of cardiomyopathies. Increased heart rate, decreased heart rate variability (HRV), and increased duration and variability of cardiac ventricular electrical activity (QT interval) are all indicative of enhanced cardiac risk 1-4. In animal models, it is valuable to compare these ECG-derived variables and susceptibility to experimentally induced arrhythmias. Intravenous infusion of the arrhythmogenic agent aconitine has been widely used to evaluate susceptibility to arrhythmias in a range of experimental conditions, including animal models of depression 5 and hypertension 6, following exercise 7 and exposure to air pollutants 8, as well as determination of the antiarrhythmic efficacy of pharmacological agents 9,10. It should be noted that QT dispersion in humans is a measure of QT interval variation across the full set of leads from a standard 12-lead ECG. Consequently, the measure of QT dispersion from the 2-lead ECG in the rat described in this protocol is different from that calculated from human ECG records. This represents a limitation in the translation of data obtained from rodents to human clinical medicine. Status epilepticus (SE) is a single seizure or series of continuously recurring seizures lasting more than 30 min 11,12, and results in mortality in 20% of cases 13. Many individuals survive the SE, but die within 30 days 14,15. The mechanism(s) of this delayed mortality is not fully understood. It has been suggested that lethal ventricular arrhythmias contribute to many of these deaths 14-17. In addition to SE, patients experiencing spontaneously recurring seizures, i.e. epilepsy, are at risk of premature sudden and unexpected death associated with epilepsy (SUDEP) 18. As with SE, the precise mechanisms mediating SUDEP are not known. It has been proposed that ventricular abnormalities and the resulting arrhythmias make a significant contribution 18-22. To investigate the mechanisms of seizure-related cardiac death, and the efficacy of cardioprotective therapies, it is necessary to obtain both ECG-derived indicators of risk and evaluate susceptibility to cardiac arrhythmias in animal models of seizure disorders 23-25. Here we describe methods for implanting ECG electrodes in the Sprague-Dawley laboratory rat (Rattus norvegicus) following SE, collection and analysis of ECG recordings, and induction of arrhythmias during iv infusion of aconitine. These procedures can be used to directly determine the relationships between ECG-derived measures of cardiac electrical activity and susceptibility to ventricular arrhythmias in rat models of seizure disorders, or any pathology associated with increased risk of sudden cardiac death.
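The risk indicators named above can be computed from lists of RR and QT intervals once the ECG has been segmented. The sketch below shows heart rate, two standard HRV statistics (SDNN and RMSSD), and QT mean and variability for invented rat intervals; artifact rejection and species-appropriate QT rate correction are deliberately omitted, and the values are not from any real recording.

```python
"""Sketch of the ECG-derived risk indicators named above, computed from lists
of RR and QT intervals (values invented). Real analyses segment the recording,
reject artifacts, and apply a species-appropriate QT rate correction; none of
that is shown here.
"""
import statistics

rr_ms = [168, 172, 165, 170, 175, 169, 171, 166, 173, 170]   # rat RR intervals, ms
qt_ms = [62, 64, 60, 63, 66, 61, 65, 62, 64, 63]             # QT intervals, ms

heart_rate_bpm = 60000.0 / statistics.mean(rr_ms)
sdnn = statistics.stdev(rr_ms)                                 # overall HRV
rmssd = statistics.mean([(b - a) ** 2 for a, b in zip(rr_ms, rr_ms[1:])]) ** 0.5
qt_mean = statistics.mean(qt_ms)
qt_var = statistics.stdev(qt_ms)                               # QT variability

print(f"HR {heart_rate_bpm:.0f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms, "
      f"QT {qt_mean:.1f} +/- {qt_var:.1f} ms")
```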
Medicine, Issue 50, cardiac, seizure disorders, QTc, QTd, cardiac arrhythmias, rat
Multifocal Electroretinograms
Authors: Donnell J. Creel.
Institutions: University of Utah.
A limitation of traditional full-field electroretinograms (ERG) for the diagnosis of retinopathy is a lack of sensitivity. Generally, ERG results are normal unless more than approximately 20% of the retina is affected. In practical terms, a patient might be legally blind as a result of macular degeneration or other scotomas and still appear normal according to traditional full-field ERG. An important development in ERGs is the multifocal ERG (mfERG). Erich Sutter adapted the mathematical sequences called binary m-sequences, enabling the isolation, from a single electrical signal, of an electroretinogram representing less than each square millimeter of retina in response to a visual stimulus1. Results generated by mfERG appear similar to those generated by flash ERG, which, in contrast, best generates data appropriate for whole-eye disorders. The basic mfERG result is based on the calculated mathematical average of an approximation of the positive deflection component of the traditional ERG response, known as the b-wave1. Multifocal ERG programs measure electrical activity from more than a hundred retinal areas per eye in a few minutes. The enhanced spatial resolution enables scotomas and retinal dysfunction to be mapped and quantified. In the protocol below, we describe the recording of mfERGs using a bipolar speculum contact lens. Components of mfERG systems vary between manufacturers. For the presentation of the visible stimulus, some suitable CRT monitors are available, but most systems have adopted the use of flat-panel liquid crystal displays (LCD). The visual stimuli depicted here were produced by an LCD microdisplay subtending 35 - 40 degrees horizontally and 30 - 35 degrees vertically of visual field, and calibrated to produce multifocal flash intensities of 2.7 cd s m-2. Amplification was 50K. Lower and upper bandpass limits were 10 and 300 Hz. The software packages used were VERIS versions 5 and 6.
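The key idea behind the mfERG is that each retinal area is stimulated with its own binary m-sequence, so its local response can be pulled out of the single recorded signal by cross-correlation. The sketch below generates a short maximal-length sequence with a linear feedback shift register and recovers a toy local response; it illustrates the principle only and is not the VERIS implementation. The register taps and signal values are assumptions for illustration.

```python
"""Sketch of the binary m-sequence idea underlying mfERG stimulation (not the
VERIS implementation): a maximal-length sequence generated by a linear feedback
shift register, and the first-order response of one stimulus area recovered by
cross-correlating the recorded signal with that area's sequence. Signals are toy.
"""
import numpy as np

def m_sequence(order=4, taps=(4, 3)):
    """Maximal-length +/-1 sequence of length 2**order - 1.
    The taps correspond to the polynomial x^4 + x^3 + 1 (one valid choice)."""
    state = [1] * order
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(1 if state[-1] else -1)     # output the oldest bit
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]        # shift, newest bit enters on the left
    return np.array(seq)

seq = m_sequence()
# Toy "recording": this area contributes +0.8 per flash, plus noise from elsewhere
rng = np.random.default_rng(0)
recording = 0.8 * seq + rng.normal(0, 0.5, seq.size)
first_order_response = np.dot(recording, seq) / seq.size   # cross-correlation at lag 0
print(f"m-sequence length {seq.size}, recovered local response {first_order_response:.2f}")
```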
Medicine, Issue 58, Multifocal electroretinogram, mfERG, electroretinogram, ERG
What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there simply is no content in our video library that is relevant to the topic of a given abstract. In those cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matched videos that are only loosely related.
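JoVE does not describe its matching algorithm in detail here. As a purely generic illustration of the kind of approach involved, the sketch below relates an abstract to a small set of video descriptions by comparing TF-IDF text vectors with cosine similarity; the texts are toy strings, and nothing in this sketch reflects JoVE's actual system.

```python
"""Generic illustration (not JoVE's system): relating an abstract to video
descriptions by TF-IDF cosine similarity. All texts below are toy strings.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

video_descriptions = [
    "Laser-induced ocular hypertension model to study glaucoma in mice",
    "Static chamber measurement of greenhouse gas flux from soils",
    "Retrograde labeling of retinal ganglion cells with fluoro-gold",
]
abstract = ("Modeling progression of visual field loss and intraocular "
            "pressure in glaucoma patients")

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(video_descriptions + [abstract])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank the video descriptions by similarity to the abstract
for description, score in sorted(zip(video_descriptions, scores),
                                 key=lambda x: -x[1]):
    print(f"{score:.2f}  {description}")
```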