JoVE Visualize
PubMed Article
The Sound of Voice: Voice-Based Categorization of Speakers' Sexual Orientation within and across Languages.
PLoS ONE
PUBLISHED: 07-03-2015
Empirical research initially showed that English listeners are able to identify speakers' sexual orientation on the basis of voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language dependency) and to non-native speakers (language specificity), has recently been questioned. Consequently, we address these open issues in five experiments: first, we tested whether Italian and German listeners are able to correctly identify the sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. We found that listeners were unable to identify the speakers' sexual orientation correctly. However, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged the sexual orientation of speakers of their own language and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects listeners' expectations of how gay voices sound rather than accurately detecting speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency, and language specificity.
Authors: Manon Robillard, Chantal Mayer-Crittenden, Annie Roy-Charland, Michèle Minor-Corriveau, Roxanne Bélanger.
Published: 06-01-2015
ABSTRACT
This paper describes an approach for measuring navigation accuracy relative to cognitive skills. The methodology behind the assessment is outlined step by step. Navigational skills are important when trying to find symbols within a speech-generating device (SGD) that has a dynamic screen and taxonomical organization. The following skills have been found to impact children’s ability to find symbols when navigating within the levels of an SGD: sustained attention, categorization, cognitive flexibility, and fluid reasoning1,2. According to past studies, working memory was not correlated with navigation1,2. The materials needed for this method include a computerized tablet, an augmentative and alternative communication application, a booklet of symbols, and the Leiter International Performance Scale-Revised (Leiter-R)3. This method has been used in two previous studies. Robillard, Mayer-Crittenden, Roy-Charland, Minor-Corriveau and Bélanger1 assessed typically developing children, while Rondeau, Robillard and Roy-Charland2 assessed children and adolescents with a diagnosis of Autism Spectrum Disorder. Direct observation of this method will help researchers replicate the study. It will also help clinicians who work with children who have complex communication needs to determine the children’s ability to navigate an SGD with taxonomical categorization.
A Lightweight, Headphones-based System for Manipulating Auditory Feedback in Songbirds
Authors: Lukas A. Hoffmann, Conor W. Kelly, David A. Nicholson, Samuel J. Sober.
Institutions: Emory University.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity1. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements2,3 and auditory feedback is used to modify speech production4-7. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output. Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity8,9. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior9-12. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors13. The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. 
Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction14. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
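To make the feedback manipulation concrete, the following sketch (illustrative stdlib Python only; the actual apparatus performs the shift in dedicated real-time hardware) converts a pitch shift expressed in cents into a frequency ratio and applies it to a test tone by naive resampling:

```python
import math

def semitone_ratio(cents):
    # Convert a pitch shift in cents into a frequency scaling factor.
    return 2.0 ** (cents / 1200.0)

def pitch_shift_resample(samples, cents):
    """Crude offline pitch shift by linear-interpolation resampling.
    Raising pitch this way also shortens the signal; real-time rigs use
    overlap-add or phase-vocoder methods to preserve duration."""
    ratio = semitone_ratio(cents)
    out = []
    for i in range(int(len(samples) / ratio)):
        pos = i * ratio
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append((1 - frac) * samples[lo] + frac * samples[hi])
    return out

sr = 8000  # Hz, hypothetical sampling rate for the demo
tone = [math.sin(2 * math.pi * 100 * t / sr) for t in range(sr)]  # 1 s, 100 Hz
shifted = pitch_shift_resample(tone, 100)  # +100 cents, i.e. one semitone up
print(len(tone), len(shifted))  # the shifted copy is shorter by the pitch ratio
```

Because simple resampling changes duration as well as pitch, this sketch only illustrates the cents-to-ratio arithmetic and the direction of the shift, not the duration-preserving online harmonization used in the protocol.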
Neuroscience, Issue 69, Anatomy, Physiology, Zoology, Behavior, Songbird, psychophysics, auditory feedback, biology, sensorimotor learning
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
Flying Insect Detection and Classification with Inexpensive Sensors
Authors: Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn Keogh.
Institutions: University of California, Riverside, University of São Paulo - USP, ISCA Technologies.
An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect’s flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach makes it possible to efficiently learn classification models that are very robust to over-fitting; and that a general classification framework makes it easy to incorporate an arbitrary number of features. We demonstrate these findings with large-scale experiments that dwarf all previous work combined, as measured by the number of insects and the number of species considered.
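The Bayesian classification idea can be sketched with a one-feature Gaussian naive Bayes model in stdlib Python. The species labels and wingbeat-frequency values below are invented for illustration and are not the study's data; the published framework uses richer intrinsic and extrinsic features (for example, time of day folded into the prior).

```python
import math
from statistics import mean, pstdev

# Toy training data: wingbeat frequencies in Hz (illustrative values only).
train = {
    "Aedes aegypti (F)": [465, 480, 455, 470, 490],
    "Culex quinquefasciatus (F)": [380, 395, 370, 405, 390],
    "Musca domestica": [190, 210, 200, 220, 205],
}

# Fit one Gaussian per class, plus a class prior from training counts.
params = {sp: (mean(v), pstdev(v)) for sp, v in train.items()}
n_total = sum(len(v) for v in train.values())
priors = {sp: len(v) / n_total for sp, v in train.items()}

def log_gauss(x, mu, sigma):
    # Log of the Gaussian density; log space avoids numerical underflow.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(freq):
    # Posterior is proportional to prior times likelihood.
    scores = {sp: math.log(priors[sp]) + log_gauss(freq, *params[sp])
              for sp in train}
    return max(scores, key=scores.get)

print(classify(475))  # falls in the Aedes cluster
print(classify(200))  # falls in the Musca cluster
```

Extending the prior with extrinsic information, such as the circadian activity pattern of each species, is what lets additional features be folded in without changing the classifier's structure.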
Bioengineering, Issue 92, flying insect detection, automatic insect classification, pseudo-acoustic optical sensors, Bayesian classification framework, flight sound, circadian rhythm
A Tactile Automated Passive-Finger Stimulator (TAPS)
Authors: Daniel Goldreich, Michael Wong, Ryan M. Peters, Ingrid M. Kanics.
Institutions: Duquesne University, McMaster University.
Although tactile spatial acuity tests are used in both neuroscience research and clinical assessment, few automated devices exist for delivering controlled spatially structured stimuli to the skin. Consequently, investigators often apply tactile stimuli manually. Manual stimulus application is time consuming, requires great care and concentration on the part of the investigator, and leaves many stimulus parameters uncontrolled. We describe here a computer-controlled tactile stimulus system, the Tactile Automated Passive-finger Stimulator (TAPS), that applies spatially structured stimuli to the skin, controlling for onset velocity, contact force, and contact duration. TAPS is a versatile, programmable system, capable of efficiently conducting a variety of psychophysical procedures. We describe the components of TAPS, and show how TAPS is used to administer a two-interval forced-choice tactile grating orientation test.
Corresponding Author: Daniel Goldreich
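The logic of a two-interval forced-choice trial can be illustrated with a standard equal-variance signal-detection simulation. This is a generic psychophysics sketch, not TAPS control code; the d' values and the Gaussian observer model are textbook assumptions.

```python
import random

random.seed(1)

def two_ifc_trial(dprime):
    """One two-interval forced-choice trial: the stimulus appears in a
    random interval; the observer picks the interval with the larger
    noisy internal response (equal-variance Gaussian model)."""
    target = random.choice([1, 2])
    resp1 = random.gauss(dprime if target == 1 else 0.0, 1.0)
    resp2 = random.gauss(dprime if target == 2 else 0.0, 1.0)
    choice = 1 if resp1 > resp2 else 2
    return choice == target

def percent_correct(dprime, n=2000):
    return sum(two_ifc_trial(dprime) for _ in range(n)) / n

print(percent_correct(0.0))  # no sensitivity: near chance, ~0.5
print(percent_correct(2.0))  # high sensitivity: well above chance
```

For 2-IFC, an ideal observer's proportion correct is Phi(d'/sqrt(2)), so d' = 2 predicts roughly 92% correct, which the simulation approximates.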
Medicine, Neuroscience, Issue 28, tactile, somatosensory, touch, cutaneous, acuity, psychophysics, Bayesian, grating orientation, sensory neuroscience, spatial discrimination
Modulating Cognition Using Transcranial Direct Current Stimulation of the Cerebellum
Authors: Paul A. Pope.
Institutions: University of Birmingham.
Numerous studies have emerged recently that demonstrate the possibility of modulating, and in some cases enhancing, cognitive processes by exciting brain regions involved in working memory and attention using transcranial electrical brain stimulation. Some researchers now believe the cerebellum supports cognition, possibly via a remote neuromodulatory effect on the prefrontal cortex. This paper describes a procedure for investigating a role for the cerebellum in cognition using transcranial direct current stimulation (tDCS), and a selection of information-processing tasks of varying task difficulty, which have previously been shown to involve working memory, attention and cerebellar functioning. One task is called the Paced Auditory Serial Addition Task (PASAT) and the other a novel variant of this task called the Paced Auditory Serial Subtraction Task (PASST). A verb generation task and its two controls (noun and verb reading) were also investigated. All five tasks were performed by three separate groups of participants, before and after the modulation of cortico-cerebellar connectivity using anodal, cathodal or sham tDCS over the right cerebellar cortex. The procedure demonstrates how performance (accuracy, verbal response latency and variability) could be selectively improved after cathodal stimulation, but only during tasks that the participants rated as difficult, and not easy. Performance was unchanged by anodal or sham stimulation. These findings demonstrate a role for the cerebellum in cognition, whereby activity in the left prefrontal cortex is likely dis-inhibited by cathodal tDCS over the right cerebellar cortex. Transcranial brain stimulation is growing in popularity in various labs and clinics. However, the after-effects of tDCS are inconsistent between individuals and not always polarity-specific, and may even be task- or load-specific, all of which requires further study. 
Future efforts might also be guided towards neuro-enhancement in cerebellar patients presenting with cognitive impairment once a better understanding of brain stimulation mechanisms has emerged.
Behavior, Issue 96, Cognition, working memory, tDCS, cerebellum, brain stimulation, neuro-modulation, neuro-enhancement
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence, otherwise participants would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine if participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns in the sequence that could be anywhere from 3 items before the target noun position to 3 items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant who replicated the previous finding.
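The trial structure above can be sketched in a few lines of Python. The word list, SOA value, and function names are hypothetical; they mirror the design rather than reproduce the experiment code.

```python
import random

random.seed(7)

SOA_MS = 120  # stimulus onset asynchrony between successive words (one of the two conditions)

def build_rsvp_trial(distractors, target, position):
    """Place the target noun at `position` (0-indexed) in an 8-item
    stream and schedule word onsets at a fixed SOA."""
    stream = random.sample(distractors, 7)
    stream.insert(position, target)
    onsets = [i * SOA_MS for i in range(len(stream))]
    return stream, onsets

def memory_probe_positions(target_pos, n_items=8):
    # Probe the items immediately before and after the target, plus one
    # random nontarget item from elsewhere in the stream.
    near = [p for p in (target_pos - 1, target_pos + 1) if 0 <= p < n_items]
    far_choices = [p for p in range(n_items) if p not in near + [target_pos]]
    return near + [random.choice(far_choices)]

nouns = ["table", "river", "candle", "hammer", "violin",
         "garden", "pillow", "ladder", "mirror", "basket"]
stream, onsets = build_rsvp_trial(nouns[1:], nouns[0], position=3)
print(stream, onsets[:4])
print(memory_probe_positions(3))
```

Varying SOA_MS between 120 and 240 msec, and letting the probed position range over the whole window around the target, is what allows suppression of nontargets near the target to be measured.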
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Infant Auditory Processing and Event-related Brain Oscillations
Authors: Gabriella Musacchia, Silvia Ortiz-Mantilla, Teresa Realpe-Bonilla, Cynthia P. Roesler, April A. Benasich.
Institutions: Rutgers, The State University of New Jersey, Newark; University of the Pacific; Stanford University.
Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to home in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillation analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected infant EEG and ERPs.
In this article, we describe our method for infant EEG net application, recording, and dynamic brain response analysis, and we present representative results.
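The band-limited power measure at the core of the analysis can be sketched in plain Python. This toy example applies a naive DFT to a synthetic two-component signal; the sampling rate is hypothetical, and the actual pipeline operates on source-localized single-trial EEG with proper spectral estimation.

```python
import math

def band_power(signal, sr, f_lo, f_hi):
    """Summed spectral power in [f_lo, f_hi] Hz via a naive DFT
    (O(n^2); fine for a short demo, use an FFT in practice)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * sr / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re ** 2 + im ** 2) / n ** 2
    return total

sr = 250  # Hz, hypothetical sampling rate chosen so bins fall on whole Hz
# Synthetic one-second "trial": a 6 Hz theta burst plus a weaker 40 Hz component
x = [math.sin(2 * math.pi * 6 * t / sr) + 0.3 * math.sin(2 * math.pi * 40 * t / sr)
     for t in range(sr)]

theta = band_power(x, sr, 3, 8)    # Theta band, as in the protocol
gamma = band_power(x, sr, 35, 45)  # a control band away from the burst
print(theta > gamma)  # the 6 Hz burst dominates: True
```

Computing such band power trial by trial, rather than on the evoked average, is what makes non-phase-locked oscillatory bursts visible.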
Behavior, Issue 101, Infant, Infant Brain, Human Development, Auditory Development, Oscillations, Brain Oscillations, Theta, Electroencephalogram, Child Development, Event-related Potentials, Source Localization, Auditory Cortex
Dyeing Insects for Behavioral Assays: the Mating Behavior of Anesthetized Drosophila
Authors: Rudi L. Verspoor, Chloe Heys, Thomas A. R. Price.
Institutions: University of Liverpool.
Mating experiments using Drosophila have contributed greatly to the understanding of sexual selection and behavior. Experiments often require simple, easy and cheap methods to distinguish between individuals in a trial. A standard technique for this is CO2 anaesthesia and then labelling or wing clipping each fly. However, this is invasive and has been shown to affect behavior. Other techniques have used coloration to identify flies. This article presents a simple and non-invasive method for labelling Drosophila that allows them to be individually identified within experiments, using food coloring. This method is used in trials where two males compete to mate with a female. Dyeing allowed quick and easy identification. There was, however, some difference in the strength of the coloration across the three species tested. Data is presented showing the dye has a lower impact on mating behavior than CO2 in Drosophila melanogaster. The impact of CO2 anaesthesia is shown to depend on the species of Drosophila, with D. pseudoobscura and D. subobscura showing no impact, whereas D. melanogaster males had reduced mating success. The dye method presented is applicable to a wide range of experimental designs.
Neuroscience, Issue 98, Anesthesia, courtship, fruit fly, individual marking, individual tagging, male-male competition, mate choice, mate competition, mating latency, wing clipping
Construction and Characterization of a Novel Vocal Fold Bioreactor
Authors: Aidan B. Zerdoum, Zhixiang Tong, Brendan Bachman, Xinqiao Jia.
Institutions: University of Delaware.
In vitro engineering of mechanically active tissues requires the presentation of physiologically relevant mechanical conditions to cultured cells. To emulate the dynamic environment of vocal folds, a novel vocal fold bioreactor capable of producing vibratory stimulations at fundamental phonation frequencies is constructed and characterized. The device is composed of a function generator, a power amplifier, a speaker selector and parallel vibration chambers. Individual vibration chambers are created by sandwiching a custom-made silicone membrane between a pair of acrylic blocks. The silicone membrane not only serves as the bottom of the chamber but also provides a mechanism for securing the cell-laden scaffold. Vibration signals, generated by a speaker mounted underneath the bottom acrylic block, are transmitted to the membrane aerodynamically by the oscillating air. Eight identical vibration modules, fixed on two stationary metal bars, are housed in an anti-humidity chamber for long-term operation in a cell culture incubator. The vibration characteristics of the vocal fold bioreactor are analyzed non-destructively using a Laser Doppler Vibrometer (LDV). The utility of the dynamic culture device is demonstrated by culturing cellular constructs in the presence of 200-Hz sinusoidal vibrations with a mid-membrane displacement of 40 µm. Mesenchymal stem cells cultured in the bioreactor respond to the vibratory signals by altering the synthesis and degradation of vocal fold-relevant, extracellular matrix components. The novel bioreactor system presented herein offers an excellent in vitro platform for studying vibration-induced mechanotransduction and for the engineering of functional vocal fold tissues.
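The drive parameters reported above (200 Hz sinusoidal vibration, 40 µm mid-membrane displacement) determine the peak kinematics of the membrane directly: for x(t) = A sin(2πft), peak velocity is 2πfA and peak acceleration is (2πf)²A. A short worked example in Python:

```python
import math

f = 200.0    # Hz, phonation-range drive frequency from the protocol
amp = 40e-6  # m, mid-membrane displacement amplitude from the protocol

# Peak velocity and acceleration of sinusoidal motion x(t) = A*sin(2*pi*f*t).
peak_velocity = 2 * math.pi * f * amp         # m/s
peak_accel = (2 * math.pi * f) ** 2 * amp     # m/s^2

print(round(peak_velocity * 1000, 2), "mm/s")   # about 50 mm/s
print(round(peak_accel, 1), "m/s^2")            # about 63 m/s^2
```

These derived magnitudes are the mechanical dose the cell-laden scaffolds experience, and they scale linearly with displacement and quadratically with frequency, which is why LDV verification of the actual membrane displacement matters.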
Bioengineering, Issue 90, vocal fold; bioreactor; speaker; silicone membrane; fibrous scaffold; mesenchymal stem cells; vibration; extracellular matrix
A Novel Stretching Platform for Applications in Cell and Tissue Mechanobiology
Authors: Dominique Tremblay, Charles M. Cuerrier, Lukasz Andrzejewski, Edward R. O'Brien, Andrew E. Pelling.
Institutions: University of Ottawa, University of Calgary.
Tools that allow the application of mechanical forces to cells and tissues or that can quantify the mechanical properties of biological tissues have contributed dramatically to the understanding of basic mechanobiology. These techniques have been extensively used to demonstrate how the onset and progression of various diseases are heavily influenced by mechanical cues. This article presents a multi-functional biaxial stretching (BAXS) platform that can either mechanically stimulate single cells or quantify the mechanical stiffness of tissues. The BAXS platform consists of four voice coil motors that can be controlled independently. Single cells can be cultured on a flexible substrate that can be attached to the motors allowing one to expose the cells to complex, dynamic, and spatially varying strain fields. Conversely, by incorporating a force load cell, one can also quantify the mechanical properties of primary tissues as they are exposed to deformation cycles. In both cases, a proper set of clamps must be designed and mounted to the BAXS platform motors in order to firmly hold the flexible substrate or the tissue of interest. The BAXS platform can be mounted on an inverted microscope to perform simultaneous transmitted light and/or fluorescence imaging to examine the structural or biochemical response of the sample during stretching experiments. This article provides experimental details of the design and usage of the BAXS platform and presents results for single cell and whole tissue studies. The BAXS platform was used to measure the deformation of nuclei in single mouse myoblast cells in response to substrate strain and to measure the stiffness of isolated mouse aortas. The BAXS platform is a versatile tool that can be combined with various optical microscopies in order to provide novel mechanobiological insights at the sub-cellular, cellular and whole tissue levels.
Bioengineering, Issue 88, cell stretching, tissue mechanics, nuclear mechanics, uniaxial, biaxial, anisotropic, mechanobiology
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
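The kind of derived measure harvested from the time-stamped event trail can be sketched in a few lines of Python. The event labels here are hypothetical, not the toolchain's actual codes, and the published system does this analysis in its MATLAB-based language.

```python
# Each record is (time_in_seconds, event_label), as logged continuously
# while the mouse lives in the test environment.
events = [
    (0.0,  "trial_start"),
    (1.2,  "light_on_hopper1"),
    (2.9,  "head_entry_hopper1"),
    (3.0,  "pellet_delivered"),
    (60.0, "trial_start"),
    (61.0, "light_on_hopper1"),
    (61.4, "head_entry_hopper1"),
    (61.5, "pellet_delivered"),
]

def latencies(records, cue, response):
    """Latency from each cue to the next matching response: the kind of
    per-subject statistic computed automatically several times a day."""
    out, cue_time = [], None
    for t, label in records:
        if label == cue:
            cue_time = t
        elif label == response and cue_time is not None:
            out.append(t - cue_time)
            cue_time = None
    return out

print(latencies(events, "light_on_hopper1", "head_entry_hopper1"))
```

Keeping the raw event trail and recomputing every intermediate statistic from it, rather than storing summaries, is what preserves the full data trail from raw records to published graphs.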
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are also known to be challenging for children with ASD.7,12,13,16 We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
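Scoring of the filmed eye movements can be sketched as follows. The frame codes, frame duration, and function names are hypothetical illustrations of the looking-time measures described above, not the lab's coding software.

```python
# Coded gaze for one test trial, one code per video frame:
# "L" = left screen, "R" = right screen, "A" = away.
frames = list("AA" + "L" * 5 + "RR" + "L" * 6 + "A" + "RR" + "L" * 10 + "AA")
match_side = "L"  # side showing the video that matches the linguistic audio

def proportion_to_match(coded, side):
    # Proportion of on-screen looking directed at the matching video.
    on_screen = [f for f in coded if f in "LR"]
    return sum(1 for f in on_screen if f == side) / len(on_screen)

def latency_to_match(coded, side, frame_ms=33):
    # Time until the first look at the matching screen (None if never).
    for i, f in enumerate(coded):
        if f == side:
            return i * frame_ms
    return None

print(round(proportion_to_match(frames, match_side), 2))
print(latency_to_match(frames, match_side), "ms")
```

Longer looking at, and faster orienting to, the matching video relative to the baseline trial is the comprehension signature, which is why the measure needs no pointing, acting out, or deliberate choice from the child.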
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
Synthetic, Multi-Layer, Self-Oscillating Vocal Fold Model Fabrication
Authors: Preston R. Murray, Scott L. Thomson.
Institutions: Brigham Young University.
Sound for the human voice is produced via flow-induced vocal fold vibration. The vocal folds consist of several layers of tissue, each with differing material properties 1. Normal voice production relies on healthy tissue and vocal folds, and occurs as a result of complex coupling between aerodynamic, structural dynamic, and acoustic physical phenomena. Voice disorders affect up to 7.5 million annually in the United States alone 2 and often result in significant financial, social, and other quality-of-life difficulties. Understanding the physics of voice production has the potential to significantly benefit voice care, including clinical prevention, diagnosis, and treatment of voice disorders. Existing methods for studying voice production include in vivo experimentation using human and animal subjects, in vitro experimentation using excised larynges and synthetic models, and computational modeling. Owing to hazardous and difficult instrument access, in vivo experiments are severely limited in scope. Excised larynx experiments have the benefit of anatomical and some physiological realism, but parametric studies involving geometric and material property variables are limited. Further, they are typically only able to be vibrated for relatively short periods of time (typically on the order of minutes). Overcoming some of the limitations of excised larynx experiments, synthetic vocal fold models are emerging as a complementary tool for studying voice production. Synthetic models can be fabricated with systematic changes to geometry and material properties, allowing for the study of healthy and unhealthy human phonatory aerodynamics, structural dynamics, and acoustics. For example, they have been used to study left-right vocal fold asymmetry 3,4, clinical instrument development 5, laryngeal aerodynamics 6-9, vocal fold contact pressure 10, and subglottal acoustics 11 (a more comprehensive list can be found in Kniesburges et al. 
12). Existing synthetic vocal fold models, however, have either been homogeneous (one-layer models) or have been fabricated using two materials of differing stiffness (two-layer models). This approach does not allow for representation of the actual multi-layer structure of the human vocal folds1, which plays a central role in governing the vocal folds' flow-induced vibratory response. Consequently, one- and two-layer synthetic vocal fold models have exhibited disadvantages3,6,8 such as higher onset pressures than are typical for human phonation (onset pressure is the minimum lung pressure required to initiate vibration), unnaturally large inferior-superior motion, and lack of a "mucosal wave" (a vertically-traveling wave that is characteristic of healthy human vocal fold vibration). In this paper, fabrication of a model with multiple layers of differing material properties is described. The model layers simulate the multi-layer structure of the human vocal folds, including the epithelium, superficial lamina propria (SLP), intermediate and deep lamina propria (i.e., ligament; a fiber is included for anterior-posterior stiffness), and muscle (i.e., body) layers1. Results are included showing that the model exhibits improved vibratory characteristics over prior one- and two-layer synthetic models, including onset pressure closer to human onset pressure, reduced inferior-superior motion, and evidence of a mucosal wave.
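The onset-pressure concept described above can be illustrated with a minimal sketch: pressure is ramped upward and the onset pressure is read off as the lowest pressure at which sustained vibration appears. The function name, threshold, and all numbers below are illustrative assumptions, not values from the protocol.

```python
# Hypothetical sketch: estimating phonation onset pressure from a ramped
# pressure sweep. Pressure is increased stepwise while vibration
# amplitude is monitored; onset is taken as the lowest pressure at which
# amplitude crosses a threshold. Threshold and data are illustrative.

def onset_pressure(pressures_kpa, amplitudes_mm, threshold_mm=0.05):
    """Return the first pressure at which vibration amplitude exceeds
    the threshold, or None if vibration never starts."""
    for p, a in zip(pressures_kpa, amplitudes_mm):
        if a >= threshold_mm:
            return p
    return None

# Example sweep: the model begins vibrating between 1.0 and 1.2 kPa.
pressures = [0.4, 0.6, 0.8, 1.0, 1.2, 1.4]
amplitudes = [0.00, 0.00, 0.01, 0.02, 0.08, 0.30]

print(onset_pressure(pressures, amplitudes))  # 1.2
```

A lower onset pressure in this kind of sweep is the sense in which the multi-layer model is "closer to human onset pressure" than the one- and two-layer models.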
Bioengineering, Issue 58, Vocal folds, larynx, voice, speech, artificial biomechanical models
Targeted Training of Ultrasonic Vocalizations in Aged and Parkinsonian Rats
Authors: Aaron M. Johnson, Emerald J. Doll, Laura M. Grant, Lauren Ringel, Jaime N. Shier, Michelle R. Ciucci.
Institutions: University of Wisconsin.
Voice deficits are a common complication of both Parkinson disease (PD) and aging; they can significantly diminish quality of life by impacting communication abilities1,2. Targeted training (speech/voice therapy) can improve specific voice deficits3,4, although the underlying mechanisms of behavioral interventions are not well understood. Systematic investigation of voice deficits and therapy should consider many factors that are difficult to control in humans, such as age, home environment, time post-onset of disease, severity of disease, and medications. The method presented here uses an animal model of vocalization that allows for systematic study of how underlying sensorimotor mechanisms change with targeted voice training. The ultrasonic recording and analysis procedures outlined in this protocol are applicable to any investigation of rodent ultrasonic vocalizations. The ultrasonic vocalizations of rodents are emerging as a valuable model for investigating the neural substrates of behavior5-8. Both rodent and human vocalizations carry semiotic value and are produced by modifying an egressive airflow with a laryngeal constriction9,10. Thus, rodent vocalizations may be a useful model for studying voice deficits in a sensorimotor context. Further, rat models allow us to study the neurobiological underpinnings of recovery from deficits with targeted training. To model PD, we use Long-Evans rats (Charles River Laboratories International, Inc.) and induce parkinsonism by a unilateral infusion of 7 μg of 6-hydroxydopamine (6-OHDA) into the medial forebrain bundle, which causes moderate to severe degeneration of presynaptic striatal neurons (for details see Ciucci, 2010)11,12. For our aging model, we use the Fischer 344/Brown Norway F1 rat (National Institute on Aging). Our primary method for eliciting vocalizations is to expose sexually experienced male rats to sexually receptive female rats.
When the male becomes interested in the female, the female is removed and the male continues to vocalize. By rewarding complex vocalizations with food or water, both the number of complex vocalizations and the rate of vocalizations can be increased (Figure 1). An ultrasonic microphone mounted above the male's home cage records the vocalizations. Recording begins after the female rat is removed to isolate the male calls. Vocalizations can be viewed in real time for training or recorded and analyzed offline. By recording and acoustically analyzing vocalizations before and after vocal training, the effects of disease and restoration of normal function with training can be assessed. This model also allows us to relate the observed behavioral (vocal) improvements to changes in the brain and neuromuscular system.
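The offline analysis step above — counting calls from a recording — can be sketched with a toy detector. The approach (thresholding band-limited energy and requiring a minimum call duration) is a common generic strategy, not the authors' specific software; the threshold, frame counts, and data are illustrative assumptions.

```python
# Hypothetical sketch: counting ultrasonic calls from a per-frame energy
# trace in the 50-kHz band. A call is a run of consecutive frames above
# threshold lasting at least min_frames; shorter runs are treated as
# noise. All values are illustrative.

def count_calls(band_energy, threshold=1.0, min_frames=3):
    calls, run = 0, 0
    for e in band_energy:
        if e >= threshold:
            run += 1
        else:
            if run >= min_frames:
                calls += 1
            run = 0
    if run >= min_frames:  # a call may end at the recording's edge
        calls += 1
    return calls

# Two calls: a 4-frame call and a 3-frame call; the single-frame
# blip between them is rejected as noise.
trace = [0.1, 1.2, 1.5, 1.3, 1.1, 0.2, 1.4, 0.1, 1.2, 1.3, 1.6]
print(count_calls(trace))  # 2
```

Comparing such counts (and call complexity measures) before and after training is the sense in which the protocol quantifies the effect of targeted vocal exercise.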
Neuroscience, Issue 54, ultrasonic vocalization, rat, aging, Parkinson disease, exercise, 6-hydroxydopamine, voice disorders, voice therapy
A Protocol for Comprehensive Assessment of Bulbar Dysfunction in Amyotrophic Lateral Sclerosis (ALS)
Authors: Yana Yunusova, Jordan R. Green, Jun Wang, Gary Pattee, Lorne Zinman.
Institutions: University of Toronto, Sunnybrook Health Science Centre, University of Nebraska-Lincoln, University of Nebraska Medical Center.
Improved methods for assessing bulbar impairment are necessary for expediting diagnosis of bulbar dysfunction in ALS, for predicting disease progression across speech subsystems, and for addressing the critical need for sensitive outcome measures for ongoing experimental treatment trials. To address this need, we are obtaining longitudinal profiles of bulbar impairment in 100 individuals based on a comprehensive instrumentation-based assessment that yields objective measures. Using instrumental approaches to quantify speech-related behaviors is very important in a field that has primarily relied on subjective, auditory-perceptual forms of speech assessment1. Our assessment protocol measures performance across all of the speech subsystems: respiratory, phonatory (laryngeal), resonatory (velopharyngeal), and articulatory. The articulatory subsystem is divided into the facial components (jaw and lip) and the tongue. Prior research has suggested that each speech subsystem responds differently to neurological diseases such as ALS. The current protocol is designed to test the performance of each speech subsystem as independently of the other subsystems as possible. The speech subsystems are evaluated in the context of more global changes to speech performance; these system-level variables include speaking rate and speech intelligibility. The protocol requires specialized instrumentation as well as commercial and custom software. The respiratory, phonatory, and resonatory subsystems are evaluated using pressure-flow (aerodynamic) and acoustic methods. The articulatory subsystem is assessed using 3D motion tracking techniques. The objective measures that are used to quantify bulbar impairment have been well established in the speech literature and show sensitivity to changes in bulbar function with disease progression. The result of the assessment is a comprehensive, across-subsystem performance profile for each participant.
The profile, when compared to the same measures obtained from healthy controls, is used for diagnostic purposes. Currently, we are testing the sensitivity and specificity of these measures for diagnosis of ALS and for predicting the rate of disease progression. In the long term, the more refined endophenotype of bulbar ALS derived from this work is expected to strengthen future efforts to identify the genetic loci of ALS and improve diagnostic and treatment specificity of the disease as a whole. The objective assessment that is demonstrated in this video may be used to assess a broad range of speech motor impairments, including those related to stroke, traumatic brain injury, multiple sclerosis, and Parkinson disease.
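The two system-level variables mentioned above, speaking rate and intelligibility, reduce to simple formulas. The sketch below shows the standard definitions; the function names and numbers are illustrative, not protocol values.

```python
# Hypothetical sketch of the two system-level measures: speaking rate
# (words per minute) and speech intelligibility (percent of words
# correctly identified by a listener). The formulas are standard; the
# numbers are illustrative.

def speaking_rate_wpm(word_count, duration_s):
    """Words per minute from a timed speech sample."""
    return word_count * 60.0 / duration_s

def intelligibility_pct(words_correct, words_total):
    """Percent of spoken words correctly identified by a listener."""
    return 100.0 * words_correct / words_total

print(speaking_rate_wpm(57, 24.0))  # 142.5
print(intelligibility_pct(52, 57))
```

Tracking these two numbers longitudinally, alongside the subsystem-specific measures, gives the across-subsystem profile described above.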
Medicine, Issue 48, speech, assessment, subsystems, bulbar function, amyotrophic lateral sclerosis
Assessment of Cerebral Lateralization in Children using Functional Transcranial Doppler Ultrasound (fTCD)
Authors: Dorothy V. M. Bishop, Nicholas A. Badcock, Georgina Holt.
Institutions: University of Oxford.
There are many unanswered questions about cerebral lateralization. In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of laterality of cognitive functions1. Other methods, such as fMRI, are expensive for large-scale studies and not always feasible with children2. Here we describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our approach builds on the work of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe and their colleagues at the University of Münster, who pioneered simultaneous measurement of left and right middle cerebral artery blood flow and devised a method of correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3. The middle cerebral artery has a very wide vascular territory (see Figure 1), and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is a word generation task (e.g., think of as many words as you can that begin with the letter 'B')4, but this is not suitable for young children and others with limited literacy skills.
Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, and so we are able to get a reliable result from tasks that involve describing pictures aloud5,6. Accordingly, we have developed a child-friendly task that involves looking at video-clips that tell a story, and then describing what was seen.
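The core computation in fTCD laterality studies can be sketched compactly: express each channel as percent change from a pre-task baseline, then average the left-minus-right difference over the period of interest. This follows the general logic of the Münster approach described above, but the function names, epoch handling, and numbers below are simplified illustrative assumptions (real analyses also correct for heartbeat activity and average over many trials).

```python
# Hypothetical sketch: a laterality index (LI) from simultaneously
# recorded left and right middle cerebral artery blood flow velocities.
# Each channel is expressed as percent change from its pre-task
# baseline; LI is the mean left-minus-right difference over the period
# of interest. Positive LI indicates left lateralization. All numbers
# are illustrative.

def percent_change(signal, baseline_mean):
    return [(v - baseline_mean) / baseline_mean * 100.0 for v in signal]

def laterality_index(left, right, baseline_left, baseline_right):
    bl = sum(baseline_left) / len(baseline_left)
    br = sum(baseline_right) / len(baseline_right)
    dl = percent_change(left, bl)
    dr = percent_change(right, br)
    diffs = [l - r for l, r in zip(dl, dr)]
    return sum(diffs) / len(diffs)

# Word-generation epoch: left velocity rises more than right.
li = laterality_index(
    left=[63.0, 66.0, 66.0],  baseline_left=[60.0, 60.0],
    right=[51.0, 51.5, 51.5], baseline_right=[50.0, 50.0],
)
print(round(li, 2))  # 5.67
```

Averaging the index over many task epochs, rather than a single one as here, is what makes the measure reliable in practice.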
Neuroscience, Issue 43, functional transcranial Doppler ultrasound, cerebral lateralization, language, child
Testing Sensory and Multisensory Function in Children with Autism Spectrum Disorder
Authors: Sarah H. Baum, Ryan A. Stevenson, Mark T. Wallace.
Institutions: Vanderbilt University Medical Center, University of Toronto, Vanderbilt University.
In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan.
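The temporal dependence described above is often summarized as a "temporal binding window": the range of audiovisual asynchronies over which the two stimuli are still judged simultaneous. The sketch below shows one simple way such a window could be estimated from simultaneity-judgment data; the criterion and all numbers are illustrative assumptions, not the battery's actual analysis.

```python
# Hypothetical sketch: estimating a temporal binding window from a
# simultaneity-judgment task. For each stimulus onset asynchrony (SOA,
# ms; negative = auditory first), p_simultaneous is the proportion of
# trials reported as simultaneous. The window is taken here as the span
# of SOAs whose report rate meets a criterion. Values are illustrative.

def binding_window(soas_ms, p_simultaneous, criterion=0.75):
    inside = [s for s, p in zip(soas_ms, p_simultaneous) if p >= criterion]
    if not inside:
        return 0.0
    return max(inside) - min(inside)

soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
rates = [0.05, 0.20, 0.55, 0.85, 0.98, 0.90, 0.80, 0.40, 0.10]
print(binding_window(soas, rates))  # 300
```

A widened window of this kind is one way changes in temporal integration, such as those reported in ASD, can be expressed as a single number for group comparison.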
Behavior, Issue 98, Temporal processing, multisensory integration, psychophysics, computer based assessments, sensory deficits, autism spectrum disorder
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X