Chimpanzee vocal signaling points to a multimodal origin of human language.
PLoS ONE
PUBLISHED: 03-22-2011
The evolutionary origin of human language and its neurobiological foundations have long been the object of intense scientific debate. Although a number of theories have been proposed, one particularly contentious model suggests that human language evolved from a manual gestural communication system in a common ape-human ancestor. Consistent with a gestural origins theory are data indicating that chimpanzees intentionally and referentially communicate via manual gestures, and that the production of manual gestures, in conjunction with vocalizations, activates the chimpanzee homologue of Broca's area--a region in the human brain that is critical for the planning and execution of language. However, it is not known whether this activity observed in the chimpanzee Broca's area homologue is the result of the chimpanzees producing manual communicative gestures, communicative sounds, or both. This information is critical for evaluating the theory that human language evolved from a strictly manual gestural system. To this end, we used positron emission tomography (PET) to examine neural metabolic activity in the chimpanzee brain. We collected PET data in 4 subjects, all of whom produced manual communicative gestures. However, 2 of these subjects also produced so-called attention-getting vocalizations directed towards a human experimenter. Interestingly, only the two subjects that produced these attention-getting sounds showed greater mean metabolic activity in the Broca's area homologue as compared to a baseline scan; the two subjects that did not produce attention-getting sounds showed no such increase. These data contradict an exclusive "gestural origins" theory, for they suggest that it is vocal signaling that selectively activates the Broca's area homologue in chimpanzees.
In other words, the activity observed in the Broca's area homologue reflects the production of vocal signals by the chimpanzees, suggesting that this critical human language region was involved in vocal signaling in the common ancestor of both modern humans and chimpanzees.
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Published: 06-14-2014
ABSTRACT
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases both while listening to speech and while viewing speech-related movements. TMS can also be used to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
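The excitability measure described above reduces to a simple computation on the EMG trace. Below is a minimal sketch (not the authors' analysis code) of extracting a peak-to-peak MEP amplitude from a fixed post-pulse window; the sampling rate, window bounds, and synthetic trace are illustrative assumptions.

```python
import numpy as np

def mep_peak_to_peak(emg, fs, pulse_idx, win_ms=(10, 40)):
    """Peak-to-peak MEP amplitude in a fixed window after the TMS pulse.

    emg       : 1-D EMG trace from the lip electrodes (mV)
    fs        : sampling rate (Hz)
    pulse_idx : sample index of the TMS pulse
    win_ms    : post-pulse window where the MEP is expected (ms)
    """
    start = pulse_idx + int(win_ms[0] * fs / 1000)
    stop = pulse_idx + int(win_ms[1] * fs / 1000)
    segment = emg[start:stop]
    return segment.max() - segment.min()

# Synthetic trace: flat baseline with a biphasic deflection after the pulse
fs = 5000
emg = np.zeros(fs)
pulse = 1000
emg[pulse + 75] = 0.6    # positive peak ~15 ms post-pulse (mV)
emg[pulse + 100] = -0.4  # negative peak ~20 ms post-pulse (mV)
print(mep_peak_to_peak(emg, fs, pulse))  # 1.0
```

Comparing such amplitudes before and after the 15-min repetitive train would quantify the suppression of motor excitability.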
26 Related JoVE Articles!
Targeted Training of Ultrasonic Vocalizations in Aged and Parkinsonian Rats
Authors: Aaron M. Johnson, Emerald J. Doll, Laura M. Grant, Lauren Ringel, Jaime N. Shier, Michelle R. Ciucci.
Institutions: University of Wisconsin.
Voice deficits are a common complication of both Parkinson disease (PD) and aging; they can significantly diminish quality of life by impacting communication abilities.1,2 Targeted training (speech/voice therapy) can improve specific voice deficits,3,4 although the underlying mechanisms of behavioral interventions are not well understood. Systematic investigation of voice deficits and therapy should consider many factors that are difficult to control in humans, such as age, home environment, age post-onset of disease, severity of disease, and medications. The method presented here uses an animal model of vocalization that allows for systematic study of how underlying sensorimotor mechanisms change with targeted voice training. The ultrasonic recording and analysis procedures outlined in this protocol are applicable to any investigation of rodent ultrasonic vocalizations. The ultrasonic vocalizations of rodents are emerging as a valuable model to investigate the neural substrates of behavior.5-8 Both rodent and human vocalizations carry semiotic value and are produced by modifying an egressive airflow with a laryngeal constriction.9,10 Thus, rodent vocalizations may be a useful model to study voice deficits in a sensorimotor context. Further, rat models allow us to study the neurobiological underpinnings of recovery from deficits with targeted training. To model PD we use Long-Evans rats (Charles River Laboratories International, Inc.) and induce parkinsonism by a unilateral infusion of 7 μg of 6-hydroxydopamine (6-OHDA) into the medial forebrain bundle, which causes moderate to severe degeneration of presynaptic striatal neurons (for details see Ciucci, 2010).11,12 For our aging model we use the Fischer 344/Brown Norway F1 rat (National Institute on Aging). Our primary method for eliciting vocalizations is to expose sexually experienced male rats to sexually receptive female rats.
When the male becomes interested in the female, the female is removed and the male continues to vocalize. By rewarding complex vocalizations with food or water, both the number of complex vocalizations and the rate of vocalizations can be increased (Figure 1). An ultrasonic microphone mounted above the male's home cage records the vocalizations. Recording begins after the female rat is removed to isolate the male calls. Vocalizations can be viewed in real time for training or recorded and analyzed offline. By recording and acoustically analyzing vocalizations before and after vocal training, the effects of disease and restoration of normal function with training can be assessed. This model also allows us to relate the observed behavioral (vocal) improvements to changes in the brain and neuromuscular system.
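Offline analysis of such recordings typically begins by locating call segments in the ultrasonic band. The sketch below (an illustrative assumption, not the protocol's actual analysis software) thresholds spectrogram energy between 30 and 90 kHz to flag time bins containing vocalizations; the threshold and synthetic audio are hypothetical.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_usv_bins(audio, fs, band=(30e3, 90e3), thresh_db=-50.0):
    """Flag spectrogram time bins whose energy in the ultrasonic band
    exceeds a fixed threshold (threshold value is a placeholder)."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    in_band = (f >= band[0]) & (f <= band[1])
    band_power_db = 10 * np.log10(Sxx[in_band].sum(axis=0) + 1e-12)
    return t, band_power_db > thresh_db

# Synthetic recording: noise floor with a 50 kHz "call" from 0.2-0.3 s
fs = 250_000
t = np.arange(int(0.5 * fs)) / fs
audio = 1e-4 * np.random.default_rng(0).standard_normal(t.size)
call = (t > 0.2) & (t < 0.3)
audio[call] += 0.5 * np.sin(2 * np.pi * 50e3 * t[call])
times, active = detect_usv_bins(audio, fs)
```

Counting flagged segments before and after training would yield the call-rate measures described above.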
Neuroscience, Issue 54, ultrasonic vocalization, rat, aging, Parkinson disease, exercise, 6-hydroxydopamine, voice disorders, voice therapy
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated to the brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior in order to sort out the correct trials, where the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if error trials are included in the same analysis as correct trials, the measured signal no longer reflects correct performance alone. In many cases these error trials can themselves be analyzed separately and correlated with brain activity. We describe two complementary tasks that are used in our lab to examine the brain during suppression of automatic responses: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the emotional 'word' superimposed across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between two simultaneously competing processes: word reading and facial expression recognition. Our urge to read out a word creates strong stimulus-response (SR) associations; hence inhibiting these strong SRs is difficult and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position.
Yet again we measure behavior by recording the eye movements of participants which allows for the sorting of the behavioral responses into correct and error trials7 which then can be correlated to brain activity. Neuroimaging now allows researchers to measure different behaviors of correct and error trials that are indicative of different cognitive processes and pinpoint the different neural networks involved.
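The sorting step described above can be sketched as a simple partition of trials into correct and error onset lists, which would then feed separate event-related regressors. The field names and criterion below are illustrative assumptions, not the authors' actual pipeline.

```python
def sort_trials(trials):
    """Split anti-saccade trials into correct and error onsets for
    separate fMRI regressors. A trial is correct when the saccade goes
    opposite to the cued location (field names are hypothetical)."""
    correct = [t["onset"] for t in trials if t["saccade_dir"] != t["cue_dir"]]
    errors = [t["onset"] for t in trials if t["saccade_dir"] == t["cue_dir"]]
    return correct, errors

trials = [
    {"onset": 0.0, "cue_dir": "left", "saccade_dir": "right"},   # correct anti-saccade
    {"onset": 12.0, "cue_dir": "right", "saccade_dir": "right"}, # erroneous pro-saccade
]
correct, errors = sort_trials(trials)
print(correct, errors)  # [0.0] [12.0]
```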
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
Exploring Cognitive Functions in Babies, Children & Adults with Near Infrared Spectroscopy
Authors: Mark H. Shalinsky, Ioulia Kovelman, Melody S. Berens, Laura-Ann Petitto.
Institutions: University of Michigan, Ann Arbor, University of Toronto Scarborough.
An explosion of functional Near Infrared Spectroscopy (fNIRS) studies investigating cortical activation in relation to higher cognitive processes, such as language1,2,3,4,5,6,7,8,9,10, memory11, and attention12, is underway worldwide, involving adults, children, and infants3,4,13,14,15,16,17,18,19 with typical and atypical cognition20,21,22. The contemporary challenge of using fNIRS for cognitive neuroscience is to achieve systematic analyses of data such that they are universally interpretable23,24,25,26, and thus may advance important scientific questions about the functional organization and neural systems underlying human higher cognition. Existing neuroimaging technologies trade off temporal against spatial resolution: Event-Related Potentials and Magnetoencephalography (ERP and MEG) have excellent temporal resolution, whereas Positron Emission Tomography and functional Magnetic Resonance Imaging (PET and fMRI) have better spatial resolution. Using non-ionizing light in the near-infrared range (700-1000 nm), where deoxy-hemoglobin preferentially absorbs at 680 nm and oxy-hemoglobin preferentially absorbs at 830 nm (indeed, the very wavelengths hardwired into the Hitachi ETG-4000 fNIRS system illustrated here), fNIRS is well suited for studies of higher cognition: it has both good temporal resolution (~5 s) without the use of radiation and good spatial resolution (~4 cm depth), and does not require participants to be in an enclosed structure27,28. Participants' cortical activity can be assessed while they are comfortably seated in an ordinary chair (adults, children) or even seated in mom's lap (infants). Notably, NIRS is uniquely portable (the size of a desktop computer), virtually silent, and can tolerate a participant's subtle movements.
This is particularly valuable for the neural study of human language, which necessarily has as one of its key components the movement of the mouth in speech production or the hands in sign language. The hemodynamic response is localized by an array of laser emitters and detectors: emitters deliver a known intensity of non-ionizing light, while detectors measure the amount reflected back from the cortical surface. The closer together the optodes, the greater the spatial resolution, whereas the further apart the optodes, the greater the depth of penetration. For optimal penetration and resolution with the Hitachi ETG-4000 fNIRS system, the optode spacing is set to 2 cm. Our goal is to demonstrate our method of acquiring and analyzing fNIRS data to help standardize the field and enable different fNIRS labs worldwide to have a common background.
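Under the hood, the detected light-intensity changes are converted into hemoglobin concentration changes via the modified Beer-Lambert law, using the differing absorption of oxy- and deoxy-hemoglobin at the two wavelengths. The sketch below inverts that relation for a single channel; the extinction coefficients, path length, and differential pathlength factor (DPF) are illustrative placeholders, not calibrated values.

```python
import numpy as np

# Extinction coefficients (1/(mM*cm)); illustrative values, not calibrated.
# Rows: the two wavelengths; columns: (oxy-Hb, deoxy-Hb).
E = np.array([[0.35, 2.10],
              [1.05, 0.78]])

def hb_concentration_changes(d_od, path_cm=3.0, dpf=6.0):
    """Invert the modified Beer-Lambert law for one fNIRS channel.

    d_od : optical-density changes measured at the two wavelengths
    Returns (dHbO, dHbR) concentration changes in mM.
    """
    L = path_cm * dpf  # effective optical path length (cm)
    return np.linalg.solve(E * L, d_od)

d_od = np.array([0.010, 0.015])
d_hbo, d_hbr = hb_concentration_changes(d_od)
```

Solving the 2x2 system per time point yields the oxy/deoxy time courses that fNIRS analyses operate on.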
Neuroscience, Issue 29, infant, child, Near Infrared Spectroscopy, fNIRS, optical tomography, cognitive neuroscience, psychology, brain, developmental cognitive neuroscience, analysis
Construction and Characterization of a Novel Vocal Fold Bioreactor
Authors: Aidan B. Zerdoum, Zhixiang Tong, Brendan Bachman, Xinqiao Jia.
Institutions: University of Delaware.
In vitro engineering of mechanically active tissues requires the presentation of physiologically relevant mechanical conditions to cultured cells. To emulate the dynamic environment of vocal folds, a novel vocal fold bioreactor capable of producing vibratory stimulations at fundamental phonation frequencies is constructed and characterized. The device is composed of a function generator, a power amplifier, a speaker selector and parallel vibration chambers. Individual vibration chambers are created by sandwiching a custom-made silicone membrane between a pair of acrylic blocks. The silicone membrane not only serves as the bottom of the chamber but also provides a mechanism for securing the cell-laden scaffold. Vibration signals, generated by a speaker mounted underneath the bottom acrylic block, are transmitted to the membrane aerodynamically by the oscillating air. Eight identical vibration modules, fixed on two stationary metal bars, are housed in an anti-humidity chamber for long-term operation in a cell culture incubator. The vibration characteristics of the vocal fold bioreactor are analyzed non-destructively using a Laser Doppler Vibrometer (LDV). The utility of the dynamic culture device is demonstrated by culturing cellular constructs in the presence of 200-Hz sinusoidal vibrations with a mid-membrane displacement of 40 µm. Mesenchymal stem cells cultured in the bioreactor respond to the vibratory signals by altering the synthesis and degradation of vocal fold-relevant, extracellular matrix components. The novel bioreactor system presented herein offers an excellent in vitro platform for studying vibration-induced mechanotransduction and for the engineering of functional vocal fold tissues.
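The vibration condition reported above (200 Hz sinusoid, 40 µm mid-membrane displacement) fixes the membrane's peak velocity and acceleration, the quantities an LDV measurement would reflect. A quick derivation, assuming pure sinusoidal motion (a simplification; these numbers are derived here, not taken from the source):

```python
import math

f = 200.0    # vibration frequency (Hz)
amp = 40e-6  # mid-membrane displacement amplitude (m)

# For x(t) = amp * sin(omega * t):
omega = 2 * math.pi * f
peak_velocity = omega * amp    # m/s, roughly 0.05
peak_accel = omega ** 2 * amp  # m/s^2, roughly 63
```

These kinematic bounds characterize the mechanical stimulus the cell-laden scaffolds experience in the chamber.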
Bioengineering, Issue 90, vocal fold; bioreactor; speaker; silicone membrane; fibrous scaffold; mesenchymal stem cells; vibration; extracellular matrix
Investigating the Three-dimensional Flow Separation Induced by a Model Vocal Fold Polyp
Authors: Kelley C. Stewart, Byron D. Erath, Michael W. Plesniak.
Institutions: The George Washington University, Clarkson University.
The fluid-structure energy exchange process for normal speech has been studied extensively, but it is not well understood for pathological conditions. Polyps and nodules, which are geometric abnormalities that form on the medial surface of the vocal folds, can disrupt vocal fold dynamics and thus have devastating consequences on a patient's ability to communicate. Our laboratory has reported particle image velocimetry (PIV) measurements, within an investigation of a model polyp located on the medial surface of an in vitro driven vocal fold model, which show that such a geometric abnormality considerably disrupts the glottal jet behavior. This flow field adjustment is a likely reason for the severe degradation of vocal quality in patients with polyps. A more complete understanding of the formation and propagation of vortical structures from a geometric protuberance, such as a vocal fold polyp, and the resulting influence on the aerodynamic loadings that drive vocal fold dynamics, is necessary for advancing the treatment of this pathological condition. The present investigation concerns the three-dimensional flow separation induced by a wall-mounted prolate hemispheroid with a 2:1 aspect ratio in cross flow, i.e. a model vocal fold polyp, using an oil-film visualization technique. Unsteady, three-dimensional flow separation and its impact on the wall pressure loading are examined using skin friction line visualization and wall pressure measurements.
Bioengineering, Issue 84, oil-flow visualization, vocal fold polyp, three-dimensional flow separation, aerodynamic pressure loadings
Dissection and Downstream Analysis of Zebra Finch Embryos at Early Stages of Development
Authors: Jessica R. Murray, Monika E. Stanciauskas, Tejas S. Aralere, Margaret S. Saha.
Institutions: College of William and Mary.
The zebra finch (Taeniopygia guttata) has become an increasingly important model organism in many areas of research including toxicology1,2, behavior3, and memory and learning4,5,6. As the only songbird with a sequenced genome, the zebra finch has great potential for use in developmental studies; however, the early stages of zebra finch development have not been well studied. The lack of research on zebra finch development can be attributed to the difficulty of dissecting the small egg and embryo. The following dissection method minimizes embryonic tissue damage, which allows for investigation of morphology and gene expression at all stages of embryonic development. This permits both bright field and fluorescence quality imaging of embryos, use in molecular procedures such as in situ hybridization (ISH), cell proliferation assays, and RNA extraction for quantitative assays such as quantitative real-time PCR (qRT-PCR). This technique allows investigators to study early stages of development that were previously difficult to access.
Developmental Biology, Issue 88, zebra finch (Taeniopygia guttata), dissection, embryo, development, in situ hybridization, 5-ethynyl-2’-deoxyuridine (EdU)
Fluorescence-based Monitoring of PAD4 Activity via a Pro-fluorescence Substrate Analog
Authors: Mary J. Sabulski, Jonathan M. Fura, Marcos M. Pires.
Institutions: Lehigh University.
Post-translational modifications alter protein functional states by introducing covalent variations on the side chains of many protein substrates. The histone tails represent one of the most heavily modified stretches within all human proteins. Peptidyl-arginine deiminase 4 (PAD4) has been shown to convert arginine residues into the non-genetically encoded citrulline residue. Few assays described to date have been both operationally facile and satisfactorily sensitive. Thus, the lack of adequate assays has likely contributed to the absence of potent non-covalent PAD4 inhibitors. Herein a novel fluorescence-based assay that allows for the monitoring of PAD4 activity is described. A pro-fluorescent substrate analog was designed to link PAD4 enzymatic activity to fluorescence liberation upon the addition of the protease trypsin. It was shown that the assay is compatible with high-throughput screening conditions and has a strong signal-to-noise ratio. Furthermore, the assay can also be performed with crude cell lysates containing over-expressed PAD4.
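Compatibility with high-throughput screening is commonly quantified with the Z'-factor, which combines the separation and spread of positive and negative controls. A minimal sketch with hypothetical fluorescence readings (the source does not report these numbers):

```python
import statistics

def z_prime(pos, neg):
    """Z'-factor assay-quality metric; values above 0.5 are generally
    considered excellent for high-throughput screens."""
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical relative-fluorescence readings for controls
pos = [980, 1010, 1000, 990]  # substrate + PAD4 + trypsin
neg = [100, 110, 95, 105]     # substrate + trypsin, no PAD4
quality = z_prime(pos, neg)
```

A well-separated assay like this hypothetical one yields a Z'-factor near 0.9.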
Chemistry, Issue 93, PAD4, PADI4, citrullination, arginine, post-translational modification, HTS, assay, fluorescence, citrulline
Synthetic, Multi-Layer, Self-Oscillating Vocal Fold Model Fabrication
Authors: Preston R. Murray, Scott L. Thomson.
Institutions: Brigham Young University.
Sound for the human voice is produced via flow-induced vocal fold vibration. The vocal folds consist of several layers of tissue, each with differing material properties1. Normal voice production relies on healthy tissue and vocal folds, and occurs as a result of complex coupling between aerodynamic, structural dynamic, and acoustic physical phenomena. Voice disorders affect up to 7.5 million people annually in the United States alone2 and often result in significant financial, social, and other quality-of-life difficulties. Understanding the physics of voice production has the potential to significantly benefit voice care, including clinical prevention, diagnosis, and treatment of voice disorders. Existing methods for studying voice production include in vivo experimentation using human and animal subjects, in vitro experimentation using excised larynges and synthetic models, and computational modeling. Owing to hazardous and difficult instrument access, in vivo experiments are severely limited in scope. Excised larynx experiments have the benefit of anatomical and some physiological realism, but parametric studies involving geometric and material property variables are limited. Further, excised larynges can typically be vibrated only for relatively short periods of time (on the order of minutes). Overcoming some of the limitations of excised larynx experiments, synthetic vocal fold models are emerging as a complementary tool for studying voice production. Synthetic models can be fabricated with systematic changes to geometry and material properties, allowing for the study of healthy and unhealthy human phonatory aerodynamics, structural dynamics, and acoustics. For example, they have been used to study left-right vocal fold asymmetry3,4, clinical instrument development5, laryngeal aerodynamics6-9, vocal fold contact pressure10, and subglottal acoustics11 (a more comprehensive list can be found in Kniesburges et al.
12). Existing synthetic vocal fold models, however, have either been homogeneous (one-layer models) or have been fabricated using two materials of differing stiffness (two-layer models). This approach does not allow for representation of the actual multi-layer structure of the human vocal folds1, which plays a central role in governing vocal fold flow-induced vibratory response. Consequently, one- and two-layer synthetic vocal fold models have exhibited disadvantages3,6,8 such as higher onset pressures than are typical for human phonation (onset pressure is the minimum lung pressure required to initiate vibration), unnaturally large inferior-superior motion, and lack of a "mucosal wave" (a vertically traveling wave that is characteristic of healthy human vocal fold vibration). In this paper, fabrication of a model with multiple layers of differing material properties is described. The model layers simulate the multi-layer structure of the human vocal folds, including epithelium, superficial lamina propria (SLP), intermediate and deep lamina propria (i.e., ligament; a fiber is included for anterior-posterior stiffness), and muscle (i.e., body) layers1. Results are included that show that the model exhibits improved vibratory characteristics over prior one- and two-layer synthetic models, including onset pressure closer to human onset pressure, reduced inferior-superior motion, and evidence of a mucosal wave.
Bioengineering, Issue 58, Vocal folds, larynx, voice, speech, artificial biomechanical models
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher.
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.
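A BRET measurement ultimately reduces to a ratio of acceptor to donor emission, corrected for luciferase bleed-through into the acceptor channel using a donor-only control. The emission channels and photon counts below are illustrative assumptions, not values from the protocol:

```python
def bret_ratio(em_acceptor, em_donor):
    """Raw BRET ratio: acceptor (YFP, ~530 nm) over donor
    (luciferase, ~480 nm) emission."""
    return em_acceptor / em_donor

def corrected_bret(em_acceptor, em_donor, donor_only_acceptor, donor_only_donor):
    """Subtract the donor-only ratio to remove luciferase emission that
    bleeds into the acceptor channel."""
    return (bret_ratio(em_acceptor, em_donor)
            - bret_ratio(donor_only_acceptor, donor_only_donor))

# Hypothetical counts: interacting pair vs. donor-only control
print(corrected_bret(12000, 20000, 4000, 16000))  # 0.35
```

A corrected ratio well above zero, as in this hypothetical example, would indicate energy transfer and hence proximity of the two fusion proteins.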
Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
A Lightweight, Headphones-based System for Manipulating Auditory Feedback in Songbirds
Authors: Lukas A. Hoffmann, Conor W. Kelly, David A. Nicholson, Samuel J. Sober.
Institutions: Emory University.
Experimental manipulations of sensory feedback during complex behavior have provided valuable insights into the computations underlying motor control and sensorimotor plasticity1. Consistent sensory perturbations result in compensatory changes in motor output, reflecting changes in feedforward motor control that reduce the experienced feedback error. By quantifying how different sensory feedback errors affect human behavior, prior studies have explored how visual signals are used to recalibrate arm movements2,3 and auditory feedback is used to modify speech production4-7. The strength of this approach rests on the ability to mimic naturalistic errors in behavior, allowing the experimenter to observe how experienced errors in production are used to recalibrate motor output. Songbirds provide an excellent animal model for investigating the neural basis of sensorimotor control and plasticity8,9. The songbird brain provides a well-defined circuit in which the areas necessary for song learning are spatially separated from those required for song production, and neural recording and lesion studies have made significant advances in understanding how different brain areas contribute to vocal behavior9-12. However, the lack of a naturalistic error-correction paradigm - in which a known acoustic parameter is perturbed by the experimenter and then corrected by the songbird - has made it difficult to understand the computations underlying vocal learning or how different elements of the neural circuit contribute to the correction of vocal errors13. The technique described here gives the experimenter precise control over auditory feedback errors in singing birds, allowing the introduction of arbitrary sensory errors that can be used to drive vocal learning. 
Online sound-processing equipment is used to introduce a known perturbation to the acoustics of song, and a miniaturized headphones apparatus is used to replace a songbird's natural auditory feedback with the perturbed signal in real time. We have used this paradigm to perturb the fundamental frequency (pitch) of auditory feedback in adult songbirds, providing the first demonstration that adult birds maintain vocal performance using error correction14. The present protocol can be used to implement a wide range of sensory feedback perturbations (including but not limited to pitch shifts) to investigate the computational and neurophysiological basis of vocal learning.
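To illustrate the kind of perturbation involved, the sketch below applies a pitch shift offline by resampling. Note that this is only a conceptual stand-in: resampling changes signal duration, whereas the protocol relies on dedicated real-time sound-processing hardware to shift pitch without altering timing.

```python
import numpy as np

def pitch_shift_resample(signal, cents):
    """Offline pitch shift by resampling (changes duration; the actual
    protocol uses real-time hardware instead)."""
    ratio = 2 ** (cents / 1200.0)           # +100 cents = one semitone up
    idx = np.arange(0, len(signal), ratio)  # read samples faster -> higher pitch
    return np.interp(idx, np.arange(len(signal)), signal)

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)         # 440 Hz stand-in for a song syllable
shifted = pitch_shift_resample(tone, 100)  # ~466 Hz, one semitone up
```

Feeding the shifted signal back through the headphones, in place of natural feedback, is what drives the compensatory vocal learning described above.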
Neuroscience, Issue 69, Anatomy, Physiology, Zoology, Behavior, Songbird, psychophysics, auditory feedback, biology, sensorimotor learning
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise participants would respond 'no'. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec, and 2) three memory tasks followed the sequence and tested memory for nontarget nouns anywhere from 3 items before to 3 items after the target noun position. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding.
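The trial structure of the first exemplar experiment can be sketched as follows; the word list, target position range, and field names are illustrative assumptions, not the published stimuli.

```python
import random

def build_rsvp_trial(nouns, target, soa_ms=120):
    """One RSVP trial: 8 nouns with the target at a random middle position,
    plus 2AFC memory probes for the items just before and after the target."""
    stream = random.sample(nouns, 7)
    pos = random.randint(2, 5)  # keep both neighbors inside the stream
    stream.insert(pos, target)
    onsets = [i * soa_ms for i in range(len(stream))]  # one item per SOA
    probes = {"pre": stream[pos - 1], "post": stream[pos + 1]}
    return stream, onsets, probes

nouns = ["table", "river", "candle", "stone", "garden", "mirror", "ladder"]
stream, onsets, probes = build_rsvp_trial(nouns, target="sparrow")
```

Varying `soa_ms` (e.g., 120 vs. 240) reproduces the timing manipulation of the second exemplar experiment.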
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
Quantitative and Temporal Control of Oxygen Microenvironment at the Single Islet Level
Authors: Joe Fu-Jiou Lo, Yong Wang, Zidong Li, Zhengtuo Zhao, Di Hu, David T. Eddington, Jose Oberholzer.
Institutions: University of Michigan-Dearborn, University of Illinois at Chicago.
Simultaneous oxygenation and monitoring of glucose stimulus-secretion coupling factors in a single technique is critical for modeling pathophysiological states of islet hypoxia, especially in transplant environments. Standard hypoxic chamber techniques cannot modulate both stimuli at the same time, nor can they provide real-time monitoring of glucose stimulus-secretion coupling factors. To address these difficulties, we applied a multilayered microfluidic technique to integrate both aqueous and gas phase modulations via a diffusion membrane. This creates a stimulation sandwich around the microscaled islets within the transparent polydimethylsiloxane (PDMS) device, enabling monitoring of the aforementioned coupling factors via fluorescence microscopy. Additionally, the gas input is controlled by a pair of microdispensers, providing quantitative, sub-minute modulations of oxygen between 0-21%. This intermittent hypoxia is applied to investigate a new phenomenon of islet preconditioning. Moreover, armed with multimodal microscopy, we were able to examine detailed calcium and KATP channel dynamics during these hypoxic events. We envision microfluidic hypoxia, especially this simultaneous dual-phase technique, as a valuable tool for studying islets as well as many ex vivo tissues.
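With a pair of dispensers switching between air (21% O2) and nitrogen, a target oxygen fraction maps onto a dispenser duty cycle. A minimal sketch, assuming ideal mixing (a simplification of the device's actual gas control, not its firmware):

```python
def dispenser_duty_cycle(target_o2_pct, air_o2_pct=21.0):
    """Fraction of time the air (21% O2) dispenser is open, with the
    balance supplied as nitrogen, assuming ideal mixing."""
    if not 0 <= target_o2_pct <= air_o2_pct:
        raise ValueError("target must lie between 0 and 21% O2")
    return target_o2_pct / air_o2_pct

# e.g., a 5.25% O2 hypoxic step requires the air line open 25% of the time
print(dispenser_duty_cycle(5.25))  # 0.25
```

Alternating such set-points on a sub-minute schedule is what produces the intermittent hypoxia protocol described above.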
Bioengineering, Issue 81, Islets of Langerhans, Microfluidics, Microfluidic Analytical Techniques, oxygen, islet, hypoxia, intermittent hypoxia
50616
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
50893
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to simplify greatly the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail from raw data through all intermediate analyses to the published graphs and statistics within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple. 
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
51047
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
4375
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
51735
Getting to Compliance in Forced Exercise in Rodents: A Critical Standard to Evaluate Exercise Impact in Aging-related Disorders and Disease
Authors: Jennifer C. Arnold, Michael F. Salvatore.
Institutions: Louisiana State University Health Sciences Center.
Awareness is growing of the positive impact of exercise on several disease states with a neurobiological basis, including improvements in cognitive function and physical performance. As a result, the number of animal studies employing exercise is increasing. One intrinsic value of forced exercise is that the investigator has control over the factors that can influence the impact of exercise on behavioral outcomes, notably the frequency, duration, and intensity of the exercise regimen. However, compliance in forced exercise regimens may be an issue, particularly if potential confounds of employing foot-shock are to be avoided. It is also important to consider that since most cognitive and locomotor impairments strike in the aged individual, studies of the impact of exercise on these impairments should consider using aged rodents, with the highest possible level of compliance, to minimize the number of test subjects needed. Here, the pertinent steps and considerations necessary to achieve nearly 100% compliance to treadmill exercise in an aged rodent model will be presented and discussed. Notwithstanding the particular exercise regimen being employed by the investigator, our protocol should be of use to investigators particularly interested in the potential impact of forced exercise on aging-related impairments, including aging-related Parkinsonism and Parkinson's disease.
Behavior, Issue 90, Exercise, locomotor, Parkinson’s disease, aging, treadmill, bradykinesia, Parkinsonism
51827
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
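The minimum-norm estimation named in the keywords below reduces, in its simplest L2 form, to a regularized inverse of the lead field (the forward model mapping cortical sources to sensors). A minimal numpy sketch under that assumption; this is illustrative, not the pipeline used at the London Baby Lab:

```python
import numpy as np

def minimum_norm_inverse(L, lam=0.1):
    """L2 minimum-norm inverse operator: W = L^T (L L^T + lambda*I)^(-1).
    L is (n_sensors, n_sources); W maps sensor data back to source estimates."""
    n_sensors = L.shape[0]
    gram = L @ L.T + lam * np.eye(n_sensors)      # regularized sensor covariance of the forward model
    return L.T @ np.linalg.solve(gram, np.eye(n_sensors))

# Source estimate for sensor data X of shape (n_sensors, n_times): S_hat = W @ X
```

Using an individual or age-appropriate head model changes L, which is exactly why the pediatric head models discussed above matter for source accuracy.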
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
51705
Transferring Cognitive Tasks Between Brain Imaging Modalities: Implications for Task Design and Results Interpretation in fMRI Studies
Authors: Tracy Warbrick, Martina Reske, N. Jon Shah.
Institutions: Research Centre Jülich GmbH.
As cognitive neuroscience methods develop, established experimental tasks are used with emerging brain imaging modalities. Here, transferring a paradigm with a long history of behavioral and electroencephalography (EEG) experiments - the visual oddball task - to a functional magnetic resonance imaging (fMRI) experiment is considered. The aims of this paper are to briefly describe fMRI and when its use is appropriate in cognitive neuroscience; to illustrate how task design can influence the results of an fMRI experiment, particularly when that task is borrowed from another imaging modality; and to explain the practical aspects of performing an fMRI experiment. It is demonstrated that manipulating the task demands in the visual oddball task results in different patterns of blood oxygen level dependent (BOLD) activation. The nature of the fMRI BOLD measure means that many brain regions are found to be active in a particular task, and determining the functions of these areas of activation is very much dependent on task design and analysis. The complex nature of many fMRI tasks means that the details of the task and its requirements need careful consideration when interpreting data. The data show that this is particularly important in tasks relying on a motor response as well as cognitive elements, and that covert and overt responses should be considered where possible. Furthermore, the data show that transferring an EEG paradigm to an fMRI experiment needs careful consideration: it cannot be assumed that the same paradigm will work equally well across imaging modalities. It is therefore recommended that the design of an fMRI study be pilot tested behaviorally to establish the effects of interest, and then pilot tested in the fMRI environment to ensure appropriate design, implementation, and analysis for the effects of interest.
Behavior, Issue 91, fMRI, task design, data interpretation, cognitive neuroscience, visual oddball task, target detection
51793
Creating Dynamic Images of Short-lived Dopamine Fluctuations with lp-ntPET: Dopamine Movies of Cigarette Smoking
Authors: Evan D. Morris, Su Jin Kim, Jenna M. Sullivan, Shuo Wang, Marc D. Normandin, Cristian C. Constantinescu, Kelly P. Cosgrove.
Institutions: Yale University, Massachusetts General Hospital, University of California, Irvine.
We describe experimental and statistical steps for creating dopamine movies of the brain from dynamic PET data. The movies represent minute-to-minute fluctuations of dopamine induced by smoking a cigarette. The smoker is imaged during a natural smoking experience while other possible confounding effects (such as head motion, expectation, novelty, or aversion to smoking repeatedly) are minimized. We present the details of our unique analysis. Conventional methods for PET analysis estimate time-invariant kinetic model parameters, which cannot capture short-term fluctuations in neurotransmitter release. Our analysis - yielding a dopamine movie - is based on our work with kinetic models and other decomposition techniques that allow for time-varying parameters 1-7. This aspect of the analysis - temporal variation - is key to our work. Because our model is also linear in parameters, it is computationally practical to apply at the voxel level. The analysis technique comprises five main steps: preprocessing, modeling, statistical comparison, masking and visualization. Preprocessing is applied to the PET data with a unique 'HYPR' spatial filter 8 that reduces spatial noise but preserves critical temporal information. Modeling identifies the time-varying function that best describes the dopamine effect on 11C-raclopride uptake. The statistical step compares the fit of our (lp-ntPET) model 7 to a conventional model 9. Masking restricts treatment to those voxels best described by the new model. Visualization maps the dopamine function at each voxel to a color scale and produces a dopamine movie. Interim results and sample dopamine movies of cigarette smoking are presented.
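The statistical comparison between a conventional model and a richer time-varying model is, for nested linear models, an F-test on residual sums of squares. A minimal, hypothetical sketch of such a voxelwise comparison (variable names are ours, not from the lp-ntPET implementation):

```python
def nested_f_test(rss_conv, rss_lp, n_frames, p_conv, p_lp):
    """F statistic comparing a conventional kinetic model (p_conv parameters)
    to a richer time-varying model (p_lp parameters) at one voxel.
    rss_* are residual sums of squares; n_frames is the number of PET frames."""
    numerator = (rss_conv - rss_lp) / (p_lp - p_conv)   # improvement per extra parameter
    denominator = rss_lp / (n_frames - p_lp)            # residual variance of the full model
    return numerator / denominator
```

Voxels whose F statistic exceeds a chosen significance threshold would survive the masking step described above.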
Behavior, Issue 78, Neuroscience, Neurobiology, Molecular Biology, Biomedical Engineering, Medicine, Anatomy, Physiology, Image Processing, Computer-Assisted, Receptors, Dopamine, Dopamine, Functional Neuroimaging, Binding, Competitive, mathematical modeling (systems analysis), Neurotransmission, transient, dopamine release, PET, modeling, linear, time-invariant, smoking, F-test, ventral-striatum, clinical techniques
50358
A Dual Tracer PET-MRI Protocol for the Quantitative Measure of Regional Brain Energy Substrates Uptake in the Rat
Authors: Maggie Roy, Scott Nugent, Sébastien Tremblay, Maxime Descoteaux, Jean-François Beaudoin, Luc Tremblay, Roger Lecomte, Stephen C. Cunnane.
Institutions: Université de Sherbrooke.
We present a method for comparing the uptake of the brain's two key energy substrates: glucose and ketones (acetoacetate [AcAc] in this case) in the rat. The developed method is a small-animal positron emission tomography (PET) protocol, in which 11C-AcAc and 18F-fluorodeoxyglucose (18F-FDG) are injected sequentially in each animal. This dual tracer PET acquisition is possible because of the short half-life of 11C (20.4 min). The rats also undergo a magnetic resonance imaging (MRI) acquisition seven days before the PET protocol. Prior to image analysis, PET and MRI images are coregistered to allow the measurement of regional cerebral uptake (cortex, hippocampus, striatum, and cerebellum). A quantitative measure of 11C-AcAc and 18F-FDG brain uptake (cerebral metabolic rate; μmol/100 g/min) is determined by kinetic modeling using the image-derived input function (IDIF) method. Our new dual tracer PET protocol is robust and flexible; the two tracers used can be replaced by different radiotracers to evaluate other processes in the brain. Moreover, our protocol is applicable to the study of brain fuel supply in multiple conditions such as normal aging and neurodegenerative pathologies such as Alzheimer's and Parkinson's diseases.
Neuroscience, Issue 82, positron emission tomography (PET), 18F-fluorodeoxyglucose, 11C-acetoacetate, magnetic resonance imaging (MRI), kinetic modeling, cerebral metabolic rate, rat
50761
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speech in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and with lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are known also to be challenging for children with ASD.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
4331
Basics of Multivariate Analysis in Neuroimaging Data
Authors: Christian Georg Habeck.
Institutions: Columbia University.
Multivariate analysis techniques for neuroimaging data have recently received increasing attention as they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise, techniques1,5,6,7,8,9. Multivariate approaches evaluate correlation/covariance of activation across brain regions, rather than proceeding on a voxel-by-voxel basis. Thus, their results can be more easily interpreted as a signature of neural networks. Univariate approaches, on the other hand, cannot directly address interregional correlation in the brain. Multivariate approaches can also result in greater statistical power when compared with univariate techniques, which are forced to employ very stringent corrections for voxel-wise multiple comparisons. Further, multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. Multivariate techniques are thus well placed to provide information about mean differences and correlations with behavior, similarly to univariate approaches, with potentially greater statistical power and better reproducibility checks. In contrast to these advantages is the high barrier of entry to the use of multivariate approaches, preventing more widespread application in the community. To the neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field might present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination. Researchers should be able to employ them in an informed and accessible manner. The current article is an attempt at a didactic introduction of multivariate techniques for the novice. 
A conceptual introduction is followed by a very simple application to a diagnostic data set from the Alzheimer's Disease Neuroimaging Initiative (ADNI), clearly demonstrating the superior performance of the multivariate approach.
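As a concrete illustration of the covariance-based approach described above, the leading principal component of a subjects-by-voxels data matrix yields a single spatial pattern plus one expression score per subject, rather than a voxel-by-voxel map. A minimal sketch, illustrative only and not the ADNI analysis itself:

```python
import numpy as np

def covariance_pattern(X):
    """Leading principal component of a subjects-by-voxels matrix X:
    one spatial pattern (voxel weights) whose expression varies across subjects."""
    Xc = X - X.mean(axis=0)                        # remove the group-mean image
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pattern = Vt[0]                                # unit-norm spatial pattern
    scores = Xc @ pattern                          # one expression score per subject
    return pattern, scores
```

The subject scores, not the individual voxels, would then be tested for group differences or correlated with behavior, which is where the gain in statistical power comes from.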
JoVE Neuroscience, Issue 41, fMRI, PET, multivariate analysis, cognitive neuroscience, clinical neuroscience
1988
Coherence between Brain Cortical Function and Neurocognitive Performance during Changed Gravity Conditions
Authors: Vera Brümmer, Stefan Schneider, Tobias Vogt, Heiko Strüder, Heather Carnahan, Christopher D. Askew, Roland Csuhaj.
Institutions: German Sport University Cologne, University of Toronto, Queensland University of Technology, Gilching, Germany.
Previous studies of cognitive, mental and/or motor processes during short-, medium- and long-term weightlessness have only been descriptive in nature and focused on psychological aspects. Objective observation of neurophysiological parameters has not yet been carried out - undoubtedly because the technical and methodological means have not been available - and investigations into the neurophysiological effects of weightlessness are therefore in their infancy (Schneider et al. 2008). While imaging techniques such as positron emission tomography (PET) and magnetic resonance imaging (MRI) are hardly applicable in space, the non-invasive near-infrared spectroscopy (NIRS) technique represents a method of mapping hemodynamic processes in the brain in real time that is both relatively inexpensive and usable even under extreme conditions. Combining it with electroencephalography (EEG) opens up the possibility of following electrocortical processes under changing gravity conditions with finer temporal resolution as well as deeper localization, for instance with electrotomography (LORETA). Previous studies showed an increase of beta frequency activity under normal gravity conditions and a decrease under weightlessness during parabolic flight (Schneider et al. 2008a+b). Tilt studies revealed different changes in brain function, suggesting that the changes seen in parabolic flight might reflect emotional processes rather than hemodynamic changes. However, it is still unclear whether these are effects of changed gravity or of hemodynamic changes within the brain. Combining EEG/LORETA and NIRS should for the first time make it possible to map the effect of weightlessness and reduced gravity on both hemodynamic and electrophysiological processes in the brain. Initially, this is to be done as part of a feasibility study during a parabolic flight. Afterwards, it is also planned to use both techniques during medium- and long-term space flight.
It can be assumed that the long-term redistribution of blood volume and the associated increase in the supply of oxygen to the brain will lead to changes in the central nervous system that are also responsible for anaemic processes, which can in turn reduce performance (De Santo et al. 2005); such changes could be crucial for the success and safety of a mission (Genik et al. 2005, Ellis 2000). Depending on these results, it will be necessary to develop and employ extensive countermeasures. Initial results from the MARS500 study suggest that, in addition to their significance for the cardiovascular and locomotor systems, sport and physical activity can play a part in improving neurocognitive parameters. Before this can be fully established, however, it seems necessary to learn more about the influence of changing gravity conditions on neurophysiological processes and associated neurocognitive impairment.
Neuroscience, Issue 51, EEG, NIRS, electrotomography, parabolic flight, weightlessness, imaging, cognitive performance
2670
Probing the Brain in Autism Using fMRI and Diffusion Tensor Imaging
Authors: Rajesh K. Kana, Donna L. Murdaugh, Lauren E. Libero, Mark R. Pennick, Heather M. Wadsworth, Rishi Deshpande, Christi P. Hu.
Institutions: University of Alabama at Birmingham.
Newly emerging theories suggest that the brain does not function as a cohesive unit in autism, and this discordance is reflected in the behavioral symptoms displayed by individuals with autism. While structural neuroimaging findings have provided some insights into brain abnormalities in autism, the consistency of such findings is questionable. Functional neuroimaging, on the other hand, has been more fruitful in this regard: because autism is a disorder of dynamic processing, functional imaging allows examination of communication between cortical networks, which appears to be where the underlying problem occurs in autism. Functional connectivity is defined as the temporal correlation of spatially separate neurological events1. Findings from a number of recent fMRI studies have supported the idea that there is weaker coordination between different parts of the brain that should be working together to accomplish complex social or language problems2,3,4,5,6. One of the mysteries of autism is the coexistence of deficits in several domains along with relatively intact, sometimes enhanced, abilities. Such complex manifestation of autism calls for a global and comprehensive examination of the disorder at the neural level. A compelling recent account of brain functioning in autism, the cortical underconnectivity theory,2,7 provides an integrating framework for the neurobiological bases of autism. The cortical underconnectivity theory of autism suggests that any language, social, or psychological function that is dependent on the integration of multiple brain regions is susceptible to disruption as the processing demand increases. In autism, the underfunctioning of integrative circuitry in the brain may cause widespread underconnectivity. In other words, people with autism may interpret information in a piecemeal fashion at the expense of the whole.
Since cortical underconnectivity among brain regions, especially the frontal cortex and more posterior areas 3,6, has now been relatively well established, we can begin to further understand brain connectivity as a critical component of autism symptomatology. A logical next step in this direction is to examine the anatomical connections that may mediate the functional connections mentioned above. Diffusion Tensor Imaging (DTI) is a relatively novel neuroimaging technique that helps probe the diffusion of water in the brain to infer the integrity of white matter fibers. In this technique, water diffusion in the brain is examined in several directions using diffusion gradients. While functional connectivity provides information about the synchronization of brain activation across different brain areas during a task or during rest, DTI helps in understanding the underlying axonal organization which may facilitate the cross-talk among brain areas. This paper will describe these techniques as valuable tools in understanding the brain in autism and the challenges involved in this line of research.
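The fractional anisotropy measure listed in the keywords below summarizes white-matter integrity from the three eigenvalues of the diffusion tensor. A minimal sketch of the standard formula (illustrative, not code from this study):

```python
import numpy as np

def fractional_anisotropy(ev):
    """FA from the three diffusion-tensor eigenvalues (lambda1..lambda3):
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||.
    FA is 0 for isotropic diffusion and approaches 1 for a single dominant direction."""
    ev = np.asarray(ev, dtype=float)
    md = ev.mean()                                 # mean diffusivity
    return np.sqrt(1.5) * np.linalg.norm(ev - md) / np.linalg.norm(ev)
```

Reduced FA along a tract connecting two regions would be the anatomical counterpart of the weakened functional connectivity discussed above.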
Medicine, Issue 55, Functional magnetic resonance imaging (fMRI), MRI, Diffusion tensor imaging (DTI), Functional Connectivity, Neuroscience, Developmental disorders, Autism, Fractional Anisotropy
3178
Brain Imaging Investigation of the Neural Correlates of Observing Virtual Social Interactions
Authors: Keen Sung, Sanda Dolcos, Sophie Flor-Henry, Crystal Zhou, Claudia Gasior, Jennifer Argo, Florin Dolcos.
Institutions: University of Alberta, University of Illinois, University of Illinois at Urbana-Champaign.
The ability to gauge social interactions is crucial in the assessment of others’ intentions. Factors such as facial expressions and body language affect our decisions in personal and professional life alike 1. These "friend or foe" judgements are often based on first impressions, which in turn may affect our decisions to "approach or avoid". Previous studies investigating the neural correlates of social cognition tended to use static facial stimuli 2. Here, we illustrate an experimental design in which whole-body animated characters were used in conjunction with functional magnetic resonance imaging (fMRI) recordings. Fifteen participants were presented with short movie-clips of guest-host interactions in a business setting, while fMRI data were recorded; at the end of each movie, participants also provided ratings of the host behaviour. This design mimics more closely real-life situations, and hence may contribute to better understanding of the neural mechanisms of social interactions in healthy behaviour, and to gaining insight into possible causes of deficits in social behaviour in such clinical conditions as social anxiety and autism 3.
Neuroscience, Issue 53, Social Perception, Social Knowledge, Social Cognition Network, Non-Verbal Communication, Decision-Making, Event-Related fMRI
2379
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms are trying their best to display videos with relevant content, which can sometimes result in matched videos with only a slight relation.