JoVE Visualize

Pubmed Article
Listening to Puns Elicits the Co-Activation of Alternative Homophone Meanings during Language Production.
PUBLISHED: 06-27-2015
Recent evidence suggests that lexical-semantic activation spread during language production can be dynamically shaped by contextual factors. In this study, we investigated whether semantic processing modes can also affect lexical-semantic activation during word production. Specifically, we tested whether the processing of linguistic ambiguities, presented in the form of puns, has an influence on the co-activation of unrelated meanings of homophones in a subsequent language production task. In a picture-word interference paradigm with word distractors that were semantically related or unrelated to the non-depicted meanings of homophones, we found facilitation induced by related words only when participants listened to puns before object naming, but not when they heard jokes with unambiguous linguistic stimuli. This finding suggests that a semantic processing mode of ambiguity perception can induce the co-activation of alternative homophone meanings during speech planning.
Related JoVE Video

Numerous studies have emerged recently that demonstrate the possibility of modulating, and in some cases enhancing, cognitive processes by exciting brain regions involved in working memory and attention using transcranial electrical brain stimulation. Some researchers now believe the cerebellum supports cognition, possibly via a remote neuromodulatory effect on the prefrontal cortex. This paper describes a procedure for investigating a role for the cerebellum in cognition using transcranial direct current stimulation (tDCS) and a selection of information-processing tasks of varying difficulty, which have previously been shown to involve working memory, attention and cerebellar functioning. One task is the Paced Auditory Serial Addition Task (PASAT); the other is a novel variant of this task, the Paced Auditory Serial Subtraction Task (PASST). A verb generation task and its two controls (noun and verb reading) were also investigated. All five tasks were performed by three separate groups of participants, before and after the modulation of cortico-cerebellar connectivity using anodal, cathodal or sham tDCS over the right cerebellar cortex. The procedure demonstrates how performance (accuracy, verbal response latency and variability) could be selectively improved after cathodal stimulation, but only during tasks that participants rated as difficult, not those rated as easy. Performance was unchanged by anodal or sham stimulation. These findings demonstrate a role for the cerebellum in cognition, whereby activity in the left prefrontal cortex is likely dis-inhibited by cathodal tDCS over the right cerebellar cortex. Transcranial brain stimulation is growing in popularity in various labs and clinics. However, the after-effects of tDCS are inconsistent between individuals, not always polarity-specific, and may even be task- or load-specific, all of which requires further study. Future efforts might also be guided towards neuro-enhancement in cerebellar patients presenting with cognitive impairment once a better understanding of brain stimulation mechanisms has emerged.
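To make the task logic above concrete, here is a minimal Python sketch of how a paced serial addition/subtraction stream can be generated and scored. It is an illustration only, not the authors' implementation; the digit range, the direction of subtraction in the PASST-like variant, and the simulated responses are assumptions.

```python
import random

def make_stream(n_items=61, lo=1, hi=9, seed=0):
    """Generate a paced stream of single digits (1-9)."""
    rng = random.Random(seed)
    return [rng.randint(lo, hi) for _ in range(n_items)]

def expected_answers(stream, mode="add"):
    """Correct answer on each trial: combine the current digit with the previous one.
    mode='add' mimics the PASAT; mode='subtract' mimics a PASST-like variant
    (direction of subtraction is an assumption)."""
    answers = []
    for prev, curr in zip(stream, stream[1:]):
        answers.append(prev + curr if mode == "add" else curr - prev)
    return answers

def score(responses, answers):
    """Proportion of correct responses; None marks an omitted response."""
    correct = sum(1 for r, a in zip(responses, answers) if r is not None and r == a)
    return correct / len(answers)

if __name__ == "__main__":
    stream = make_stream(n_items=10)
    answers = expected_answers(stream, mode="add")
    # Simulated participant who misses every fourth trial.
    responses = [a if i % 4 else None for i, a in enumerate(answers)]
    print("digits:   ", stream)
    print("expected: ", answers)
    print("accuracy: ", round(score(responses, answers), 2))
```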
17 Related JoVE Articles!
Portable Intermodal Preferential Looking (IPL): Investigating Language Comprehension in Typically Developing Toddlers and Young Children with Autism
Authors: Letitia R. Naigles, Andrea T. Tovar.
Institutions: University of Connecticut.
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speaking in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. These last two behaviors are also known to be challenging for children with ASD.7,12,13,16 We present a method which can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes. This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
Medicine, Issue 70, Neuroscience, Psychology, Behavior, Intermodal preferential looking, language comprehension, children with autism, child development, autism
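The looking-time measures described above (how quickly and how long children look at the matching video) can be computed from frame-by-frame gaze codes. The sketch below is a minimal illustration under assumed conventions (30 frames per second coding; codes 'L', 'R', 'A' for left screen, right screen, and away); it is not the authors' coding software.

```python
import numpy as np

FPS = 30  # assumed coding rate (frames per second)

def looking_measures(frames, match_side):
    """frames: per-frame gaze codes 'L', 'R', or 'A' (away), coded from video.
    match_side: which screen matches the test audio ('L' or 'R').
    Returns (proportion of on-screen looking directed at the match,
             latency in seconds to the first look at the match)."""
    frames = np.asarray(frames)
    on_screen = np.isin(frames, ["L", "R"])
    to_match = frames == match_side
    prop_match = to_match.sum() / on_screen.sum() if on_screen.any() else np.nan
    idx = np.flatnonzero(to_match)
    latency = idx[0] / FPS if idx.size else np.nan
    return prop_match, latency

if __name__ == "__main__":
    # Toy trial: the child looks away, then right, then mostly left (the matching screen).
    trial = ["A"] * 5 + ["R"] * 10 + ["L"] * 45
    prop, lat = looking_measures(trial, match_side="L")
    print(f"proportion looking to match: {prop:.2f}, latency: {lat:.2f} s")
```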
Practical Methodology of Cognitive Tasks Within a Navigational Assessment
Authors: Manon Robillard, Chantal Mayer-Crittenden, Annie Roy-Charland, Michèle Minor-Corriveau, Roxanne Bélanger.
Institutions: Laurentian University.
This paper describes an approach for measuring navigation accuracy in relation to cognitive skills; the methodology behind the assessment is outlined in a step-by-step manner. Navigational skills are important when trying to find symbols within a speech-generating device (SGD) that has a dynamic screen and taxonomical organization. The following skills have been found to impact children’s ability to find symbols when navigating within the levels of an SGD: sustained attention, categorization, cognitive flexibility, and fluid reasoning1,2. According to past studies, working memory was not correlated with navigation1,2. The materials needed for this method include a computerized tablet, an augmentative and alternative communication application, a booklet of symbols, and the Leiter International Performance Scale-Revised (Leiter-R)3. This method has been used in two previous studies: Robillard, Mayer-Crittenden, Roy-Charland, Minor-Corriveau and Bélanger1 assessed typically developing children, while Rondeau, Robillard and Roy-Charland2 assessed children and adolescents with a diagnosis of Autism Spectrum Disorder. Direct observation of this method will facilitate replication of the study by other researchers. It will also help clinicians who work with children with complex communication needs to determine a child’s ability to navigate an SGD with taxonomical categorization.
Behavior, Issue 100, Augmentative and alternative communication, navigation, cognition, assessment, speech-language pathology, children
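As a rough illustration of how navigation accuracy can be related to individual cognitive measures, the sketch below computes Spearman correlations on hypothetical data; the variable names and scores are invented for illustration and do not reproduce the cited studies' results.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores for 12 children (the cited studies used Leiter-R subtests).
rng = np.random.default_rng(1)
navigation_accuracy = rng.uniform(0.4, 1.0, 12)
cognitive_scores = {
    "sustained_attention": navigation_accuracy + rng.normal(0, 0.1, 12),
    "categorization":      navigation_accuracy + rng.normal(0, 0.1, 12),
    "working_memory":      rng.uniform(0.4, 1.0, 12),  # simulated as unrelated
}

# Rank-order correlation between navigation accuracy and each cognitive measure.
for name, scores in cognitive_scores.items():
    rho, p = spearmanr(navigation_accuracy, scores)
    print(f"{name:20s} rho = {rho:+.2f}, p = {p:.3f}")
```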
Using Eye Movements to Evaluate the Cognitive Processes Involved in Text Comprehension
Authors: Gary E. Raney, Spencer J. Campbell, Joanna C. Bovee.
Institutions: University of Illinois at Chicago.
The present article describes how to use eye tracking methodologies to study the cognitive processes involved in text comprehension. Measuring eye movements during reading is one of the most precise methods for measuring moment-by-moment (online) processing demands during text comprehension. Cognitive processing demands are reflected by several aspects of eye movement behavior, such as fixation duration, number of fixations, and number of regressions (returning to prior parts of a text). Important properties of eye tracking equipment that researchers need to consider are described, including how frequently the eye position is measured (sampling rate), accuracy of determining eye position, how much head movement is allowed, and ease of use. Also described are properties of stimuli that influence eye movements that need to be controlled in studies of text comprehension, such as the position, frequency, and length of target words. Procedural recommendations related to preparing the participant, setting up and calibrating the equipment, and running a study are given. Representative results are presented to illustrate how data can be evaluated. Although the methodology is described in terms of reading comprehension, much of the information presented can be applied to any study in which participants read verbal stimuli.
Behavior, Issue 83, Eye movements, Eye tracking, Text comprehension, Reading, Cognition
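A minimal sketch of how the eye movement measures named above (number of fixations, fixation duration, regressions) can be derived from a fixation sequence; the input format and the toy data are assumptions, not tied to any particular eye tracker's output.

```python
import numpy as np

def reading_measures(fix_words, fix_durs):
    """fix_words: word index of each fixation, in temporal order.
    fix_durs: duration of each fixation in ms.
    Returns number of fixations, mean fixation duration, and number of
    regressions (saccades that move back to an earlier word)."""
    fix_words = np.asarray(fix_words)
    fix_durs = np.asarray(fix_durs, dtype=float)
    n_fix = len(fix_words)
    mean_dur = fix_durs.mean()
    regressions = int(np.sum(np.diff(fix_words) < 0))
    return n_fix, mean_dur, regressions

if __name__ == "__main__":
    # Toy fixation sequence: the reader regresses once, from word 5 back to word 3.
    words = [0, 1, 2, 3, 4, 5, 3, 6, 7]
    durs  = [220, 180, 240, 260, 210, 300, 280, 190, 230]
    n, m, r = reading_measures(words, durs)
    print(f"{n} fixations, mean duration {m:.0f} ms, {r} regression(s)")
```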
Making Sense of Listening: The IMAP Test Battery
Authors: Johanna G. Barry, Melanie A. Ferguson, David R. Moore.
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child is referred to a health professional with normal hearing and unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally-hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Authors: Magdalena W. Sliwinska, Sylvia Vitello, Joseph T. Devlin.
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
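One common way to recover the time course from a chronometric TMS design is to compare response times for each stimulation-onset condition against a no-TMS baseline. The sketch below illustrates that comparison on simulated reaction times; the onset times, trial counts, and the size of the effect are hypothetical, not results from the SMG study described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical reaction times (ms) with TMS delivered at different onsets after
# stimulus presentation; 120 ms is simulated as the disruptive window.
conditions = {
    "no TMS":     rng.normal(600, 40, 30),
    "TMS 40 ms":  rng.normal(605, 40, 30),
    "TMS 120 ms": rng.normal(650, 40, 30),  # simulated virtual-lesion effect
    "TMS 200 ms": rng.normal(608, 40, 30),
}

baseline = conditions["no TMS"]
for name, rts in conditions.items():
    if name == "no TMS":
        continue
    # Independent-samples t-test of each TMS onset against the no-TMS baseline.
    t, p = stats.ttest_ind(rts, baseline)
    print(f"{name:11s} mean RT {rts.mean():.0f} ms  vs baseline: t = {t:+.2f}, p = {p:.3f}")
```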
Investigating the Effects of Antipsychotics and Schizotypy on the N400 Using Event-Related Potentials and Semantic Categorization
Authors: Vivian Gu, Ola Mohamed Ali, Katherine L'Abbée Lacas, J. Bruno Debruille.
Institutions: McGill University.
Within the field of cognitive neuroscience, functional magnetic resonance imaging (fMRI) is a popular method of visualizing brain function. This is in part because of its excellent spatial resolution, which allows researchers to identify brain areas associated with specific cognitive processes. However, in the quest to localize brain functions, it is relevant to note that many cognitive, sensory, and motor processes have temporal distinctions that are imperative to capture, an aspect that is left unfulfilled by fMRI’s suboptimal temporal resolution. To better understand cognitive processes, it is thus advantageous to utilize event-related potential (ERP) recording as a method of gathering information about the brain. Among its advantages is its excellent temporal resolution, which gives researchers the ability to follow the activity of the brain down to the millisecond. It also directly indexes both excitatory and inhibitory post-synaptic potentials by which most brain computations are performed. This sits in contrast to fMRI, which captures an index of metabolic activity. Further, the non-invasive ERP method does not require a contrast condition: raw ERPs can be examined for just one experimental condition, a distinction from fMRI, where control conditions must be subtracted from the experimental condition, leading to uncertainty in associating observations with experimental or contrast conditions. While ERP recording is limited by poor spatial resolution and limited sensitivity to subcortical activity, its utility, relative cost-effectiveness, and associated advantages offer a strong rationale for its use in cognitive neuroscience to track rapid temporal changes in neural activity. To foster increased use of ERP as a research imaging method, and to ensure proper and accurate data collection, the present article outlines the procedure and key aspects of ERP data acquisition in the framework of a paradigm that uses semantic categorization to examine the effects of antipsychotics and schizotypy on the N400.
Behavior, Issue 93, Electrical brain activity, Semantic categorization, Event-related brain potentials, Neuroscience, Cognition, Psychiatry, Antipsychotic medication, N400, Schizotypy, Schizophrenia.
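To illustrate the core of the ERP analysis referred to above, the sketch below epochs a continuous (simulated) single-channel recording around stimulus onsets, baseline-corrects, averages across trials, and measures mean amplitude in a 300-500 ms N400 window. Sampling rate, epoch limits, and the measurement window are assumptions rather than the authors' acquisition settings.

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def epoch(eeg, onsets, tmin=-0.1, tmax=0.8):
    """Cut single-channel epochs around stimulus onsets and baseline-correct
    using the pre-stimulus interval."""
    pre, post = int(-tmin * FS), int(tmax * FS)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)
    return epochs - baseline

def n400_amplitude(erp, tmin=-0.1, window=(0.3, 0.5)):
    """Mean amplitude of an averaged ERP in the N400 window (300-500 ms)."""
    times = np.arange(erp.size) / FS + tmin
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    eeg = rng.normal(0, 5, 60 * FS)            # 60 s of simulated one-channel EEG
    onsets = np.arange(2 * FS, 55 * FS, FS)    # simulated word onsets, one per second
    erp = epoch(eeg, onsets).mean(axis=0)      # average across trials
    print(f"mean N400-window amplitude: {n400_amplitude(erp):.2f} µV (simulated data)")
```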
A Cognitive Paradigm to Investigate Interference in Working Memory by Distractions and Interruptions
Authors: Jacki Janowich, Jyoti Mishra, Adam Gazzaley.
Institutions: University of New Mexico, University of California, San Francisco.
Goal-directed behavior is often impaired by interference from the external environment, either in the form of distraction by irrelevant information that one attempts to ignore, or by interrupting information that demands attention as part of another (secondary) task goal. Both forms of external interference have been shown to detrimentally impact the ability to maintain information in working memory (WM). Emerging evidence suggests that these different types of external interference exert different effects on behavior and may be mediated by distinct neural mechanisms. Better characterizing the distinct neuro-behavioral impact of irrelevant distractions versus attended interruptions is essential for advancing an understanding of top-down attention, resolution of external interference, and how these abilities become degraded in healthy aging and in neuropsychiatric conditions. This manuscript describes a novel cognitive paradigm developed in the Gazzaley lab that has now been modified into several distinct versions used to elucidate the behavioral and neural correlates of interference by to-be-ignored distractors versus to-be-attended interruptors. Details are provided on variants of this paradigm for investigating interference in visual and auditory modalities, at multiple levels of stimulus complexity, and with experimental timing optimized for electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) studies. In addition, data from younger and older adult participants obtained using this paradigm are reviewed and discussed in the context of their relationship with the broader literatures on external interference and age-related neuro-behavioral changes in resolving interference in working memory.
Behavior, Issue 101, Attention, interference, distraction, interruption, working memory, aging, multi-tasking, top-down attention, EEG, fMRI
A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets
Authors: Zhengang Lu, Jessica Goold, Ming Meng.
Institutions: Dartmouth College.
When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond ‘yes’ at the end of the sequence; otherwise they would respond ‘no’. Two 2-alternative forced choice memory tasks followed the response to determine whether participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could appear anywhere from 3 items before the target noun position to 3 items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicate the previous finding.
Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics
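The key dependent measure in this procedure is memory accuracy for nontarget words as a function of SOA and of the probed word's position relative to the target. A minimal sketch of that tabulation on simulated trial records is shown below; the accuracy values are invented to illustrate an attentional-blink-like suppression, not taken from the published results.

```python
import numpy as np

# Hypothetical trial records: (SOA in ms, position of the probed word relative to
# the target, whether the 2-alternative forced-choice memory response was correct).
rng = np.random.default_rng(3)
positions = [-1, +1]           # word immediately before vs. after the target
soas = [120, 240]
records = []
for soa in soas:
    for pos in positions:
        # Simulate suppression of the word right after the target at the short SOA.
        p_correct = 0.60 if (soa == 120 and pos == +1) else 0.85
        records += [(soa, pos, rng.random() < p_correct) for _ in range(40)]

# Accuracy by SOA x relative position (chance in a 2AFC task is 0.5).
for soa in soas:
    for pos in positions:
        acc = np.mean([c for s, p, c in records if s == soa and p == pos])
        print(f"SOA {soa} ms, position {pos:+d}: accuracy = {acc:.2f}")
```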
Transcranial Direct Current Stimulation and Simultaneous Functional Magnetic Resonance Imaging
Authors: Marcus Meinzer, Robert Lindenberg, Robert Darkow, Lena Ulm, David Copland, Agnes Flöel.
Institutions: University of Queensland, Charité Universitätsmedizin.
Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique that uses weak electrical currents administered to the scalp to manipulate cortical excitability and, consequently, behavior and brain function. In the last decade, numerous studies have addressed short-term and long-term effects of tDCS on different measures of behavioral performance during motor and cognitive tasks, both in healthy individuals and in a number of different patient populations. So far, however, little is known about the neural underpinnings of tDCS-action in humans with regard to large-scale brain networks. This issue can be addressed by combining tDCS with functional brain imaging techniques like functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). In particular, fMRI is the most widely used brain imaging technique to investigate the neural mechanisms underlying cognition and motor functions. Application of tDCS during fMRI allows analysis of the neural mechanisms underlying behavioral tDCS effects with high spatial resolution across the entire brain. Recent studies using this technique identified stimulation induced changes in task-related functional brain activity at the stimulation site and also in more distant brain regions, which were associated with behavioral improvement. In addition, tDCS administered during resting-state fMRI allowed identification of widespread changes in whole brain functional connectivity. Future studies using this combined protocol should yield new insights into the mechanisms of tDCS action in health and disease and new options for more targeted application of tDCS in research and clinical settings. The present manuscript describes this novel technique in a step-by-step fashion, with a focus on technical aspects of tDCS administered during fMRI.
Behavior, Issue 86, noninvasive brain stimulation, transcranial direct current stimulation (tDCS), anodal stimulation (atDCS), cathodal stimulation (ctDCS), neuromodulation, task-related fMRI, resting-state fMRI, functional magnetic resonance imaging (fMRI), electroencephalography (EEG), inferior frontal gyrus (IFG)
Stimulating the Lip Motor Cortex with Transcranial Magnetic Stimulation
Authors: Riikka Möttönen, Jack Rogers, Kate E. Watkins.
Institutions: University of Oxford.
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can be used also to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
Behavior, Issue 88, electromyography, motor cortex, motor evoked potential, motor excitability, speech, repetitive TMS, rTMS, virtual lesion, transcranial magnetic stimulation
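MEP size is typically quantified as the peak-to-peak EMG amplitude in a short window after the TMS pulse. The sketch below illustrates that computation on simulated lip EMG; the sampling rate and the 10-40 ms window are assumptions, not the authors' recording parameters.

```python
import numpy as np

FS = 5000  # assumed EMG sampling rate (Hz)

def mep_amplitude(emg, pulse_idx, win=(0.010, 0.040)):
    """Peak-to-peak MEP amplitude in a window 10-40 ms after the TMS pulse
    (window and sampling rate are assumptions, not the protocol's settings)."""
    start = pulse_idx + int(win[0] * FS)
    stop = pulse_idx + int(win[1] * FS)
    segment = emg[start:stop]
    return segment.max() - segment.min()

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    emg = rng.normal(0, 0.02, FS)              # 1 s of simulated lip EMG (mV)
    pulse = 2000                               # sample index of the TMS pulse
    # Inject a crude simulated MEP about 20 ms after the pulse.
    emg[pulse + 100:pulse + 140] += np.hanning(40) * 0.8
    print(f"MEP peak-to-peak amplitude: {mep_amplitude(emg, pulse):.2f} mV")
```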
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher.
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.
Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
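The BRET signal is commonly expressed as a corrected ratio of acceptor to donor emission, with a donor-only control subtracted to account for bleed-through of donor emission into the acceptor channel. The sketch below illustrates that calculation; the plate-reader counts are hypothetical and the exact correction used in any given lab may differ.

```python
def bret_ratio(acceptor_counts, donor_counts, acceptor_bg, donor_bg):
    """Corrected BRET ratio: the sample's acceptor/donor emission ratio minus the
    same ratio measured for a donor-only (luciferase-only) control."""
    sample_ratio = acceptor_counts / donor_counts
    control_ratio = acceptor_bg / donor_bg
    return sample_ratio - control_ratio

if __name__ == "__main__":
    # Hypothetical plate-reader counts (arbitrary units).
    interacting = bret_ratio(acceptor_counts=5200, donor_counts=20000,
                             acceptor_bg=1500, donor_bg=21000)
    non_interacting = bret_ratio(acceptor_counts=1600, donor_counts=19500,
                                 acceptor_bg=1500, donor_bg=21000)
    print(f"corrected BRET ratio, interacting pair:     {interacting:.3f}")
    print(f"corrected BRET ratio, non-interacting pair: {non_interacting:.3f}")
```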
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
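Learned letter-color associations are assessed with a modified Stroop task, for which a standard summary measure is the congruency effect (incongruent minus congruent response time). A minimal sketch of that computation on simulated pre- and post-training data is given below; the RT values are invented for illustration.

```python
import numpy as np

def congruency_effect(rt_congruent, rt_incongruent):
    """Stroop congruency effect: mean RT difference (incongruent - congruent), in ms."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Hypothetical RTs (ms) on a letter-color Stroop task before and after reading
    # in color; a learned association should enlarge the effect after training.
    pre  = congruency_effect(rng.normal(640, 50, 60), rng.normal(648, 50, 60))
    post = congruency_effect(rng.normal(642, 50, 60), rng.normal(690, 50, 60))
    print(f"congruency effect before training: {pre:5.1f} ms")
    print(f"congruency effect after training:  {post:5.1f} ms")
```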
Utilizing Repetitive Transcranial Magnetic Stimulation to Improve Language Function in Stroke Patients with Chronic Non-fluent Aphasia
Authors: Gabriella Garcia, Catherine Norise, Olufunsho Faseyitan, Margaret A. Naeser, Roy H. Hamilton.
Institutions: University of Pennsylvania, Veterans Affairs Boston Healthcare System, Boston University School of Medicine.
Transcranial magnetic stimulation (TMS) has been shown to significantly improve language function in patients with non-fluent aphasia1. In this experiment, we demonstrate the administration of low-frequency repetitive TMS (rTMS) to an optimal stimulation site in the right hemisphere in patients with chronic non-fluent aphasia. A battery of standardized language measures is administered in order to assess baseline performance. Patients are subsequently randomized to either receive real rTMS or initial sham stimulation. Patients in the real stimulation arm undergo a site-finding phase, consisting of a series of six rTMS sessions administered over five days; stimulation is delivered to a different site in the right frontal lobe during each of these sessions. Each site-finding session consists of 600 pulses of 1 Hz rTMS, preceded and followed by a picture-naming task. By comparing the degree of transient change in naming ability elicited by stimulation of candidate sites, we are able to locate the area of optimal response for each individual patient. We then administer rTMS to this site during the treatment phase. During treatment, patients undergo a total of ten days of stimulation over the span of two weeks; each session consists of 20 min of 1 Hz rTMS delivered at 90% resting motor threshold. Stimulation is paired with an fMRI-naming task on the first and last days of treatment. After the treatment phase is complete, the language battery obtained at baseline is repeated two and six months following stimulation in order to identify rTMS-induced changes in performance. The fMRI-naming task is also repeated two and six months following treatment. Patients who are randomized to the sham arm of the study undergo sham site-finding, sham treatment, fMRI-naming studies, and repeat language testing two months after completing sham treatment. Sham patients then cross over into the real stimulation arm, completing real site-finding, real treatment, fMRI, and two- and six-month post-stimulation language testing.
Medicine, Issue 77, Neurobiology, Neuroscience, Anatomy, Physiology, Biomedical Engineering, Molecular Biology, Neurology, Stroke, Aphasia, Transcranial Magnetic Stimulation, TMS, language, neurorehabilitation, optimal site-finding, functional magnetic resonance imaging, fMRI, brain, stimulation, imaging, clinical techniques, clinical applications
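The site-finding logic described above amounts to comparing the transient change in picture naming elicited by stimulation at each candidate site and selecting the site with the best response. The sketch below illustrates that selection step only; the site labels and naming scores are hypothetical, not the protocol's actual sites or data.

```python
# Hypothetical picture-naming scores (pictures named correctly) immediately before
# and after a 600-pulse 1 Hz rTMS session at each candidate right-frontal site.
candidate_sites = {
    "candidate site A": (14, 19),
    "candidate site B": (15, 16),
    "candidate site C": (15, 13),
    "candidate site D": (14, 14),
}

def pick_optimal_site(sites):
    """Return the site with the largest post-minus-pre change in naming score."""
    changes = {name: post - pre for name, (pre, post) in sites.items()}
    best = max(changes, key=changes.get)
    return best, changes

if __name__ == "__main__":
    best, changes = pick_optimal_site(candidate_sites)
    for name, delta in changes.items():
        print(f"{name:16s} change = {delta:+d}")
    print(f"optimal site for the treatment phase: {best}")
```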
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings 3, 4, 5, 6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) 7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
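Categorical perception along the DHL is often assessed by fitting a sigmoid to identification responses across the morph continuum and locating the category boundary. The sketch below fits a two-parameter logistic to hypothetical identification data; the response proportions and the length of the continuum are assumptions, not data from the protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Two-parameter logistic: x0 is the category boundary, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical identification data: proportion of "human" responses at each of
# 11 morph steps along an avatar-to-human continuum (0 = avatar, 10 = human).
steps = np.arange(11)
p_human = np.array([0.02, 0.03, 0.05, 0.10, 0.25, 0.55,
                    0.85, 0.93, 0.97, 0.98, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_human, p0=[5.0, 1.0])
print(f"estimated category boundary at morph step {x0:.2f}, slope k = {k:.2f}")
# A steep identification slope, together with better discrimination across the
# boundary than within a category, is the classic signature of categorical perception.
```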
Recording Human Electrocorticographic (ECoG) Signals for Neuroscientific Research and Real-time Functional Cortical Mapping
Authors: N. Jeremy Hill, Disha Gupta, Peter Brunner, Aysegul Gunduz, Matthew A. Adamo, Anthony Ritaccio, Gerwin Schalk.
Institutions: New York State Department of Health, Albany Medical College, Washington University, Rensselaer Polytechnic Institute, State University of New York at Albany, University of Texas at El Paso.
Neuroimaging studies of human cognitive, sensory, and motor processes are usually based on noninvasive techniques such as electroencephalography (EEG), magnetoencephalography or functional magnetic-resonance imaging. These techniques have either inherently low temporal or low spatial resolution, and suffer from low signal-to-noise ratio and/or poor high-frequency sensitivity. Thus, they are suboptimal for exploring the short-lived spatio-temporal dynamics of many of the underlying brain processes. In contrast, the invasive technique of electrocorticography (ECoG) provides brain signals that have an exceptionally high signal-to-noise ratio, less susceptibility to artifacts than EEG, and a high spatial and temporal resolution (i.e., <1 cm/<1 millisecond, respectively). ECoG involves measurement of electrical brain signals using electrodes that are implanted subdurally on the surface of the brain. Recent studies have shown that ECoG amplitudes in certain frequency bands carry substantial information about task-related activity, such as motor execution and planning1, auditory processing2 and visual-spatial attention3. Most of this information is captured in the high gamma range (around 70-110 Hz). Thus, gamma activity has been proposed as a robust and general indicator of local cortical function1-5. ECoG can also reveal functional connectivity and resolve finer task-related spatial-temporal dynamics, thereby advancing our understanding of large-scale cortical processes. It has especially proven useful for advancing brain-computer interfacing (BCI) technology for decoding a user's intentions to enhance or improve communication6 and control7. Nevertheless, human ECoG data are often hard to obtain because of the risks and limitations of the invasive procedures involved, and the need to record within the constraints of clinical settings. Still, clinical monitoring to localize epileptic foci offers a unique and valuable opportunity to collect human ECoG data. We describe our methods for recording ECoG, and demonstrate how to use these signals for important real-time applications such as clinical mapping and brain-computer interfacing. Our example uses the BCI2000 software platform8,9 and the SIGFRIED10 method, an application for real-time mapping of brain functions. This procedure yields information that clinicians can subsequently use to guide the complex and laborious process of functional mapping by electrical stimulation.

Prerequisites and Planning: Patients with drug-resistant partial epilepsy may be candidates for resective surgery of an epileptic focus to minimize the frequency of seizures. Prior to resection, the patients undergo monitoring using subdural electrodes for two purposes: first, to localize the epileptic focus, and second, to identify nearby critical brain areas (i.e., eloquent cortex) where resection could result in long-term functional deficits. To implant electrodes, a craniotomy is performed to open the skull. Then, electrode grids and/or strips are placed on the cortex, usually beneath the dura. A typical grid has a set of 8 x 8 platinum-iridium electrodes of 4 mm diameter (2.3 mm exposed surface) embedded in silicone with an inter-electrode distance of 1 cm. A strip typically contains 4 or 6 such electrodes in a single line. The locations for these grids/strips are planned by a team of neurologists and neurosurgeons, and are based on previous EEG monitoring, on a structural MRI of the patient's brain, and on relevant factors of the patient's history.
Continuous recording over a period of 5-12 days serves to localize epileptic foci, and electrical stimulation via the implanted electrodes allows clinicians to map eloquent cortex. At the end of the monitoring period, explantation of the electrodes and therapeutic resection are performed together in one procedure. In addition to its primary clinical purpose, invasive monitoring also provides a unique opportunity to acquire human ECoG data for neuroscientific research. The decision to include a prospective patient in the research is based on the planned location of their electrodes, on the patient's performance scores on neuropsychological assessments, and on their informed consent, which is predicated on their understanding that participation in research is optional and is not related to their treatment. As with all research involving human subjects, the research protocol must be approved by the hospital's institutional review board. The decision to perform individual experimental tasks is made day-by-day, and is contingent on the patient's endurance and willingness to participate. Some or all of the experiments may be prevented by problems with the clinical state of the patient, such as post-operative facial swelling, temporary aphasia, frequent seizures, post-ictal fatigue and confusion, and more general pain or discomfort. At the Epilepsy Monitoring Unit at Albany Medical Center in Albany, New York, clinical monitoring is implemented around the clock using a 192-channel Nihon-Kohden Neurofax monitoring system. Research recordings are made in collaboration with the Wadsworth Center of the New York State Department of Health in Albany. Signals from the ECoG electrodes are fed simultaneously to the research and the clinical systems via splitter connectors. To ensure that the clinical and research systems do not interfere with each other, the two systems typically use separate grounds. In fact, an epidural strip of electrodes is sometimes implanted to provide a ground for the clinical system. For both the research and the clinical recording systems, the grounding electrode is chosen to be distant from the predicted epileptic focus and from cortical areas of interest for the research. Our research system consists of eight synchronized 16-channel g.USBamp amplifier/digitizer units (g.tec, Graz, Austria). These were chosen because they are safety-rated and FDA-approved for invasive recordings, they have a very low noise floor in the high-frequency range in which the signals of interest are found, and they come with an SDK that allows them to be integrated with custom-written research software. In order to capture the high-gamma signal accurately, we acquire signals at a 1200 Hz sampling rate, considerably higher than that of the typical EEG experiment or of many clinical monitoring systems. A built-in low-pass filter automatically prevents aliasing of signals higher than the digitizer can capture. The patient's eye gaze is tracked using a monitor with a built-in Tobii T-60 eye-tracking system (Tobii Tech., Stockholm, Sweden). Additional accessories such as a joystick, Bluetooth Wiimote (Nintendo Co.), data-glove (5th Dimension Technologies), keyboard, microphone, headphones, or video camera are connected depending on the requirements of the particular experiment. Data collection, stimulus presentation, synchronization with the different input/output accessories, and real-time analysis and visualization are accomplished using our BCI2000 software8,9.
BCI2000 is a freely available general-purpose software system for real-time biosignal data acquisition, processing and feedback. It includes an array of pre-built modules that can be flexibly configured for many different purposes, and that can be extended by researchers' own code in C++, MATLAB or Python. BCI2000 consists of four modules that communicate with each other via a network-capable protocol: a Source module that handles the acquisition of brain signals from one of 19 different hardware systems from different manufacturers; a Signal Processing module that extracts relevant ECoG features and translates them into output signals; an Application module that delivers stimuli and feedback to the subject; and the Operator module that provides a graphical interface to the investigator. A number of different experiments may be conducted with any given patient. The priority of experiments will be determined by the location of the particular patient's electrodes. However, we usually begin our experimentation using the SIGFRIED (SIGnal modeling For Realtime Identification and Event Detection) mapping method, which detects and displays significant task-related activity in real time. The resulting functional map allows us to further tailor subsequent experimental protocols and may also prove a useful starting point for traditional mapping by electrocortical stimulation (ECS). Although ECS mapping remains the gold standard for predicting the clinical outcome of resection, the process of ECS mapping is time consuming and also has other problems, such as after-discharges or seizures. Thus, a passive functional mapping technique may prove valuable in providing an initial estimate of the locus of eloquent cortex, which may then be confirmed and refined by ECS. The results from our passive SIGFRIED mapping technique have been shown to exhibit substantial concurrence with the results derived using ECS mapping10. The protocol described in this paper establishes a general methodology for gathering human ECoG data, before proceeding to illustrate how experiments can be initiated using the BCI2000 software platform. Finally, as a specific example, we describe how to perform passive functional mapping using the BCI2000-based SIGFRIED system.
Neuroscience, Issue 64, electrocorticography, brain-computer interfacing, functional brain mapping, SIGFRIED, BCI2000, epilepsy monitoring, magnetic resonance imaging, MRI
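As a small illustration of the signal analysis emphasized above, the sketch below extracts high-gamma (70-110 Hz) power from one simulated ECoG channel sampled at 1200 Hz, using a band-pass filter and the Hilbert envelope. This is a generic band-power computation, not the SIGFRIED method or BCI2000 code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1200  # sampling rate used in the text (Hz)

def high_gamma_power(ecog, low=70.0, high=110.0):
    """Band-pass one ECoG channel in the high-gamma range and return the
    instantaneous power (squared Hilbert envelope)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, ecog)
    return np.abs(hilbert(filtered)) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    t = np.arange(0, 2.0, 1 / FS)
    # Simulated channel: broadband noise plus a burst of 90 Hz activity from 1.0-1.5 s.
    sig = rng.normal(0, 1, t.size)
    burst = (t >= 1.0) & (t < 1.5)
    sig[burst] += 3 * np.sin(2 * np.pi * 90 * t[burst])
    power = high_gamma_power(sig)
    print(f"mean high-gamma power, baseline: {power[t < 1.0].mean():.2f}")
    print(f"mean high-gamma power, burst:    {power[burst].mean():.2f}")
```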
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Authors: Joseph F.X. DeSouza, Shima Ovaysikia, Laura K. Pynn.
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated to the brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior so that we can sort correct trials, in which the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if subjects do not perform the task correctly and these trials are included in the same analysis as the correct trials, the analysis would no longer reflect correct performance alone. Thus, in many cases the error trials themselves can be used to correlate brain activity with errors. We describe two complementary tasks that are used in our lab to examine the brain during suppression of an automatic response: the Stroop1 and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the superimposed emotional 'word' across the affective faces or the facial 'expressions' of the face stimuli1,2. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read a word leads to strong stimulus-response (SR) associations; hence inhibiting these strong SRs is difficult and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task3,4,5,6, an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Here again we measure behavior by recording the eye movements of participants, which allows the behavioral responses to be sorted into correct and error trials7 that can then be correlated to brain activity. Neuroimaging now allows researchers to measure different behaviors of correct and error trials that are indicative of different cognitive processes and to pinpoint the different neural networks involved.
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
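The trial-sorting step described above reduces, in practice, to splitting behavioral event onsets into correct and error lists so that each can be modeled separately against the BOLD signal. A minimal sketch with a hypothetical anti-saccade log follows; the onsets and field names are invented for illustration.

```python
# Hypothetical behavioral log for an anti-saccade run: trial onset (s) and whether
# the first eye movement went to the correct (mirror-opposite) location.
trials = [
    {"onset": 10.0, "correct": True},
    {"onset": 22.5, "correct": False},
    {"onset": 35.0, "correct": True},
    {"onset": 47.5, "correct": True},
    {"onset": 60.0, "correct": False},
]

def split_onsets(trials):
    """Split trial onsets into correct and error lists, so each can enter the
    fMRI model as a separate regressor and be correlated with the BOLD signal."""
    correct = [t["onset"] for t in trials if t["correct"]]
    errors = [t["onset"] for t in trials if not t["correct"]]
    return correct, errors

if __name__ == "__main__":
    correct, errors = split_onsets(trials)
    print("correct-trial onsets:", correct)
    print("error-trial onsets:  ", errors)
```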
Infant Auditory Processing and Event-related Brain Oscillations
Authors: Gabriella Musacchia, Silvia Ortiz-Mantilla, Teresa Realpe-Bonilla, Cynthia P. Roesler, April A. Benasich.
Institutions: Rutgers University, State University of New Jersey, Newark, University of the Pacific, Stanford University.
Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition; allowing infants to hone in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions are fundamental to sensory development; determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI) were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3 - 8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net application, recording, dynamic brain response analysis, and representative results.
Behavior, Issue 101, Infant, Infant Brain, Human Development, Auditory Development, Oscillations, Brain Oscillations, Theta, Electroencephalogram, Child Development, Event-related Potentials, Source Localization, Auditory Cortex
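As a simplified stand-in for the single-trial oscillatory analysis described above, the sketch below estimates theta-band (3-8 Hz) power per trial from Welch spectra and compares two simulated conditions. The published analysis used source modeling and time-frequency decomposition of infant EEG; the sampling rate, epoch length, and data here are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate (Hz)

def theta_power(trial):
    """Power in the theta band (3-8 Hz) for one single-trial epoch, estimated
    by integrating the Welch power spectral density over the band."""
    freqs, psd = welch(trial, fs=FS, nperseg=min(len(trial), FS))
    band = (freqs >= 3) & (freqs <= 8)
    return np.trapz(psd[band], freqs[band])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_trials, n_samples = 50, FS  # 1-s epochs
    t = np.arange(n_samples) / FS
    # Simulated single trials: noise, with extra 5 Hz activity in the "Rapid" condition.
    control = rng.normal(0, 1, (n_trials, n_samples))
    rapid = control + 0.8 * np.sin(2 * np.pi * 5 * t)
    print(f"mean theta power, Control: {np.mean([theta_power(x) for x in control]):.3f}")
    print(f"mean theta power, Rapid:   {np.mean([theta_power(x) for x in rapid]):.3f}")
```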

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases, the language used in a PubMed abstract makes matching that content to a JoVE video difficult. In other cases, our video library simply may not contain content relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.