Dating the origin of language using phonemic diversity.
Language is a key adaptation of our species, yet we do not know when it evolved. Here, we use data on language phonemic diversity to estimate a minimum date for the origin of language. We take advantage of the fact that phonemic diversity evolves slowly and use it as a clock to calculate how long the oldest African languages would have to have been around in order to accumulate the number of phonemes they possess today. We use a natural experiment, the colonization of Southeast Asia and the Andaman Islands, to estimate the rate at which phonemic diversity increases through time. Using this rate, we estimate that present-day languages date back to the Middle Stone Age in Africa. Our analysis is consistent with the archaeological evidence suggesting that complex human behavior evolved during the Middle Stone Age in Africa, and does not support the view that language is a recent adaptation that sparked the dispersal of humans out of Africa. While some of our assumptions require testing and our results rely at present on a single case study, our analysis constitutes the first estimate of when language evolved that is directly based on linguistic data.
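The clock logic described above can be made concrete with a toy calculation: given an estimated rate at which phonemic diversity accrues, the minimum age of a lineage follows from dividing the accumulated phoneme count by that rate. The function and all numbers below are hypothetical placeholders for illustration, not the paper's actual estimates.

```python
# Toy "phonemic clock": extrapolate a minimum lineage age from a constant
# accumulation rate. All numbers are hypothetical placeholders, not the
# paper's estimates.

def minimum_age(phonemes_now, phonemes_at_founding, rate_per_millennium):
    """Years needed to accumulate the observed phoneme count at a constant rate."""
    if rate_per_millennium <= 0:
        raise ValueError("rate must be positive")
    return (phonemes_now - phonemes_at_founding) / rate_per_millennium * 1000

# e.g., a language with 100 phonemes, founded with 30, gaining 0.5/millennium
print(minimum_age(100, 30, 0.5))  # → 140000.0
```

In the paper the rate itself is estimated from the Southeast Asia/Andaman colonization as a natural experiment; here it is simply assumed.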
One of the defining characteristics of autism spectrum disorder (ASD) is difficulty with language and communication.1 The onset of speech in children with ASD is usually delayed, and many children with ASD consistently produce language less frequently and of lower lexical and grammatical complexity than their typically developing (TD) peers.6,8,12,23 However, children with ASD also exhibit a significant social deficit, and researchers and clinicians continue to debate the extent to which the deficits in social interaction account for or contribute to the deficits in language production.5,14,19,25 Standardized assessments of language in children with ASD usually do include a comprehension component; however, many such comprehension tasks assess just one aspect of language (e.g., vocabulary),5 or include a significant motor component (e.g., pointing, act-out), and/or require children to deliberately choose between a number of alternatives. The latter two demands are also known to be challenging for children with ASD.7,12,13,16 We present a method that can assess the language comprehension of young typically developing children (9-36 months) and children with autism.2,4,9,11,22 This method, Portable Intermodal Preferential Looking (P-IPL), projects side-by-side video images from a laptop onto a portable screen. The video images are paired first with a 'baseline' (nondirecting) audio, and then presented again paired with a 'test' linguistic audio that matches only one of the video images. Children's eye movements while watching the video are filmed and later coded. Children who understand the linguistic audio will look more quickly to, and longer at, the video that matches the linguistic audio.2,4,11,18,22,26 This paradigm includes a number of components that have recently been miniaturized (projector, camcorder, digitizer) to enable portability and easy setup in children's homes.
This is a crucial point for assessing young children with ASD, who are frequently uncomfortable in new (e.g., laboratory) settings. Videos can be created to assess a wide range of specific components of linguistic knowledge, such as Subject-Verb-Object word order, wh-questions, and tense/aspect suffixes on verbs; videos can also assess principles of word learning such as a noun bias, a shape bias, and syntactic bootstrapping.10,14,17,21,24 Videos include characters and speech that are visually and acoustically salient and well tolerated by children with ASD.
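The looking-time measure at the core of this paradigm can be sketched as a simple scoring routine: frame-by-frame gaze codes are reduced to the proportion of on-screen looking directed at the matching video, compared between baseline and test trials. The gaze codes and frame sequences below are invented for illustration, not actual coding-scheme output.

```python
# Sketch of P-IPL-style looking-time scoring. Each frame is coded "L", "R",
# or "away"; comprehension is indicated by looking longer at the matching
# video during the test trial than at baseline. Data here are invented.

def proportion_to_match(frames, match_side):
    """Proportion of non-'away' frames spent looking at the matching side."""
    on_screen = [f for f in frames if f in ("L", "R")]
    if not on_screen:
        return None  # child never looked at either screen
    return sum(1 for f in on_screen if f == match_side) / len(on_screen)

baseline_frames = ["L", "R", "R", "away", "L", "R"]
test_frames = ["R", "R", "R", "L", "R", "away"]

# looking to the matching (right-hand) video rises from baseline to test
print(proportion_to_match(baseline_frames, "R"))  # → 0.6
print(proportion_to_match(test_frames, "R"))      # → 0.8
```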
Automated, Quantitative Cognitive/Behavioral Screening of Mice: For Genetics, Pharmacology, Animal Cognition and Undergraduate Instruction
Authors: C. R. Gallistel, Fuat Balci, David Freestone, Aaron Kheifets, Adam King.
Institutions: Rutgers University, Koç University, New York University, Fairfield University.
We describe a high-throughput, high-volume, fully automated, live-in 24/7 behavioral testing system for assessing the effects of genetic and pharmacological manipulations on basic mechanisms of cognition and learning in mice. A standard polypropylene mouse housing tub is connected through an acrylic tube to a standard commercial mouse test box. The test box has 3 hoppers, 2 of which are connected to pellet feeders. All are internally illuminable with an LED and monitored for head entries by infrared (IR) beams. Mice live in the environment, which eliminates handling during screening. They obtain their food during two or more daily feeding periods by performing in operant (instrumental) and Pavlovian (classical) protocols, for which we have written protocol-control software and quasi-real-time data analysis and graphing software. The data analysis and graphing routines are written in a MATLAB-based language created to greatly simplify the analysis of large time-stamped behavioral and physiological event records and to preserve a full data trail, from raw data through all intermediate analyses to the published graphs and statistics, within a single data structure. The data-analysis code harvests the data several times a day and subjects it to statistical and graphical analyses, which are automatically stored in the "cloud" and on in-lab computers. Thus, the progress of individual mice is visualized and quantified daily. The data-analysis code talks to the protocol-control code, permitting the automated advance from protocol to protocol of individual subjects. The behavioral protocols implemented are matching, autoshaping, timed hopper-switching, risk assessment in timed hopper-switching, impulsivity measurement, and the circadian anticipation of food availability. Open-source protocol-control and data-analysis code makes the addition of new protocols simple.
Eight test environments fit in a 48 in x 24 in x 78 in cabinet; two such cabinets (16 environments) may be controlled by one computer.
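The abstract's central data structure, a time-stamped event record preserved from raw data through analysis, can be illustrated with a minimal sketch. The actual system uses custom MATLAB-based tooling; the event codes and timestamps below are invented.

```python
# Minimal sketch of querying a time-stamped behavioral event record.
# Each entry is (time_in_seconds, event_code); both are invented examples.

records = [
    (10.2, "hopper1_entry"),
    (11.0, "pellet_delivered"),
    (15.4, "hopper2_entry"),
    (72.9, "hopper1_entry"),
]

def count_events(records, code):
    """Number of events with the given code."""
    return sum(1 for _, c in records if c == code)

def intervals(records, code):
    """Successive inter-event intervals for the given code."""
    times = [t for t, c in records if c == code]
    return [b - a for a, b in zip(times, times[1:])]

print(count_events(records, "hopper1_entry"))  # → 2
print(intervals(records, "hopper1_entry"))     # one ~62.7 s interval
```

Timing and switching protocols like those listed above reduce to exactly this kind of query over the event record.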
Behavior, Issue 84, genetics, cognitive mechanisms, behavioral screening, learning, memory, timing
Utilizing Repetitive Transcranial Magnetic Stimulation to Improve Language Function in Stroke Patients with Chronic Non-fluent Aphasia
Authors: Gabriella Garcia, Catherine Norise, Olufunsho Faseyitan, Margaret A. Naeser, Roy H. Hamilton.
Institutions: University of Pennsylvania, Veterans Affairs Boston Healthcare System, Boston University School of Medicine.
Transcranial magnetic stimulation (TMS) has been shown to significantly improve language function in patients with non-fluent aphasia1. In this experiment, we demonstrate the administration of low-frequency repetitive TMS (rTMS) to an optimal stimulation site in the right hemisphere in patients with chronic non-fluent aphasia. A battery of standardized language measures is administered in order to assess baseline performance. Patients are subsequently randomized to either receive real rTMS or initial sham stimulation. Patients in the real stimulation arm undergo a site-finding phase, comprising six rTMS sessions administered over five days; stimulation is delivered to a different site in the right frontal lobe during each of these sessions. Each site-finding session consists of 600 pulses of 1 Hz rTMS, preceded and followed by a picture-naming task. By comparing the degree of transient change in naming ability elicited by stimulation of candidate sites, we are able to locate the area of optimal response for each individual patient. We then administer rTMS to this site during the treatment phase. During treatment, patients undergo a total of ten days of stimulation over the span of two weeks; each session consists of 20 min of 1 Hz rTMS delivered at 90% resting motor threshold. Stimulation is paired with an fMRI-naming task on the first and last days of treatment. After the treatment phase is complete, the language battery obtained at baseline is repeated two and six months following stimulation in order to identify rTMS-induced changes in performance. The fMRI-naming task is also repeated two and six months following treatment. Patients who are randomized to the sham arm of the study undergo sham site-finding, sham treatment, fMRI-naming studies, and repeat language testing two months after completing sham treatment.
Sham patients then cross over into the real stimulation arm, completing real site-finding, real treatment, fMRI, and two- and six-month post-stimulation language testing.
Medicine, Issue 77, Neurobiology, Neuroscience, Anatomy, Physiology, Biomedical Engineering, Molecular Biology, Neurology, Stroke, Aphasia, Transcranial Magnetic Stimulation, TMS, language, neurorehabilitation, optimal site-finding, functional magnetic resonance imaging, fMRI, brain, stimulation, imaging, clinical techniques, clinical applications
Exploring Cognitive Functions in Babies, Children & Adults with Near Infrared Spectroscopy
Authors: Mark H. Shalinsky, Ioulia Kovelman, Melody S. Berens, Laura-Ann Petitto.
Institutions: University of Michigan, Ann Arbor, University of Toronto Scarborough.
An explosion of functional Near Infrared Spectroscopy (fNIRS) studies investigating cortical activation in relation to higher cognitive processes, such as language1,2,3,4,5,6,7,8,9,10, memory11, and attention12, is underway worldwide, involving adults, children and infants3,4,13,14,15,16,17,18,19 with typical and atypical cognition20,21,22. The contemporary challenge of using fNIRS for cognitive neuroscience is to achieve systematic analyses of data such that they are universally interpretable23,24,25,26, and thus may advance important scientific questions about the functional organization and neural systems underlying human higher cognition. Other neuroimaging technologies sacrifice either temporal or spatial resolution: event-related potentials and magnetoencephalography (ERP and MEG) have excellent temporal resolution, whereas positron emission tomography and functional magnetic resonance imaging (PET and fMRI) have better spatial resolution. Using non-ionizing wavelengths of light in the near-infrared range (700-1000 nm), where deoxy-hemoglobin preferentially absorbs light near 680 nm and oxy-hemoglobin near 830 nm (indeed, the very wavelengths hardwired into the fNIRS Hitachi ETG-4000 system illustrated here), fNIRS is well suited for studies of higher cognition because it has both good temporal resolution (~5 s) without the use of radiation and good spatial resolution (~4 cm depth), and does not require participants to be in an enclosed structure27,28. Participants' cortical activity can be assessed while they are comfortably seated in an ordinary chair (adults, children) or even in mom's lap (infants). Notably, fNIRS is uniquely portable (the size of a desktop computer), virtually silent, and can tolerate a participant's subtle movement.
This is particularly valuable for the neural study of human language, which necessarily includes movement of the mouth in speech production or of the hands in sign language. The hemodynamic response is localized by an array of laser emitters and detectors: emitters deliver a known intensity of non-ionizing light, while detectors measure the amount reflected back from the cortical surface. The closer together the optodes, the greater the spatial resolution; the further apart, the greater the depth of penetration. For optimal penetration and resolution with the fNIRS Hitachi ETG-4000 system, the optode spacing is set to 2 cm. Our goal is to demonstrate our method of acquiring and analyzing fNIRS data to help standardize the field and give different fNIRS labs worldwide a common background.
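The conversion from detected light to hemoglobin concentrations, which any fNIRS analysis pipeline must perform, can be sketched via the modified Beer-Lambert law: optical-density changes at the two wavelengths are solved for oxy- and deoxy-hemoglobin concentration changes. The extinction coefficients, pathlength factor, and input values below are illustrative placeholders, not calibrated ETG-4000 constants.

```python
# Sketch of the modified Beer-Lambert law step common in fNIRS analysis.
# All numeric constants here are illustrative assumptions, not calibrated values.
import numpy as np

# rows: wavelengths (~680 nm, ~830 nm); columns: [HbO2, HHb] extinction coefficients
E = np.array([[0.3, 2.1],
              [1.0, 0.8]])
DPF = 6.0          # differential pathlength factor (dimensionless, assumed)
distance_cm = 2.0  # emitter-detector separation, matching the 2 cm in the text

def od_to_hb(delta_od):
    """Solve delta_OD = (E * d * DPF) @ delta_c for concentration changes."""
    return np.linalg.solve(E * distance_cm * DPF, np.asarray(delta_od, float))

d_hbo, d_hhb = od_to_hb([0.01, 0.02])  # relative HbO2 and HHb changes
```

The two-wavelength design works precisely because the two chromophores absorb differently at each wavelength, making the 2x2 system invertible.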
Neuroscience, Issue 29, infant, child, Near Infrared Spectroscopy, fNIRS, optical tomography, cognitive neuroscience, psychology, brain, developmental cognitive neuroscience, analysis
Recording Human Electrocorticographic (ECoG) Signals for Neuroscientific Research and Real-time Functional Cortical Mapping
Authors: N. Jeremy Hill, Disha Gupta, Peter Brunner, Aysegul Gunduz, Matthew A. Adamo, Anthony Ritaccio, Gerwin Schalk.
Institutions: New York State Department of Health, Albany Medical College, Washington University, Rensselaer Polytechnic Institute, State University of New York at Albany, University of Texas at El Paso.
Neuroimaging studies of human cognitive, sensory, and motor processes are usually based on noninvasive techniques such as electroencephalography (EEG), magnetoencephalography (MEG), or functional magnetic resonance imaging (fMRI). These techniques have either inherently low temporal or low spatial resolution, and suffer from low signal-to-noise ratio and/or poor high-frequency sensitivity. Thus, they are suboptimal for exploring the short-lived spatio-temporal dynamics of many of the underlying brain processes. In contrast, the invasive technique of electrocorticography (ECoG) provides brain signals that have an exceptionally high signal-to-noise ratio, less susceptibility to artifacts than EEG, and high spatial and temporal resolution (<1 cm and <1 ms, respectively). ECoG involves measurement of electrical brain signals using electrodes that are implanted subdurally on the surface of the brain. Recent studies have shown that ECoG amplitudes in certain frequency bands carry substantial information about task-related activity, such as motor execution and planning1, auditory processing2 and visual-spatial attention3. Most of this information is captured in the high gamma range (around 70-110 Hz). Thus, gamma activity has been proposed as a robust and general indicator of local cortical function1-5. ECoG can also reveal functional connectivity and resolve finer task-related spatial-temporal dynamics, thereby advancing our understanding of large-scale cortical processes. It has especially proven useful for advancing brain-computer interfacing (BCI) technology for decoding a user's intentions to enhance or improve communication6 and control7. Nevertheless, human ECoG data are often hard to obtain because of the risks and limitations of the invasive procedures involved, and the need to record within the constraints of clinical settings. Still, clinical monitoring to localize epileptic foci offers a unique and valuable opportunity to collect human ECoG data.
We describe our methods for recording ECoG, and demonstrate how to use these signals for important real-time applications such as clinical mapping and brain-computer interfacing. Our example uses the BCI2000 software platform8,9 and the SIGFRIED10 method, an application for real-time mapping of brain functions. This procedure yields information that clinicians can subsequently use to guide the complex and laborious process of functional mapping by electrical stimulation. Prerequisites and Planning: Patients with drug-resistant partial epilepsy may be candidates for resective surgery of an epileptic focus to minimize the frequency of seizures. Prior to resection, the patients undergo monitoring using subdural electrodes for two purposes: first, to localize the epileptic focus, and second, to identify nearby critical brain areas (i.e., eloquent cortex) where resection could result in long-term functional deficits. To implant the electrodes, a craniotomy is performed to open the skull. Then, electrode grids and/or strips are placed on the cortex, usually beneath the dura. A typical grid has a set of 8 x 8 platinum-iridium electrodes of 4 mm diameter (2.3 mm exposed surface) embedded in silicone with an inter-electrode distance of 1 cm. A strip typically contains 4 or 6 such electrodes in a single line. The locations for these grids/strips are planned by a team of neurologists and neurosurgeons, and are based on previous EEG monitoring, on a structural MRI of the patient's brain, and on relevant factors of the patient's history. Continuous recording over a period of 5-12 days serves to localize epileptic foci, and electrical stimulation via the implanted electrodes allows clinicians to map eloquent cortex. At the end of the monitoring period, explantation of the electrodes and therapeutic resection are performed together in one procedure.
In addition to its primary clinical purpose, invasive monitoring also provides a unique opportunity to acquire human ECoG data for neuroscientific research. The decision to include a prospective patient in the research is based on the planned location of their electrodes, on the patient's performance scores on neuropsychological assessments, and on their informed consent, which is predicated on their understanding that participation in research is optional and is not related to their treatment. As with all research involving human subjects, the research protocol must be approved by the hospital's institutional review board. The decision to perform individual experimental tasks is made day-by-day, and is contingent on the patient's endurance and willingness to participate. Some or all of the experiments may be prevented by problems with the clinical state of the patient, such as post-operative facial swelling, temporary aphasia, frequent seizures, post-ictal fatigue and confusion, and more general pain or discomfort. At the Epilepsy Monitoring Unit at Albany Medical Center in Albany, New York, clinical monitoring is implemented around the clock using a 192-channel Nihon-Kohden Neurofax monitoring system. Research recordings are made in collaboration with the Wadsworth Center of the New York State Department of Health in Albany. Signals from the ECoG electrodes are fed simultaneously to the research and the clinical systems via splitter connectors. To ensure that the clinical and research systems do not interfere with each other, the two systems typically use separate grounds. In fact, an epidural strip of electrodes is sometimes implanted to provide a ground for the clinical system. For both the research and clinical recording systems, the grounding electrode is chosen to be distant from the predicted epileptic focus and from cortical areas of interest for the research.
Our research system consists of eight synchronized 16-channel g.USBamp amplifier/digitizer units (g.tec, Graz, Austria). These were chosen because they are safety-rated and FDA-approved for invasive recordings, they have a very low noise floor in the high-frequency range in which the signals of interest are found, and they come with an SDK that allows them to be integrated with custom-written research software. In order to capture the high-gamma signal accurately, we acquire signals at a 1200 Hz sampling rate, considerably higher than that of the typical EEG experiment or that of many clinical monitoring systems. A built-in low-pass filter automatically prevents aliasing of signals higher than the digitizer can capture. The patient's eye gaze is tracked using a monitor with a built-in Tobii T-60 eye-tracking system (Tobii Tech., Stockholm, Sweden). Additional accessories such as a joystick, Bluetooth Wiimote (Nintendo Co.), data-glove (5th Dimension Technologies), keyboard, microphone, headphones, or video camera are connected depending on the requirements of the particular experiment. Data collection, stimulus presentation, synchronization with the different input/output accessories, and real-time analysis and visualization are accomplished using our BCI2000 software8,9. BCI2000 is a freely available general-purpose software system for real-time biosignal data acquisition, processing and feedback. It includes an array of pre-built modules that can be flexibly configured for many different purposes, and that can be extended by researchers' own code in C++, MATLAB or Python.
BCI2000 consists of four modules that communicate with each other via a network-capable protocol: a Source module that handles the acquisition of brain signals from one of 19 different hardware systems from different manufacturers; a Signal Processing module that extracts relevant ECoG features and translates them into output signals; an Application module that delivers stimuli and feedback to the subject; and an Operator module that provides a graphical interface to the investigator. A number of different experiments may be conducted with any given patient. The priority of experiments will be determined by the location of the particular patient's electrodes. However, we usually begin our experimentation using the SIGFRIED (SIGnal modeling For Realtime Identification and Event Detection) mapping method, which detects and displays significant task-related activity in real time. The resulting functional map allows us to further tailor subsequent experimental protocols and may also prove a useful starting point for traditional mapping by electrocortical stimulation (ECS). Although ECS mapping remains the gold standard for predicting the clinical outcome of resection, the process of ECS mapping is time-consuming and also has other problems, such as after-discharges or seizures. Thus, a passive functional mapping technique may prove valuable in providing an initial estimate of the locus of eloquent cortex, which may then be confirmed and refined by ECS. The results from our passive SIGFRIED mapping technique have been shown to exhibit substantial concurrence with the results derived using ECS mapping10. The protocol described in this paper establishes a general methodology for gathering human ECoG data, before proceeding to illustrate how experiments can be initiated using the BCI2000 software platform. Finally, as a specific example, we describe how to perform passive functional mapping using the BCI2000-based SIGFRIED system.
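As a toy illustration of the high-gamma band power on which mapping of task-related activity relies, the band can be extracted from a simulated channel with a plain FFT. This is a didactic sketch, not the BCI2000/SIGFRIED pipeline; the simulated signal and band edges are assumptions.

```python
# Illustrative high-gamma (70-110 Hz) band-power estimate from a simulated
# ECoG channel using a numpy FFT; not the actual SIGFRIED method.
import numpy as np

fs = 1200                      # Hz, matching the sampling rate in the text
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
# simulated signal: a 90 Hz "task-related" burst buried in broadband noise
x = 0.5 * np.sin(2 * np.pi * 90 * t) + 0.1 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean power of FFT bins whose frequency falls in [lo, hi)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

# a task-related high-gamma increase shows up as elevated 70-110 Hz power
print(band_power(x, fs, 70, 110) > band_power(x, fs, 20, 50))  # → True
```

Real-time systems use sliding windows or autoregressive spectral estimates instead of one long FFT, but the band-power comparison is the same in spirit.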
Neuroscience, Issue 64, electrocorticography, brain-computer interfacing, functional brain mapping, SIGFRIED, BCI2000, epilepsy monitoring, magnetic resonance imaging, MRI
Measurement Of Neuromagnetic Brain Function In Pre-school Children With Custom Sized MEG
Authors: Graciela Tesan, Blake W. Johnson, Melanie Reid, Rosalind Thornton, Stephen Crain.
Institutions: Macquarie University.
Magnetoencephalography is a technique that detects magnetic fields associated with cortical activity [1]. The electrophysiological activity of the brain generates electric fields, which can be recorded using electroencephalography (EEG), and their concomitant magnetic fields, detected by MEG. MEG signals are detected by specialized sensors known as superconducting quantum interference devices (SQUIDs). Superconducting sensors require cooling with liquid helium at -270 °C. They are contained inside a vacuum-insulated helmet called a dewar, which is filled with liquid helium. SQUIDs are placed in fixed positions inside the helmet dewar in the helium coolant, and a subject's head is placed inside the helmet dewar for MEG measurements. The helmet dewar must be sized to satisfy opposing constraints. Clearly, it must be large enough to fit most or all of the heads in the population that will be studied. However, the helmet must also be small enough to keep most of the SQUID sensors within range of the tiny cerebral fields that they are to measure. Conventional whole-head MEG systems are designed to accommodate more than 90% of adult heads. However, adult systems are not well suited for measuring brain function in pre-school children, whose heads have a radius several cm smaller than adults'. The KIT-Macquarie Brain Research Laboratory at Macquarie University uses a MEG system custom sized to fit the heads of pre-school children. This child system has 64 first-order axial gradiometers with a 50 mm baseline [2] and is contained inside a magnetically-shielded room (MSR) together with a conventional adult-sized MEG system [3,4]. There are three main advantages of the customized helmet dewar for studying children. First, the smaller radius of the sensor configuration brings the SQUID sensors into range of the neuromagnetic signals of children's heads. Second, the smaller helmet allows full insertion of a child's head into the dewar.
Full insertion is prevented in adult dewar helmets because of the smaller crown-to-shoulder distance in children. These two factors are fundamental in recording brain activity using MEG because neuromagnetic signals attenuate rapidly with distance. Third, the customized child helmet aids in the symmetric positioning of the head and limits the freedom of movement of the child's head within the dewar. When used with a protocol that aligns the requirements of data collection with the motivational and behavioral capacities of children, these features significantly facilitate setup, positioning, and measurement of MEG signals.
Neuroscience, Issue 36, Magnetoencephalography, Pediatrics, Brain Mapping, Language, Brain Development, Cognitive Neuroscience, Language Acquisition, Linguistics
Experimental Protocol for Manipulating Plant-induced Soil Heterogeneity
Authors: Angela J. Brandt, Gaston A. del Pino, Jean H. Burns.
Institutions: Case Western Reserve University.
Coexistence theory has often treated environmental heterogeneity as being independent of the community composition; however, biotic feedbacks such as plant-soil feedbacks (PSF) have large effects on plant performance and create environmental heterogeneity that depends on the community composition. Understanding the importance of PSF for plant community assembly requires understanding the role of heterogeneity in PSF, in addition to mean PSF effects. Here, we describe a protocol for manipulating plant-induced soil heterogeneity. Two example experiments are presented: (1) a field experiment with a 6-patch grid of soils to measure plant population responses and (2) a greenhouse experiment with 2-patch soils to measure individual plant responses. Soils can be collected from the zone of root influence (soils from the rhizosphere and directly adjacent to the rhizosphere) of conspecific and heterospecific plant species in the field. Replicate collections are used to avoid pseudoreplicating soil samples. These soils are then placed into separate patches for heterogeneous treatments or mixed for a homogenized treatment. Care should be taken to ensure that heterogeneous and homogenized treatments experience the same degree of soil disturbance. Plants can then be placed in these soil treatments to determine the effect of plant-induced soil heterogeneity on plant performance. We demonstrate that plant-induced heterogeneity results in different outcomes than predicted by traditional coexistence models, perhaps because of the dynamic nature of these feedbacks. Theory that incorporates environmental heterogeneity influenced by the assembling community, as well as additional empirical work, is needed to determine when heterogeneity intrinsic to the assembling community will result in different assembly outcomes compared with heterogeneity extrinsic to the community composition.
Environmental Sciences, Issue 85, Coexistence, community assembly, environmental drivers, plant-soil feedback, soil heterogeneity, soil microbial communities, soil patch
Isolation and Chemical Characterization of Lipid A from Gram-negative Bacteria
Authors: Jeremy C. Henderson, John P. O'Brien, Jennifer S. Brodbelt, M. Stephen Trent.
Institutions: The University of Texas at Austin, The University of Texas at Austin, The University of Texas at Austin.
Lipopolysaccharide (LPS) is the major cell surface molecule of gram-negative bacteria, deposited on the outer leaflet of the outer membrane bilayer. LPS can be subdivided into three domains: the distal O-polysaccharide, a core oligosaccharide, and the lipid A domain consisting of a lipid A molecular species and 3-deoxy-D-manno-oct-2-ulosonic acid residues (Kdo). The lipid A domain is the only component essential for bacterial cell survival. Following its synthesis, lipid A is chemically modified in response to environmental stresses such as pH or temperature, to promote resistance to antibiotic compounds, and to evade recognition by mediators of the host innate immune response. The following protocol details the small- and large-scale isolation of lipid A from gram-negative bacteria. Isolated material is then chemically characterized by thin layer chromatography (TLC) or mass spectrometry (MS). In addition to matrix-assisted laser desorption/ionization-time of flight (MALDI-TOF) MS, we also describe tandem MS protocols for analyzing lipid A molecular species using electrospray ionization (ESI) coupled to collision-induced dissociation (CID) and newly employed ultraviolet photodissociation (UVPD) methods. Our MS protocols allow for unequivocal determination of chemical structure, paramount to characterization of lipid A molecules that contain unique or novel chemical modifications. We also describe the radioisotopic labeling, and subsequent isolation, of lipid A from bacterial cells for analysis by TLC. Relative to MS-based protocols, TLC provides a more economical and rapid characterization method, but cannot be used to unambiguously assign lipid A chemical structures without standards of known chemical structure.
Over the last two decades, isolation and characterization of lipid A have led to numerous exciting discoveries that have improved our understanding of the physiology of gram-negative bacteria, mechanisms of antibiotic resistance, and the human innate immune response, and have provided many new targets in the development of antibacterial compounds.
Chemistry, Issue 79, Membrane Lipids, Toll-Like Receptors, Endotoxins, Glycolipids, Lipopolysaccharides, Lipid A, Microbiology, Lipids, lipid A, Bligh-Dyer, thin layer chromatography (TLC), lipopolysaccharide, mass spectrometry, Collision Induced Dissociation (CID), Photodissociation (PD)
Developing Neuroimaging Phenotypes of the Default Mode Network in PTSD: Integrating the Resting State, Working Memory, and Structural Connectivity
Authors: Noah S. Philip, S. Louisa Carpenter, Lawrence H. Sweet.
Institutions: Alpert Medical School, Brown University, University of Georgia.
Complementary structural and functional neuroimaging techniques used to examine the Default Mode Network (DMN) could potentially improve assessments of psychiatric illness severity and provide added validity to the clinical diagnostic process. Recent neuroimaging research suggests that DMN processes may be disrupted in a number of stress-related psychiatric illnesses, such as posttraumatic stress disorder (PTSD). Although specific DMN functions remain under investigation, it is generally thought to be involved in introspection and self-processing. In healthy individuals it exhibits greatest activity during periods of rest, with less activity, observed as deactivation, during cognitive tasks, e.g., working memory. This network consists of the medial prefrontal cortex, posterior cingulate cortex/precuneus, lateral parietal cortices and medial temporal regions. Multiple functional and structural imaging approaches have been developed to study the DMN. These have unprecedented potential to further the understanding of the function and dysfunction of this network. Functional approaches, such as the evaluation of resting state connectivity and task-induced deactivation, have excellent potential to identify targeted neurocognitive and neuroaffective (functional) diagnostic markers and may indicate illness severity and prognosis with increased accuracy or specificity. Structural approaches, such as evaluation of morphometry and connectivity, may provide unique markers of etiology and long-term outcomes. Combined, functional and structural methods provide strong multimodal, complementary and synergistic approaches to develop valid DMN-based imaging phenotypes in stress-related psychiatric conditions. This protocol aims to integrate these methods to investigate DMN structure and function in PTSD, relating findings to illness severity and relevant clinical factors.
Medicine, Issue 89, default mode network, neuroimaging, functional magnetic resonance imaging, diffusion tensor imaging, structural connectivity, functional connectivity, posttraumatic stress disorder
Assessment of Cerebral Lateralization in Children using Functional Transcranial Doppler Ultrasound (fTCD)
Authors: Dorothy V. M. Bishop, Nicholas A. Badcock, Georgina Holt.
Institutions: University of Oxford.
There are many unanswered questions about cerebral lateralization. In particular, it remains unclear which aspects of language and nonverbal ability are lateralized, whether there are any disadvantages associated with atypical patterns of cerebral lateralization, and whether cerebral lateralization develops with age. In the past, researchers interested in these questions tended to use handedness as a proxy measure for cerebral lateralization, but this is unsatisfactory because handedness is only a weak and indirect indicator of laterality of cognitive functions1. Other methods, such as fMRI, are expensive for large-scale studies, and not always feasible with children2. Here we describe the use of functional transcranial Doppler ultrasound (fTCD) as a cost-effective, non-invasive and reliable method for assessing cerebral lateralization. The procedure involves measuring blood flow in the middle cerebral artery via an ultrasound probe placed just in front of the ear. Our work builds on that of Rune Aaslid, who co-introduced TCD in 1982, and of Stefan Knecht, Michael Deppe and their colleagues at the University of Münster, who pioneered the use of simultaneous measurements of left and right middle cerebral artery blood flow and devised a method of correcting for heartbeat activity. This made it possible to see a clear increase in left-sided blood flow during language generation, with lateralization agreeing well with that obtained using other methods3. The middle cerebral artery has a very wide vascular territory (see Figure 1) and the method does not provide useful information about localization within a hemisphere. Our experience suggests it is particularly sensitive to tasks that involve explicit or implicit speech production. The 'gold standard' task is a word generation task (e.g. think of as many words as you can that begin with the letter 'B')4, but this is not suitable for young children and others with limited literacy skills. 
Compared with other brain imaging methods, fTCD is relatively unaffected by movement artefacts from speaking, and so we are able to get a reliable result from tasks that involve describing pictures aloud5,6. Accordingly, we have developed a child-friendly task that involves looking at video-clips that tell a story, and then describing what was seen.
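The lateralization measure itself reduces to comparing task-related changes in left versus right middle cerebral artery blood flow velocity. The sketch below shows only that core comparison, under our own simplifying assumptions (no heartbeat correction or epoch averaging, and fabricated velocity values); the published method is considerably more careful:

```python
def laterality_index(left_cbfv, right_cbfv, baseline, task):
    """
    Laterality index from bilateral cerebral blood flow velocity (CBFV).
    Each signal is expressed as % change relative to its own baseline-period
    mean, then the left-right difference is averaged over the task window.
    Positive values suggest left lateralization.
    baseline, task: (start, end) sample indices.
    """
    def pct_change(signal):
        b0, b1 = baseline
        base = sum(signal[b0:b1]) / (b1 - b0)
        return [100.0 * (v - base) / base for v in signal]

    left_pct = pct_change(left_cbfv)
    right_pct = pct_change(right_cbfv)
    t0, t1 = task
    diffs = [l - r for l, r in zip(left_pct[t0:t1], right_pct[t0:t1])]
    return sum(diffs) / len(diffs)

# Illustrative velocities (cm/s): left MCA flow rises more during the task.
left  = [50, 50, 50, 50, 55, 56, 55, 56]
right = [50, 50, 50, 50, 51, 52, 51, 52]
li = laterality_index(left, right, baseline=(0, 4), task=(4, 8))
```

Expressing each side as percent change from its own baseline is what makes the left-right comparison insensitive to absolute flow differences between the two probes.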
Neuroscience, Issue 43, functional transcranial Doppler ultrasound, cerebral lateralization, language, child
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Authors: Marcus Cheetham, Lutz Jancke.
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis1,2 proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings3,4,5,6. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP)7. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Cortical Source Analysis of High-Density EEG Recordings in Children
Authors: Joe Bathelt, Helen O'Reilly, Michelle de Haan.
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities like structural MRI that provide high spatial resolution to overcome this constraint1. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited2, because the composition and spatial configuration of head tissues changes dramatically over development3.  In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age specific head models to reconstruct the cortical generators of high density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel level and source analysis. 
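The keyword list mentions minimum-norm estimation, which reconstructs source amplitudes x from sensor data y and a leadfield L via x = L^T (L L^T + lambda*I)^-1 y. Below is a toy sketch of that inverse with two sensors and three candidate sources; the leadfield values are fabricated, and real analyses use MRI-derived head models with thousands of sources:

```python
def minimum_norm_estimate(L, y, lam):
    """
    Regularized minimum-norm source estimate: x = L^T (L L^T + lam*I)^(-1) y,
    for an n_sensors x n_sources leadfield L. Here n_sensors = 2, so the
    2x2 matrix inverse can be written out explicitly.
    """
    n_src = len(L[0])
    # G = L L^T + lam*I  (a 2x2 matrix)
    g = [[sum(L[i][k] * L[j][k] for k in range(n_src)) + (lam if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    g_inv = [[ g[1][1] / det, -g[0][1] / det],
             [-g[1][0] / det,  g[0][0] / det]]
    # w = G^{-1} y, then x = L^T w
    w = [g_inv[i][0] * y[0] + g_inv[i][1] * y[1] for i in range(2)]
    return [L[0][k] * w[0] + L[1][k] * w[1] for k in range(n_src)]

# Toy leadfield: 2 sensors, 3 candidate sources (values are illustrative).
L = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
y = [1.0, 0.0]          # measured sensor data
x = minimum_norm_estimate(L, y, lam=0.1)
```

The estimate assigns the largest amplitude to the source that best explains the sensor pattern, which is why the accuracy of the head model (and hence of L) matters so much for pediatric data.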
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials 
Transcranial Direct Current Stimulation and Simultaneous Functional Magnetic Resonance Imaging
Authors: Marcus Meinzer, Robert Lindenberg, Robert Darkow, Lena Ulm, David Copland, Agnes Flöel.
Institutions: University of Queensland, Charité Universitätsmedizin.
Transcranial direct current stimulation (tDCS) is a noninvasive brain stimulation technique that uses weak electrical currents administered to the scalp to manipulate cortical excitability and, consequently, behavior and brain function. In the last decade, numerous studies have addressed short-term and long-term effects of tDCS on different measures of behavioral performance during motor and cognitive tasks, both in healthy individuals and in a number of different patient populations. So far, however, little is known about the neural underpinnings of tDCS-action in humans with regard to large-scale brain networks. This issue can be addressed by combining tDCS with functional brain imaging techniques like functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). In particular, fMRI is the most widely used brain imaging technique to investigate the neural mechanisms underlying cognition and motor functions. Application of tDCS during fMRI allows analysis of the neural mechanisms underlying behavioral tDCS effects with high spatial resolution across the entire brain. Recent studies using this technique identified stimulation induced changes in task-related functional brain activity at the stimulation site and also in more distant brain regions, which were associated with behavioral improvement. In addition, tDCS administered during resting-state fMRI allowed identification of widespread changes in whole brain functional connectivity. Future studies using this combined protocol should yield new insights into the mechanisms of tDCS action in health and disease and new options for more targeted application of tDCS in research and clinical settings. The present manuscript describes this novel technique in a step-by-step fashion, with a focus on technical aspects of tDCS administered during fMRI.
Behavior, Issue 86, noninvasive brain stimulation, transcranial direct current stimulation (tDCS), anodal stimulation (atDCS), cathodal stimulation (ctDCS), neuromodulation, task-related fMRI, resting-state fMRI, functional magnetic resonance imaging (fMRI), electroencephalography (EEG), inferior frontal gyrus (IFG)
Investigating Protein-protein Interactions in Live Cells Using Bioluminescence Resonance Energy Transfer
Authors: Pelagia Deriziotis, Sarah A. Graham, Sara B. Estruch, Simon E. Fisher.
Institutions: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition and Behaviour.
Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a 'donor' luciferase enzyme to an 'acceptor' fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.
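The readout of a BRET assay is the ratio of acceptor (YFP) emission to donor (luciferase) emission, corrected for donor bleed-through into the acceptor channel. A minimal sketch of that calculation follows; the correction convention and the luminometer counts are illustrative assumptions, not the authors' exact formula:

```python
def bret_ratio(yfp_emission, luc_emission):
    """Raw BRET ratio: acceptor (YFP) emission over donor (luciferase) emission."""
    return yfp_emission / luc_emission

def corrected_bret(sample_yfp, sample_luc, donor_only_yfp, donor_only_luc):
    """
    Corrected BRET ratio: the ratio measured from a donor-only control
    (luciferase fusion expressed alone) is subtracted to remove
    bleed-through of donor emission into the acceptor channel.
    """
    return (bret_ratio(sample_yfp, sample_luc)
            - bret_ratio(donor_only_yfp, donor_only_luc))

# Illustrative (fabricated) luminometer counts:
interacting = corrected_bret(sample_yfp=30000, sample_luc=100000,
                             donor_only_yfp=8000, donor_only_luc=100000)
non_interacting = corrected_bret(sample_yfp=8500, sample_luc=100000,
                                 donor_only_yfp=8000, donor_only_luc=100000)
```

A corrected ratio well above the donor-only baseline is the signature that the two fusion proteins are close enough for energy transfer.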
Cellular Biology, Issue 87, Protein-protein interactions, Bioluminescence Resonance Energy Transfer, Live cell, Transfection, Luciferase, Yellow Fluorescent Protein, Mutations
Making Sense of Listening: The IMAP Test Battery
Authors: Johanna G. Barry, Melanie A. Ferguson, David R. Moore.
Institutions: MRC Institute of Hearing Research, National Biomedical Research Unit in Hearing.
The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child with normal hearing is referred to a health professional because of unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally-hearing children, aged 6-11 years, from across the UK. 
Since its development, it has been successfully used in a number of large-scale studies in both the UK and the USA. IMAP provides measures for separating out sensory from cognitive contributions to hearing. It further limits confounds due to procedural effects by presenting tests in a child-friendly game format. Stimulus generation, management of test protocols, and control of test presentation are mediated by the IHR-STAR software platform. This provides a standardized methodology for a range of applications and ensures replicable procedures across testers. IHR-STAR provides a flexible, user-programmable environment that currently has additional applications for hearing screening, mapping cochlear implant electrodes, and academic research or teaching.
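The abstract does not specify IMAP's adaptive tracking rules; a common choice in psychoacoustic test batteries is a 2-down/1-up staircase, sketched here purely as an illustration of how such a procedure adapts difficulty trial by trial:

```python
def two_down_one_up(responses, start_level, step):
    """
    Track the stimulus level over a 2-down/1-up adaptive staircase,
    which converges on roughly the 70.7%-correct point. `responses` is
    the sequence of correct (True) / incorrect (False) answers; the
    level decreases (the task gets harder) after two consecutive correct
    answers and increases after any error.
    """
    level = start_level
    levels = [level]
    consecutive_correct = 0
    for correct in responses:
        if correct:
            consecutive_correct += 1
            if consecutive_correct == 2:
                level -= step
                consecutive_correct = 0
        else:
            level += step
            consecutive_correct = 0
        levels.append(level)
    return levels

# Illustrative run: four correct answers, then an error.
track = two_down_one_up([True, True, True, True, False], start_level=60, step=5)
```

Threshold is typically estimated from the levels at which the track reverses direction, which is the kind of bookkeeping a platform like IHR-STAR standardizes across testers.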
Neuroscience, Issue 44, Listening skills, auditory processing, auditory psychophysics, clinical assessment, child-friendly testing
Mutagenesis and Functional Selection Protocols for Directed Evolution of Proteins in E. coli
Authors: Chris Troll, David Alexander, Jennifer Allen, Jacob Marquette, Manel Camps.
Institutions: University of California Santa Cruz - UCSC.
The efficient generation of genetic diversity represents an invaluable molecular tool that can be used to label DNA synthesis, to create unique molecular signatures, or to evolve proteins in the laboratory. Here, we present a protocol that allows the generation of large (>10^11) mutant libraries for a given target sequence. This method is based on replication of a ColE1 plasmid encoding the desired sequence by a low-fidelity variant of DNA polymerase I (LF-Pol I). The target plasmid is transformed into a mutator strain of E. coli and plated on solid media, yielding between 0.2 and 1 mutations/kb, depending on the location of the target gene. Higher mutation frequencies are achieved by iterating this process of mutagenesis. Compared to alternative methods of mutagenesis, our protocol stands out for its simplicity, as no cloning or PCR is involved. Thus, our method is ideal for mutational labeling of plasmids or other Pol I templates, or for exploring large sections of sequence space for the evolution of activities not present in the original target. The tight spatial control that PCR- or randomized oligonucleotide-based methods offer can also be achieved through subsequent cloning of specific sections of the library. Here we provide protocols showing how to create a random mutant library and how to establish drug-based selections in E. coli to identify mutants exhibiting new biochemical activities.
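The reported rate of 0.2-1 mutations/kb translates directly into the fraction of library clones expected to carry at least one mutation, if mutations are assumed to accrue independently (a Poisson assumption of ours, not stated in the protocol):

```python
import math

def mutation_load(rate_per_kb, target_kb):
    """Expected mutations per target, assuming independent (Poisson) accrual."""
    return rate_per_kb * target_kb

def fraction_mutated(rate_per_kb, target_kb):
    """Fraction of clones carrying at least one mutation: 1 - P(0) = 1 - e^-lambda."""
    return 1.0 - math.exp(-mutation_load(rate_per_kb, target_kb))

# A hypothetical 1 kb target at the protocol's reported rate range:
low = fraction_mutated(0.2, 1.0)    # ~18% of clones carry a mutation
high = fraction_mutated(1.0, 1.0)   # ~63% of clones carry a mutation
```

This also makes clear why iterating the mutagenesis is useful: each round multiplies the expected mutation load, raising the mutated fraction toward 1.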
Genetics, Issue 49, random mutagenesis, directed evolution, LB agar drug gradient, bacterial complementation, ColE1 plasmid, DNA polymerase I, replication fidelity, genetic adaptation, antimicrobials, methylating agents
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Authors: Justen Manasa, Siva Danaviah, Sureshnee Pillay, Prevashinee Padayachee, Hloniphile Mthiyane, Charity Mkhize, Richard John Lessells, Christopher Seebregts, Tobias F. Rinke de Wit, Johannes Viljoen, David Katzenstein, Tulio De Oliveira.
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance to track transmission of viral strains already resistant to ART is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml. The sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method described here include the fact that it is not automated and that it also failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
Isolation of Fidelity Variants of RNA Viruses and Characterization of Virus Mutation Frequency
Authors: Stéphanie Beaucourt, Antonio V. Bordería, Lark L. Coffey, Nina F. Gnädig, Marta Sanz-Ramos, Yasnee Beeharry, Marco Vignuzzi.
Institutions: Institut Pasteur.
RNA viruses use RNA dependent RNA polymerases to replicate their genomes. The intrinsically high error rate of these enzymes is a large contributor to the generation of extreme population diversity that facilitates virus adaptation and evolution. Increasing evidence shows that the intrinsic error rates, and the resulting mutation frequencies, of RNA viruses can be modulated by subtle amino acid changes to the viral polymerase. Although biochemical assays exist for some viral RNA polymerases that permit quantitative measure of incorporation fidelity, here we describe a simple method of measuring mutation frequencies of RNA viruses that has proven to be as accurate as biochemical approaches in identifying fidelity altering mutations. The approach uses conventional virological and sequencing techniques that can be performed in most biology laboratories. Based on our experience with a number of different viruses, we have identified the key steps that must be optimized to increase the likelihood of isolating fidelity variants and generating data of statistical significance. The isolation and characterization of fidelity altering mutations can provide new insights into polymerase structure and function1-3. Furthermore, these fidelity variants can be useful tools in characterizing mechanisms of virus adaptation and evolution4-7.
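Once clones have been sequenced, the mutation frequency reduces to a ratio of mutations observed to nucleotides sequenced, often reported per 10^4 nucleotides. A minimal sketch follows; the per-10^4 convention and the counts below are illustrative assumptions, not this protocol's data:

```python
def mutation_frequency(total_mutations, clones_sequenced, nt_per_clone):
    """Mutation frequency expressed as mutations per 10^4 nucleotides sequenced."""
    total_nt = clones_sequenced * nt_per_clone
    return 1e4 * total_mutations / total_nt

# Illustrative numbers: 24 mutations found across 96 clones of an 800 nt amplicon.
freq = mutation_frequency(total_mutations=24, clones_sequenced=96, nt_per_clone=800)
```

Comparing this frequency between a candidate fidelity variant and wild-type virus, with enough clones to reach statistical significance, is what identifies fidelity-altering polymerase mutations.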
Immunology, Issue 52, Polymerase fidelity, RNA virus, mutation frequency, mutagen, RNA polymerase, viral evolution
Layers of Symbiosis - Visualizing the Termite Hindgut Microbial Community
Authors: Jared Leadbetter.
Institutions: California Institute of Technology - Caltech.
Jared Leadbetter takes us for a nature walk through the diversity of life resident in the termite hindgut - a microenvironment containing 250 different species found nowhere else on Earth. Jared reveals that the symbiosis exhibited by this system is multi-layered and involves not only a relationship between the termite and its gut inhabitants, but also involves a complex web of symbiosis among the gut microbes themselves.
Microbiology, issue 4, microbial community, symbiosis, hindgut
Characterizing Herbivore Resistance Mechanisms: Spittlebugs on Brachiaria spp. as an Example
Authors: Soroush Parsa, Guillermo Sotelo, Cesar Cardona.
Institutions: CIAT.
Plants can resist herbivore damage through three broad mechanisms: antixenosis, antibiosis and tolerance1. Antixenosis is the degree to which the plant is avoided when the herbivore is able to select other plants2. Antibiosis is the degree to which the plant affects the fitness of the herbivore feeding on it1. Tolerance is the degree to which the plant can withstand or repair damage caused by the herbivore, without compromising the herbivore's growth and reproduction1. The durability of herbivore resistance in an agricultural setting depends to a great extent on the resistance mechanism favored during crop breeding efforts3. We demonstrate a no-choice experiment designed to estimate the relative contributions of antibiosis and tolerance to spittlebug resistance in Brachiaria spp. Several species of African grasses of the genus Brachiaria are valuable forage and pasture plants in the Neotropics, but they can be severely challenged by several native species of spittlebugs (Hemiptera: Cercopidae)4. To assess their resistance to spittlebugs, plants are vegetatively propagated by stem cuttings and allowed to grow for approximately one month, allowing the growth of superficial roots on which spittlebugs can feed. At that point, each test plant is individually challenged with six spittlebug eggs near hatching. Infestations are allowed to progress for one month before evaluating plant damage and insect survival. Scoring plant damage provides an estimate of tolerance, while scoring insect survival provides an estimate of antibiosis. This protocol has facilitated our plant breeding objective to enhance spittlebug resistance in commercial brachiariagrasses5.
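Because each plant is infested with exactly six eggs, insect survival and plant damage scores summarize directly into antibiosis and tolerance estimates per genotype. A minimal sketch follows; the 1-5 damage scale and the replicate values are fabricated for illustration:

```python
def antibiosis_score(surviving_nymphs, infested_eggs=6):
    """Proportion of the six infesting spittlebugs that survived.
    Lower survival on a genotype suggests stronger antibiosis."""
    return surviving_nymphs / infested_eggs

def summarize_genotype(replicates):
    """Average survival proportion and damage score over replicate plants
    of one genotype. Each replicate: (surviving_nymphs, damage_score)."""
    n = len(replicates)
    mean_survival = sum(antibiosis_score(s) for s, _ in replicates) / n
    mean_damage = sum(d for _, d in replicates) / n
    return mean_survival, mean_damage

# Fabricated scores for a resistant genotype (damage on an assumed 1-5 scale):
survival, damage = summarize_genotype([(1, 2), (2, 1), (0, 2)])
```

Low mean survival points to antibiosis, while low damage despite surviving insects points to tolerance, which is how the no-choice design separates the two mechanisms.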
Plant Biology, Issue 52, host plant resistance, antibiosis, antixenosis, tolerance, Brachiaria, spittlebugs
Copyright © JoVE 2006-2015. All Rights Reserved.
Policies | License Agreement | ISSN 1940-087X

What is Visualize?

JoVE Visualize is a tool created to match the last 5 years of PubMed publications to methods in JoVE's video library.

How does it work?

We use abstracts found on PubMed and match them to JoVE videos to create a list of 10 to 30 related methods videos.
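JoVE does not disclose its matching algorithm; one generic way to rank videos against an abstract is bag-of-words cosine similarity, sketched below purely as an illustration (the titles, descriptions, and the technique itself are our assumptions):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_videos(abstract, video_descriptions, top_n=3):
    """Rank video descriptions by similarity to an abstract, most similar first."""
    scored = sorted(video_descriptions.items(),
                    key=lambda kv: cosine_similarity(abstract, kv[1]),
                    reverse=True)
    return [title for title, _ in scored[:top_n]]

abstract = "phonemic diversity and the origin of language"
videos = {
    "fTCD lateralization": "cerebral lateralization of language in children",
    "BRET assay": "protein protein interactions in live cells",
}
ranking = rank_videos(abstract, videos, top_n=2)
```

When no description shares meaningful vocabulary with the abstract, every score is near zero and the top-ranked items can be only loosely related, which matches the caveat described below.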

Video X seems to be unrelated to Abstract Y...

In developing our video relationships, we compare around 5 million PubMed articles to our library of over 4,500 methods videos. In some cases the language used in the PubMed abstracts makes matching that content to a JoVE video difficult. In other cases, there happens not to be any content in our video library that is relevant to the topic of a given abstract. In these cases, our algorithms do their best to display videos with relevant content, which can sometimes result in matches that are only loosely related.