Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing, or thinking about letters, words, and numbers. We describe our method for investigating the extent to which synesthetic associations between letters and colors can be learned by nonsynesthetes through reading in color. Reading in color is a special training method in that the associations are learned implicitly: the reader reads text as he or she normally would, and no explicit computer-directed training is required. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that certain individuals can form letter-color associations by reading in color, and that these associations resemble, in some respects, those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
How to Create and Use Binocular Rivalry
Institutions: New York University, Princeton University.
Each of our eyes normally sees a slightly different image of the world around us. The brain can combine these two images into a single coherent representation. However, when the eyes are presented with images that are sufficiently different from each other, an interesting thing happens: Rather than fusing the two images into a combined conscious percept, what transpires is a pattern of perceptual alternations where one image dominates awareness while the other is suppressed; dominance alternates between the two images, typically every few seconds. This perceptual phenomenon is known as binocular rivalry. Binocular rivalry is considered useful for studying perceptual selection and awareness in both human and animal models, because unchanging visual input to each eye leads to alternations in visual awareness and perception. To create a binocular rivalry stimulus, all that is necessary is to present each eye with a different image at the same perceived location. There are several ways of doing this, but newcomers to the field are often unsure which method would best suit their specific needs. The purpose of this article is to describe a number of inexpensive and straightforward ways to create and use binocular rivalry. We detail methods that do not require expensive specialized equipment and describe each method's advantages and disadvantages. The methods described include the use of red-blue goggles, mirror stereoscopes and prism goggles.
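The red-blue goggles method above can be sketched in a few lines: each eye's image occupies a separate color channel of one composite, so the goggles route a different image to each eye. The nested-list image format below is an illustrative assumption, not part of any particular stimulus package.

```python
# Minimal sketch of the red-blue goggles method: each eye's image occupies a
# separate color channel of one composite image. The nested-list image format
# is an illustrative assumption, not part of any stimulus package.

def make_anaglyph(left_img, right_img):
    """Return an RGB image: left eye's image in red, right eye's in blue."""
    return [
        [(l, 0, r) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left_img, right_img)
    ]

# Orthogonal gratings, a classic rivalry pair: vertical vs. horizontal stripes.
size = 8
vertical = [[255 if x % 2 == 0 else 0 for x in range(size)] for _ in range(size)]
horizontal = [[255 if y % 2 == 0 else 0 for _ in range(size)] for y in range(size)]
rivalry = make_anaglyph(vertical, horizontal)
print(rivalry[0][0])  # (255, 0, 255): both gratings bright at this pixel
```

Viewed through red-blue goggles, the red channel reaches only the eye behind the red filter and the blue channel only the other eye, so the two gratings are placed at the same perceived location and compete.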
Neuroscience, Issue 45, Binocular rivalry, continuous flash suppression, vision, visual awareness, perceptual competition, unconscious processing, neuroimaging
Measuring Sensitivity to Viewpoint Change with and without Stereoscopic Cues
Institutions: Australian National University, University of Western Australia, McGill University.
The speed and accuracy of object recognition are compromised by a change in viewpoint, demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth: the Wheatstone Eight Mirror Stereoscope. In doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint-rotated shapes and objects.
Behavior, Issue 82, stereo, curvature, shape, viewpoint, 3D, object recognition, rotation-in-depth (RID)
The Crossmodal Congruency Task as a Means to Obtain an Objective Behavioral Measure in the Rubber Hand Illusion Paradigm
Institutions: Macquarie University.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time then this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm.
The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e., same finger) on some trials and incongruent (i.e., different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand and by the synchrony of viewed and felt touch, which are both crucial factors for the RHI.
The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows in particular for the investigation of multisensory processes which are critical for modulations of body representations as in the RHI.
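As a rough illustration, the CCE described above reduces to a difference of mean reaction times between incongruent and congruent trials. The trial field names below are assumed for the sketch, not taken from any particular experiment toolbox.

```python
# Hypothetical sketch: computing a crossmodal congruency effect (CCE) from
# per-trial reaction times. Field names ("congruent", "rt_ms") are
# illustrative, not from the original protocol.

def crossmodal_congruency_effect(trials):
    """Mean RT on incongruent trials minus mean RT on congruent trials."""
    congruent = [t["rt_ms"] for t in trials if t["congruent"]]
    incongruent = [t["rt_ms"] for t in trials if not t["congruent"]]
    return sum(incongruent) / len(incongruent) - sum(congruent) / len(congruent)

trials = [
    {"congruent": True, "rt_ms": 480.0},
    {"congruent": True, "rt_ms": 500.0},
    {"congruent": False, "rt_ms": 560.0},
    {"congruent": False, "rt_ms": 580.0},
]
print(crossmodal_congruency_effect(trials))  # 80.0
```

A larger CCE under synchronous viewed and felt touch would then serve as the objective behavioral index of the illusion.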
Behavior, Issue 77, Neuroscience, Neurobiology, Medicine, Anatomy, Physiology, Psychology, Behavior and Behavior Mechanisms, Psychological Phenomena and Processes, Behavioral Sciences, rubber hand illusion, crossmodal congruency task, crossmodal congruency effect, multisensory processing, body ownership, peripersonal space, clinical techniques
MPI CyberMotion Simulator: Implementation of a Novel Motion Simulator to Investigate Multisensory Path Integration in Three Dimensions
Institutions: Max Planck Institute for Biological Cybernetics, Collège de France - CNRS, Korea University.
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point [1]. Humans can do path integration based exclusively on visual [2-3], auditory [4], or inertial cues [5]. However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate [6-7]. In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones [5]. Movement through physical space therefore does not seem to be accurately represented by the brain.
Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see [3] for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator [8-9] with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed.
Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen.
Observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone; human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle moved through in the horizontal and vertical planes, respectively, suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
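For intuition about what a correct response looks like in these two-segment trajectories, the sketch below computes the geometrically correct homing direction within the plane of movement. It is an illustration of the task geometry only, not the authors' analysis code.

```python
import math

# Illustrative sketch of the task geometry (not the authors' analysis code):
# the correct homing direction after two straight segments separated by a turn,
# as in the two-segment trajectories described above.

def homing_angle(seg1, seg2, turn_deg):
    """Angle (deg) the observer must turn from the final heading to face the origin.

    The observer moves seg1 metres along +x, turns by turn_deg,
    then moves seg2 metres along the new heading.
    """
    turn = math.radians(turn_deg)
    # Final position in the starting reference frame.
    x = seg1 + seg2 * math.cos(turn)
    y = seg2 * math.sin(turn)
    # Direction back to the origin, expressed relative to the final heading.
    deg = math.degrees(math.atan2(-y, -x)) - turn_deg
    return (deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

# Segment lengths from the protocol (0.4 m, then 1 m) with a 90 deg turn:
print(round(homing_angle(0.4, 1.0, 90.0), 1))  # 158.2
```

Systematic deviations of pointed directions from this geometric value, in different planes, are what the biases reported above quantify.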
Neuroscience, Issue 63, Motion simulator, multisensory integration, path integration, space perception, vestibular, vision, robotics, cybernetics
Combining Computer Game-Based Behavioural Experiments With High-Density EEG and Infrared Gaze Tracking
Institutions: Cornell University, University of Chicago, Manesar, India.
Experimental paradigms are valuable insofar as the timing and other parameters of their stimuli are well specified and controlled, and insofar as they yield data relevant to the cognitive processing that occurs under ecologically valid conditions. These two goals often are at odds, since well controlled stimuli often are too repetitive to sustain subjects' motivation. Studies employing electroencephalography (EEG) are often especially sensitive to this dilemma between ecological validity and experimental control: attaining sufficient signal-to-noise in physiological averages demands large numbers of repeated trials within lengthy recording sessions, limiting the subject pool to individuals with the ability and patience to perform a set task over and over again. This constraint severely limits researchers' ability to investigate younger populations as well as clinical populations associated with heightened anxiety or attentional abnormalities. Even adult, non-clinical subjects may not be able to achieve their typical levels of performance or cognitive engagement: an unmotivated subject for whom an experimental task is little more than a chore is not the same, behaviourally, cognitively, or neurally, as a subject who is intrinsically motivated and engaged with the task. A growing body of literature demonstrates that embedding experiments within video games may provide a way between the horns of this dilemma between experimental control and ecological validity. The narrative of a game provides a more realistic context in which tasks occur, enhancing their ecological validity (Chaytor & Schmitter-Edgecombe, 2003). Moreover, this context provides motivation to complete tasks. In our game, subjects perform various missions to collect resources, fend off pirates, intercept communications or facilitate diplomatic relations. 
In so doing, they also perform an array of cognitive tasks, including a Posner attention-shifting paradigm (Posner, 1980), a go/no-go test of motor inhibition, a psychophysical motion coherence threshold task, the Embedded Figures Test (Witkin, 1950, 1954) and a theory-of-mind task (Wimmer & Perner, 1983). The game software automatically registers game stimuli and subjects' actions and responses in a log file, and sends event codes to synchronise with physiological data recorders. Thus the game can be combined with physiological measures such as EEG or fMRI, and with moment-to-moment tracking of gaze. Gaze tracking can verify subjects' compliance with behavioural tasks (e.g. fixation) and overt attention to experimental stimuli, and can also index physiological arousal as reflected in pupil dilation (Bradley et al., 2008). At great enough sampling frequencies, gaze tracking may also help assess covert attention as reflected in microsaccades - eye movements that are too small to foveate a new object, but are as rapid in onset and have the same relationship between angular distance and peak velocity as do saccades that traverse greater distances. The distribution of directions of microsaccades correlates with the (otherwise) covert direction of attention (Hafed & Clark, 2002).
Neuroscience, Issue 46, High-density EEG, ERP, ICA, gaze tracking, computer game, ecological validity
Measuring Oral Fatty Acid Thresholds, Fat Perception, Fatty Food Liking, and Papillae Density in Humans
Institutions: Deakin University.
Emerging evidence from a number of laboratories indicates that humans have the ability to identify fatty acids in the oral cavity, presumably via fatty acid receptors housed on taste cells. Previous research has shown that an individual's oral sensitivity to fatty acid, specifically oleic acid (C18:1), is associated with body mass index (BMI), dietary fat consumption, and the ability to identify fat in foods. We have developed a reliable and reproducible method to assess oral chemoreception of fatty acids, using a milk and C18:1 emulsion together with an ascending forced-choice triangle procedure. In parallel, a food matrix has been developed to assess an individual's ability to perceive fat, in addition to a simple method to assess fatty food liking. As an added measure, tongue photography is used to assess papillae density, with higher density often being associated with increased taste sensitivity.
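The control logic of an ascending forced-choice triangle procedure can be sketched as follows. The concentration series and the stopping rule (three consecutive correct identifications at one level) are illustrative assumptions for the sketch, not the exact published parameters.

```python
# Hedged sketch of an ascending forced-choice triangle procedure:
# concentrations ascend until one level is correctly identified three times
# in a row. The concentration list and stopping rule are illustrative
# assumptions, not the exact published parameters.

def ascending_triangle_threshold(concentrations, responds_correctly):
    """Return the first concentration identified correctly 3 times in a row.

    concentrations: ascending C18:1 concentrations (mM).
    responds_correctly: callable(concentration) -> bool, one triangle
        (3-sample, odd-one-out) trial at that concentration.
    """
    for c in concentrations:
        correct_run = 0
        while correct_run < 3:
            if responds_correctly(c):
                correct_run += 1
            else:
                break  # any miss: move up to the next concentration
        if correct_run == 3:
            return c
    return None  # no threshold found within the tested range

# Simulated participant who reliably detects 3.8 mM and above.
levels = [0.02, 0.06, 1.0, 1.4, 2.0, 2.8, 3.8, 5.0, 6.4, 8.0, 12.0]
print(ascending_triangle_threshold(levels, lambda c: c >= 3.8))  # 3.8
```

In practice each trial presents three samples, only one containing the fatty acid, so responses below threshold are correct only at chance; the deterministic simulated participant here simply keeps the sketch reproducible.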
Neuroscience, Issue 88, taste, overweight and obesity, dietary fat, fatty acid, diet, fatty food liking, detection threshold
Transcranial Magnetic Stimulation for Investigating Causal Brain-behavioral Relationships and their Time Course
Institutions: University College London.
Transcranial magnetic stimulation (TMS) is a safe, non-invasive brain stimulation technique that uses a strong electromagnet in order to temporarily disrupt information processing in a brain region, generating a short-lived “virtual lesion.” Stimulation that interferes with task performance indicates that the affected brain region is necessary to perform the task normally. In other words, unlike neuroimaging methods such as functional magnetic resonance imaging (fMRI) that indicate correlations between brain and behavior, TMS can be used to demonstrate causal brain-behavior relations. Furthermore, by varying the duration and onset of the virtual lesion, TMS can also reveal the time course of normal processing. As a result, TMS has become an important tool in cognitive neuroscience. Advantages of the technique over lesion-deficit studies include better spatial-temporal precision of the disruption effect, the ability to use participants as their own control subjects, and the accessibility of participants. Limitations include concurrent auditory and somatosensory stimulation that may influence task performance, limited access to structures more than a few centimeters from the surface of the scalp, and the relatively large space of free parameters that need to be optimized in order for the experiment to work. Experimental designs that give careful consideration to appropriate control conditions help to address these concerns. This article illustrates these issues with TMS results that investigate the spatial and temporal contributions of the left supramarginal gyrus (SMG) to reading.
Behavior, Issue 89, Transcranial magnetic stimulation, virtual lesion, chronometric, cognition, brain, behavior
A Neuroscientific Approach to the Examination of Concussions in Student-Athletes
Institutions: Elon University, Duquesne University.
Concussions are occurring at alarming rates in the United States and have become a serious public health concern. The CDC estimates that 1.6 to 3.8 million concussions occur in sports and recreational activities annually. Concussion as defined by the 2013 Concussion Consensus Statement “may be caused either by a direct blow to the head, face, neck or elsewhere on the body with an ‘impulsive’ force transmitted to the head.” Concussions leave the individual with both short- and long-term effects. The short-term effects of sport-related concussions may include changes in playing ability, confusion, memory disturbance, the loss of consciousness, slowing of reaction time, loss of coordination, headaches, dizziness, vomiting, changes in sleep patterns and mood changes. These symptoms typically resolve in a matter of days. However, while some individuals recover from a single concussion rather quickly, many experience lingering effects that can last for weeks or months. The factors related to concussion susceptibility and the subsequent recovery times are not well known or understood at this time. Several factors have been suggested, including the individual’s concussion history, the severity of the initial injury, history of migraines, history of learning disabilities, history of psychiatric comorbidities, and possibly, genetic factors. Many studies have individually investigated certain of these factors, the short- and long-term effects of concussions, recovery time course, and susceptibility. What has not been clearly established is an effective multifaceted approach to concussion evaluation that would yield valuable information related to the etiology, functional changes, and recovery. The purpose of this manuscript is to present one such multifaceted approach, which examines concussions using computerized neurocognitive testing, event-related potentials, somatosensory perceptual responses, balance assessment, gait assessment and genetic testing.
Medicine, Issue 94, Concussions, Student-Athletes, Mild Traumatic Brain Injury, Genetics, Cognitive Function, Balance, Gait, Somatosensory
Correlating Behavioral Responses to fMRI Signals from Human Prefrontal Cortex: Examining Cognitive Processes Using Task Analysis
Institutions: Centre for Vision Research, York University.
The aim of this methods paper is to describe how to implement a neuroimaging technique to examine complementary brain processes engaged by two similar tasks. Participants' behavior during task performance in an fMRI scanner can then be correlated with brain activity using the blood-oxygen-level-dependent (BOLD) signal. We measure behavior in order to sort out correct trials, in which the subject performed the task correctly, and then examine the brain signals related to correct performance. Conversely, if trials on which subjects did not perform the task correctly were included in the same analysis as the correct trials, the results would no longer reflect correct performance alone. Moreover, in many cases these error trials can themselves be used to correlate brain activity with errors. We describe two complementary tasks that are used in our lab to examine the brain during suppression of an automatic response: the Stroop [1] and anti-saccade tasks. The emotional Stroop paradigm instructs participants to report either the superimposed emotional 'word' across the affective faces or the facial 'expressions' of the face stimuli [1,2]. When the word and the facial expression refer to different emotions, a conflict arises between what must be said and what is automatically read. The participant has to resolve the conflict between the two simultaneously competing processes of word reading and facial expression recognition. Our urge to read out a word leads to strong stimulus-response (SR) associations; inhibiting these strong SRs is therefore difficult, and participants are prone to making errors. Overcoming this conflict and directing attention away from the face or the word requires the subject to inhibit bottom-up processes, which typically direct attention to the more salient stimulus. Similarly, in the anti-saccade task [3,4,5,6], an instruction cue directs attention to a peripheral stimulus location, but the eye movement must be made to the mirror-opposite position. Here again we measure behavior by recording the eye movements of participants, which allows the behavioral responses to be sorted into correct and error trials [7] that can then be correlated with brain activity. Neuroimaging thus allows researchers to measure the different behaviors of correct and error trials that are indicative of different cognitive processes and to pinpoint the different neural networks involved.
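The trial-sorting step described above amounts to splitting behavioral records into correct and error events before they are modeled against the BOLD signal. The field names in this sketch are assumed for illustration.

```python
# Illustrative sketch of the trial-sorting step: behavioral responses split
# trials into correct and error events before correlation with the BOLD
# signal. Field names ("onset_s", "required", "response") are assumed.

def sort_trials(trials):
    """Split trials into (correct, error) lists by comparing response to target."""
    correct = [t for t in trials if t["response"] == t["required"]]
    errors = [t for t in trials if t["response"] != t["required"]]
    return correct, errors

trials = [
    {"onset_s": 2.0, "required": "left", "response": "left"},
    {"onset_s": 8.0, "required": "right", "response": "left"},   # anti-saccade error
    {"onset_s": 14.0, "required": "right", "response": "right"},
]
correct, errors = sort_trials(trials)
print(len(correct), len(errors))  # 2 1
```

The two onset lists would then define separate regressors in the event-related analysis, so correct and error activity can be examined independently.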
Neuroscience, Issue 64, fMRI, eyetracking, BOLD, attention, inhibition, Magnetic Resonance Imaging, MRI
Cortical Source Analysis of High-Density EEG Recordings in Children
Institutions: UCL Institute of Child Health, University College London.
EEG is traditionally described as a neuroimaging technique with high temporal and low spatial resolution. Recent advances in biophysical modelling and signal processing make it possible to exploit information from other imaging modalities, like structural MRI, that provide high spatial resolution to overcome this constraint [1]. This is especially useful for investigations that require high resolution in the temporal as well as spatial domain. In addition, due to the easy application and low cost of EEG recordings, EEG is often the method of choice when working with populations, such as young children, that do not tolerate functional MRI scans well. However, in order to investigate which neural substrates are involved, anatomical information from structural MRI is still needed. Most EEG analysis packages work with standard head models that are based on adult anatomy. The accuracy of these models when used for children is limited [2], because the composition and spatial configuration of head tissues changes dramatically over development [3].
In the present paper, we provide an overview of our recent work in utilizing head models based on individual structural MRI scans or age-specific head models to reconstruct the cortical generators of high-density EEG. This article describes how EEG recordings are acquired, processed, and analyzed with pediatric populations at the London Baby Lab, including laboratory setup, task design, EEG preprocessing, MRI processing, and EEG channel-level and source analysis.
Behavior, Issue 88, EEG, electroencephalogram, development, source analysis, pediatric, minimum-norm estimation, cognitive neuroscience, event-related potentials
From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data
Institutions: Lawrence Berkeley National Laboratory.
Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles, in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation.
The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets.
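The triage idea described above can be sketched as a simple decision function over objective data-set characteristics. The specific rules and thresholds below are illustrative assumptions for the sketch, not the authors' published criteria.

```python
# Hedged sketch of the triage idea: choose one of the four categorical
# segmentation approaches from objective data-set characteristics. The
# rules and thresholds are illustrative assumptions, not published criteria.

def triage(snr, has_characteristic_shapes, roi_fraction):
    """Suggest one of the four categorical approaches described above."""
    if snr < 2.0 and not has_characteristic_shapes:
        return "manual model building"
    if snr < 2.0:
        return "manual tracing + surface rendering"
    if has_characteristic_shapes and roi_fraction < 0.1:
        return "automated custom algorithm + quantitative analysis"
    return "semi-automated + surface rendering"

# A crisp, high-SNR volume with easily identified shapes in a small ROI:
print(triage(5.0, True, 0.05))  # automated custom algorithm + quantitative analysis
```

The point of the sketch is only the shape of the scheme: objective characteristics gate which of the four approaches is tried first, while subjective criteria may still override the suggestion.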
Bioengineering, Issue 90, 3D electron microscopy, feature extraction, segmentation, image analysis, reconstruction, manual tracing, thresholding
Measuring Attentional Biases for Threat in Children and Adults
Institutions: Rutgers University.
Investigators have long been interested in the human propensity for the rapid detection of threatening stimuli. However, until recently, research in this domain has focused almost exclusively on adult participants, completely ignoring the topic of threat detection over the course of development. One of the biggest reasons for the lack of developmental work in this area is likely the absence of a reliable paradigm that can measure perceptual biases for threat in children. To address this issue, we recently designed a modified visual search paradigm similar to the standard adult paradigm that is appropriate for studying threat detection in preschool-aged participants. Here we describe this new procedure. In the general paradigm, we present participants with matrices of color photographs, and ask them to find and touch a target on the screen. Latency to touch the target is recorded. Using a touch-screen monitor makes the procedure simple and easy, allowing us to collect data in participants ranging from 3 years of age to adults. Thus far, the paradigm has consistently shown that both adults and children detect threatening stimuli (e.g., snakes, spiders, angry/fearful faces) more quickly than neutral stimuli (e.g., flowers, mushrooms, happy/neutral faces). Altogether, this procedure provides an important new tool for researchers interested in studying the development of attentional biases for threat.
Behavior, Issue 92, Detection, threat, attention, attentional bias, anxiety, visual search
Eye Tracking, Cortisol, and a Sleep vs. Wake Consolidation Delay: Combining Methods to Uncover an Interactive Effect of Sleep and Cortisol on Memory
Institutions: Boston College, Wofford College, University of Notre Dame.
Although rises in cortisol can benefit memory consolidation, as can sleep soon after encoding, there is currently a paucity of literature as to how these two factors may interact to influence consolidation. Here we present a protocol to examine the interactive influence of cortisol and sleep on memory consolidation, by combining three methods: eye tracking, salivary cortisol analysis, and behavioral memory testing across sleep and wake delays. To assess resting cortisol levels, participants gave a saliva sample before viewing negative and neutral objects within scenes. To measure overt attention, participants’ eye gaze was tracked during encoding. To manipulate whether sleep occurred during the consolidation window, participants either encoded scenes in the evening, slept overnight, and took a recognition test the next morning, or encoded scenes in the morning and remained awake during a comparably long retention interval. Additional control groups were tested after a 20 min delay in the morning or evening, to control for time-of-day effects. Together, results showed that there is a direct relation between resting cortisol at encoding and subsequent memory, only following a period of sleep. Through eye tracking, it was further determined that for negative stimuli, this beneficial effect of cortisol on subsequent memory may be due to cortisol strengthening the relation between where participants look during encoding and what they are later able to remember. Overall, results obtained by a combination of these methods uncovered an interactive effect of sleep and cortisol on memory consolidation.
Behavior, Issue 88, attention, consolidation, cortisol, emotion, encoding, glucocorticoids, memory, sleep, stress
Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues
Institutions: University of Zurich.
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3,4,5,6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined, following Mori's description, as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations are needed of category processing and CP along the DHL, and of the potential influence of the dimension's underlying category structure on affective experience. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL, in order to disentangle brain regions neurally responsive to physical humanlike similarity from those responsive to category change and category processing, is briefly illustrated.
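A morph continuum of the kind used to represent the DHL is, at its core, a linear blend between two endpoint stimuli in equal physical steps. The flat pixel-list representation below is an assumed simplification, not the morphing software used for such stimuli.

```python
# Illustrative sketch: a morph continuum as a stand-in for the DHL, blending
# a "robot" endpoint into a "human" endpoint in equal physical steps. Flat
# pixel lists are an assumed simplification, not real morphing software.

def morph(robot_img, human_img, alpha):
    """Linear blend of two same-size images: alpha=0 -> robot, alpha=1 -> human."""
    return [(1 - alpha) * r + alpha * h for r, h in zip(robot_img, human_img)]

# An 11-step continuum from a dark "robot" pixel to a bright "human" pixel.
continuum = [morph([0.0], [255.0], step / 10) for step in range(11)]
print([round(img[0], 1) for img in continuum[:3]])  # [0.0, 25.5, 51.0]
```

The crucial point for CP research is that although the physical change per step is constant by construction, categorical perception predicts that the perceived change across the category boundary is not.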
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques
Methods to Explore the Influence of Top-down Visual Processes on Motor Behavior
Institutions: Rutgers University.
Kinesthetic awareness is important to successfully navigate the environment. When we interact with our daily surroundings, some aspects of movement are deliberately planned, while others spontaneously occur below conscious awareness. The deliberate component of this dichotomy has been studied extensively in several contexts, while the spontaneous component remains largely under-explored. Moreover, how perceptual processes modulate these movement classes is still unclear. In particular, a currently debated issue is whether the visuomotor system is governed by the spatial percept produced by a visual illusion or whether it is not affected by the illusion and is governed instead by the veridical percept. Bistable percepts such as 3D depth inversion illusions (DIIs) provide an excellent context to study such interactions and balance, particularly when used in combination with reach-to-grasp movements. In this study, a methodology is developed that uses a DII to clarify the role of top-down processes on motor action, particularly exploring how reaches toward a target on a DII are affected in both deliberate and spontaneous movement domains.
Behavior, Issue 86, vision for action, vision for perception, motor control, reach, grasp, visuomotor, ventral stream, dorsal stream, illusion, space perception, depth inversion
Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind
Institutions: Massachusetts Eye and Ear Infirmary, Harvard Medical School, University of Chile.
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real world navigation skills in the blind. Using only audio based cues and set within the context of a video game metaphor, users gather relevant spatial information regarding a building's layout. This allows the user to develop an accurate spatial cognitive map of a large-scale three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
Medicine, Issue 73, Behavior, Neuroscience, Anatomy, Physiology, Neurobiology, Ophthalmology, Psychology, Behavior and Behavior Mechanisms, Technology, Industry, virtual environments, action video games, blind, audio, rehabilitation, indoor navigation, spatial cognitive map, Audio-based Environment Simulator, virtual reality, cognitive psychology, clinical techniques
A Standardized Obstacle Course for Assessment of Visual Function in Ultra Low Vision and Artificial Vision
Institutions: University of Pittsburgh, University of Pittsburgh.
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Six sighted controls and 36 completely blind but otherwise healthy adult male (n=29) and female (n=13) subjects (age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percentage preferred walking speed) and percent error data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcomes of percent error and percentage preferred walking speed, each of the three courses is distinct, while within each level the three iterations are equivalent. This allows for randomization of the courses during administration.
Abbreviations: preferred walking speed (PWS), course speed (CS), percentage preferred walking speed (PPWS)
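The PPWS outcome expresses course speed as a percentage of the subject's preferred walking speed, so values near 100% mean the course barely slowed the subject. A minimal sketch of this computation, with illustrative speeds that are not taken from the study data:

```python
# Sketch: percentage preferred walking speed (PPWS) for one course run.
# PPWS = CS / PWS * 100, where CS is the speed achieved on the obstacle
# course and PWS is the preferred speed measured on an unobstructed walkway.

def ppws(course_speed_m_s: float, preferred_speed_m_s: float) -> float:
    """Percentage preferred walking speed: CS / PWS * 100."""
    if preferred_speed_m_s <= 0:
        raise ValueError("preferred walking speed must be positive")
    return 100.0 * course_speed_m_s / preferred_speed_m_s

# Illustrative example: 0.6 m/s on the course vs. a 1.2 m/s preferred speed
print(ppws(0.6, 1.2))  # → 50.0
```

Normalizing by each subject's own preferred speed is what makes runs comparable across subjects with very different baseline mobility.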
Medicine, Issue 84, Obstacle course, navigation assessment, BrainPort, wayfinding, low vision
Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults
Institutions: Emory University School of Medicine, Brigham and Women's Hospital and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and in other populations with balance impairments. It is composed of very simple step elements and involves movement initiation and cessation, multi-directional perturbations, and varied speeds and rhythms. Its focus on foot placement, whole-body coordination, and attention to partner, path of movement, and aesthetics likely underlies adapted tango's demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology for disseminating the adapted tango teaching methods to dance instructor trainees and for the trainees to implement adapted tango in the community for older adults and individuals with Parkinson's Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, tandem stance, Berg Balance Scale, gait speed, and 30 sec chair stand), along with safety and fidelity of the program, is maximized through targeted instructor and volunteer training and a structured, detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls
An Affordable HIV-1 Drug Resistance Monitoring Method for Resource Limited Settings
Institutions: University of KwaZulu-Natal, Durban, South Africa, Jembi Health Systems, University of Amsterdam, Stanford Medical School.
HIV-1 drug resistance has the potential to seriously compromise the effectiveness and impact of antiretroviral therapy (ART). As ART programs in sub-Saharan Africa continue to expand, individuals on ART should be closely monitored for the emergence of drug resistance. Surveillance of transmitted drug resistance, to track transmission of viral strains already resistant to ART, is also critical. Unfortunately, drug resistance testing is still not readily accessible in resource-limited settings, because genotyping is expensive and requires sophisticated laboratory and data management infrastructure. An open-access genotypic drug resistance monitoring method to manage individuals and assess transmitted drug resistance is described. The method uses free open source software for the interpretation of drug resistance patterns and the generation of individual patient reports. The genotyping protocol has an amplification rate of greater than 95% for plasma samples with a viral load >1,000 HIV-1 RNA copies/ml; the sensitivity decreases significantly for viral loads <1,000 HIV-1 RNA copies/ml. The method described here was validated against a method of HIV-1 drug resistance testing approved by the United States Food and Drug Administration (FDA), the Viroseq genotyping method. Limitations of the method include that it is not automated and that it failed to amplify the circulating recombinant form CRF02_AG from a validation panel of samples, although it amplified subtypes A and B from the same panel.
Medicine, Issue 85, Biomedical Technology, HIV-1, HIV Infections, Viremia, Nucleic Acids, genetics, antiretroviral therapy, drug resistance, genotyping, affordable
Cross-Modal Multivariate Pattern Analysis
Institutions: University of Southern California.
Multivariate pattern analysis (MVPA) is an increasingly popular method of analyzing functional magnetic resonance imaging (fMRI) data 1-4. Typically, the method is used to identify a subject's perceptual experience from neural activity in certain regions of the brain. For instance, it has been employed to predict the orientation of visual gratings a subject perceives from activity in early visual cortices 5 or, analogously, the content of speech from activity in early auditory cortices 6.
Here, we present an extension of the classical MVPA paradigm, according to which perceptual stimuli are not predicted within, but across sensory systems. Specifically, the method we describe addresses the question of whether stimuli that evoke memory associations in modalities other than the one through which they are presented induce content-specific activity patterns in the sensory cortices of those other modalities. For instance, seeing a muted video clip of a glass vase shattering on the ground automatically triggers in most observers an auditory image of the associated sound; is the experience of this image in the "mind's ear" correlated with a specific neural activity pattern in early auditory cortices? Furthermore, is this activity pattern distinct from the pattern that could be observed if the subject were, instead, watching a video clip of a howling dog?
In two previous studies 7,8, we were able to predict sound- and touch-implying video clips based on neural activity in early auditory and somatosensory cortices, respectively. Our results are in line with a neuroarchitectural framework proposed by Damasio 9,10, according to which the experience of mental images that are based on memories - such as hearing the shattering sound of a vase in the "mind's ear" upon seeing the corresponding video clip - is supported by the re-construction of content-specific neural activity patterns in early sensory cortices.
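The core of such an MVPA analysis is training a classifier on multi-voxel activity patterns and testing whether stimulus category can be decoded above chance. The sketch below illustrates the decoding step with scikit-learn on synthetic "voxel" data; the trial counts, signal strength, and labels are invented for illustration and do not reproduce the cited studies' pipelines.

```python
# Sketch: decoding stimulus category from multi-voxel patterns with a
# linear classifier, as in MVPA. Synthetic patterns stand in for early
# auditory cortex activity recorded while subjects watch muted
# sound-implying clips; labels are the implied sound category
# (0 = shattering glass, 1 = howling dog) -- all values hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)

# Each category evokes a weak but consistent spatial pattern plus noise
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1] += 0.5  # hypothetical category-specific signal

# Cross-validated accuracy above chance (0.50) suggests content-specific
# activity in the (here simulated) sensory region
scores = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In the cross-modal variant, the decisive point is that the patterns come from a cortex in a modality other than the one through which the stimulus was presented (e.g., auditory cortex during muted video viewing).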
Neuroscience, Issue 57, perception, sensory, cross-modal, top-down, mental imagery, fMRI, MRI, neuroimaging, multivariate pattern analysis, MVPA