Spatial processing resolution of a particular object in the visual field can differ considerably due to eye movements. The same object will be represented with high acuity in the fovea but only coarsely in the periphery. Herwig and Schneider (in press) proposed that the visual system counteracts such resolution differences by predicting, based on previous experience, how foveal objects will look in the periphery and vice versa. They demonstrated that previously learned transsaccadic associations between peripheral and foveal object information facilitate performance in visual search, irrespective of the correctness of these associations. False associations were learned by replacing the presaccadic object with a slightly different object during the saccade. Importantly, participants usually did not notice this object change. This raises the question of whether perception of object continuity is a critical factor in building transsaccadic associations. We disturbed object continuity during learning with a postsaccadic blank or a task-irrelevant shape change. Interestingly, visual search performance revealed that neither disruption of temporal object continuity (blank) nor disruption of spatial object continuity (shape change) impaired transsaccadic learning. Thus, transsaccadic learning seems to be a very robust default mechanism of the visual system that is probably related to the more general concept of action-effect learning.
In recent years, researchers have become increasingly interested in the effects that deviations from expectations have on cognitive processing and, in particular, on the deployment of attention. Previous evidence for a surprise-attention link had been based on indirect measures of attention allocation. Here we used eyetracking to directly observe the impact of a novel color on its unannounced first presentation, which we regarded as a surprise condition. The results show that the novel color was quickly responded to with an eye movement, and that gaze was not turned away for a considerable amount of time. These results are direct evidence that deviations from expectations bias attentional priorities and lead to enhanced processing of the deviating stimulus.
When we move our eyes, we process objects in the visual field with different spatial resolution due to the nonhomogeneity of our visual system. In particular, peripheral objects are only coarsely represented, whereas they are represented with high acuity when foveated. To keep track of visual features of objects across eye movements, these changes in spatial resolution have to be taken into account. Here, we develop and test a new framework proposing a visual feature prediction mechanism based on past experience to deal with changes in spatial resolution accompanying saccadic eye movements. In 3 experiments, we first exposed participants to an altered visual stimulation where, unnoticed by participants, 1 object systematically changed visual features during saccades. Experiments 1 and 2 then demonstrate that feature prediction during peripheral object recognition is biased toward previously associated postsaccadic foveal input and that this effect is particularly associated with making saccades. Moreover, Experiment 3 shows that during visual search, feature prediction is biased toward previously associated presaccadic peripheral input. Together, these findings demonstrate that the visual system uses past experience to predict how peripheral objects will look in the fovea, and what foveal search templates should look like in the periphery. As such, they support our framework based on ideomotor theory and shed new light on the mystery of why we are most of the time unaware of acuity limitations in the periphery and of our ability to locate relevant objects in the periphery.
Recognition of a second target (T2) can be impaired if it is presented within 500 ms after a first target (T1): This interference phenomenon is called the attentional blink (AB; e.g., Raymond, Shapiro, & Arnell, 1992) and can be viewed as emerging from limitations in the allocation of visual attention (VA) over time. AB tasks typically require participants to detect or identify targets based on their visual properties, i.e., pattern recognition. However, no study so far has investigated whether an AB for pattern recognition of T2 can be elicited if T1 involves a second major function of the visual system, i.e., spatial computations. Therefore, we tested in two experiments whether localization of a peripherally presented dot (T1) interferes with the identification of a trailing, centrally presented letter (T2). In Experiment 1, T2 performance increased with the onset asynchrony of the two targets in both single-task (only report letter) and dual-task conditions. Beyond this task-independent T2 deficit, task-dependent interference (the difference between single- and dual-task conditions) was observed in Experiment 2, when T1 was followed by location distractors. Overall, our results indicate that limitations in the allocation of VA over time (i.e., an AB) can also be found if T1 requires localization while T2 requires the standard pattern recognition task. The results are interpreted on the basis of a common temporal attentional mechanism for pattern recognition and spatial computations.
The immediate experience of self-agency, that is, the experience of generating and controlling our actions, is thought to be a key aspect of selfhood. It has been suggested that this experience is intimately linked to internal motor signals associated with the ongoing actions. These signals should lead to an attenuation of the sensory consequences of one's own actions and thereby allow classifying them as self-generated. The discovery of shared representations of actions between self and other, however, challenges this idea and suggests similar attenuation of one's own and others' sensory action effects. Here, we tested these assumptions by comparing sensory attenuation of self-generated and observed sensory effects. More specifically, we compared the loudness perception of sounds that were generated by oneself, by another person, or by a computer. In two experiments, we found a reduced perception of loudness intensity specifically related to self-generation. Furthermore, the perception of sounds generated by another person and by a computer did not differ from each other. These findings indicate that one's own agentive influence upon the outside world has a special perceptual quality which distinguishes it from any sort of external influence, including human and non-human sources. This suggests that a real sense of self-agency is not a socially shared but rather a unique and private experience.
The experience of oneself as an agent not only results from interactions with the inanimate environment, but often takes place in a social context. Interactions with other people have been suggested to play a key role in the construal of self-agency. Here, we investigated the influence of social interactions on sensory attenuation of action effects as a marker of pre-reflective self-agency. To this end, we compared the attenuation of the perceived loudness intensity of auditory action effects generated either by oneself or another person in either an individual, non-interactive or interactive action context. In line with previous research, the perceived loudness of self-generated sounds was attenuated compared to sounds generated by another person. Most importantly, this effect was strongly modulated by social interactions between self and other. Sensory attenuation of self- and other-generated sounds was increased in interactive as compared to the respective individual action contexts. This is the first experimental evidence suggesting that pre-reflective self-agency can extend to and is shaped by interactions between individuals.
We move our eyes not only to gather information, but also to supply information to others. The latter eye movements can be considered goal-directed actions intended to elicit changes in our interaction partners. In two eye-tracking experiments, participants looked at neutral faces that changed facial expression 100 ms after the gaze fell upon them. We show that participants anticipate a change in facial expression and direct their first saccade more often to the mouth region of a neutral face about to change into a happy one, and to the eyebrows region of a neutral face about to change into an angry expression. Moreover, saccades in response to facial expressions are initiated more quickly to the position where the expression was previously triggered. Saccade-effect associations are easily acquired and are used to guide the eyes if participants freely select where to look next (Experiment 1), but not if saccades are triggered by external stimuli (Experiment 2).
Recent work indicates that covert visual attention and eye movements on the one hand, and covert visual attention and visual working memory on the other hand, are closely interrelated. Two experiments address the question of whether all three processes draw on the same spatial representations. Participants had to memorize a target location for a subsequent memory-guided saccade. During the memory interval, task-irrelevant distractors were briefly flashed on some trials, either near or remote to the memory target. Results showed that the previously flashed distractors attracted the saccade's landing position. However, attraction was only found if the distractor was presented within a sector of +/-20 degrees around the target axis, but not if the distractor was presented outside this sector. This effect strongly resembles the global effect, in which saccades are directed to intermediate locations between a target and a simultaneously presented neighboring distractor stimulus. It is argued that covert visual attention, eye movements, and visual working memory recruit the same spatial mechanisms, which can probably be ascribed to attentional priority maps.
In tool use, the intended external goals have to be transformed into bodily movements by taking into account the target-to-movement mapping implemented by the tool. In bimanual tool use, this mapping may depend on the part of the tool that is operated and the effector used (e.g. the left and right hand at the handlebars moving in opposite directions in order to generate the same bicycle movement). In our study, we investigated whether participants represent the behaviour of the tool or only the effector-specific mapping when using two-handed tools. In three experiments, participants touched target locations with a two-jointed lever, using either the left or the right hand. In one condition, the joint of the lever was constant and switching between hands was associated with switching the target-to-movement mapping, whereas in another condition, switching between hands was associated with switching the joint, but the target-to-movement mapping remained constant. Results indicate pronounced costs of switching hands in the condition with a constant joint, whereas costs were smaller with a constant target-to-movement mapping. These results suggest that participants have tool-independent representations of the effector-specific mappings.
Human actions may be carried out in response to exogenous stimuli (stimulus based) or they may be selected endogenously on the basis of the agent's intentions (intention based). We studied the functional differences between these two types of action during action-effect (ideomotor) learning. Participants underwent an acquisition phase, in which each key-press (left/right) triggered a specific tone (low pitch/high pitch) in either a stimulus-based or an intention-based action mode. Consistent with previous findings, we demonstrate that auditory action effects gain the ability to prime their associated responses in a later test phase only if the actions were selected endogenously during the acquisition phase. Furthermore, we show that this difference in ideomotor learning is not due to different attentional demands for stimulus-based and intention-based actions. Our results suggest that ideomotor learning depends on whether or not the action is selected in the intention-based action mode, whereas the amount of attention devoted to the action effect is less important.
According to ideomotor theory, action-effect associations are crucial for voluntary action control. Recently, a number of studies have begun to investigate the conditions that mediate the acquisition and application of action-effect associations by comparing actions carried out in response to exogenous stimuli (stimulus-based) with actions selected endogenously (intention-based). There is evidence that the acquisition and/or application of action-effect associations is boosted when acting in an intention-based action mode. For instance, bidirectional action-effect associations were diagnosed in a forced-choice test phase if participants had previously experienced action-effect couplings in an intention-based but not in a stimulus-based action mode. The present study aims at investigating the effects of the action mode on action-effect associations in more detail. In a series of experiments, we compared the strength and durability of short-term action-effect associations (binding) immediately following intention-based as well as stimulus-based actions. Moreover, long-term action-effect associations (learning) were assessed in a subsequent test phase. Our results show short-term action-effect associations of equal strength and durability for both action modes. However, replicating previous results, long-term associations were observed only following intention-based actions. These findings indicate that the effect of the action mode on long-term associations cannot merely be a result of accumulated short-term action-effect bindings. Instead, only those episodic bindings that integrate action-relevant aspects of the processing event are selectively perpetuated and retrieved, i.e., in the case of intention-based actions, the link between action and ensuing effect.